Table of contents
ABSTRACT
RÉSUMÉ
ACKNOWLEDGMENTS
1 INTRODUCTION
1.1 Background and General Aim
1.2 Probabilistic Models: from Recognition to Synthesis
1.3 Mapping by Demonstration: Concept and Contributions
1.4 Outline of the Dissertation
2 BACKGROUND & RELATED WORK
2.1 Motion-Sound Mapping: from Wires to Models
2.1.1 Explicit Mapping
2.1.2 Mapping with Physical Models
2.1.3 Towards Mapping by Example: Pointwise Maps and Geometrical Properties
2.1.4 Software for Mapping Design
2.2 Mapping with Machine Learning
2.2.1 An Aside on Computer Vision
2.2.2 Discrete Gesture Recognition
2.2.3 Continuous Gesture Recognition and Temporal Mapping
2.2.4 Multilevel Temporal Models
2.2.5 Mapping through Regression
2.2.6 Software for Mapping Design with Machine Learning
2.3 Interactive Machine Learning
2.3.1 Interactive Machine Learning
2.3.2 Programming by Demonstration
2.3.3 Interactive Machine Learning in Computer Music
2.4 Closing the Action-Perception Loop
2.4.1 Listening in Action
2.4.2 Generalizing Motion-Sound Mapping
2.4.3 Motivating Mapping-by-Demonstration
2.5 Statistical Synthesis and Mapping in Multimedia
2.5.1 The Case of Speech: from Recognition to Synthesis
2.5.2 Cross-Modal Mapping from/to Speech
2.5.3 Movement Generation and Robotics
2.5.4 Discussion
2.6 Summary
3 MAPPING BY DEMONSTRATION
3.1 Motivation
3.2 Definition and Overview
3.2.1 Definition
3.2.2 Overview
3.3 Architecture and Desirable Properties
3.3.1 Architecture
3.3.2 Requirements for Interactive Machine Learning
3.3.3 Properties of the Parameter Mapping
3.3.4 Proposed Models
3.4 The Notion of Correspondence
3.5 Summary and Contributions
4 PROBABILISTIC MOVEMENT MODELS
4.1 Movement Modeling using Gaussian Mixture Models
4.1.1 Representation
4.1.2 Learning
4.1.3 Number of Components and Model Selection
4.1.4 User-adaptable Regularization
4.2 Designing Sonic Interactions with GMMs
4.2.1 The Scratching Application
4.2.2 From Classification to Continuous Recognition
4.2.3 User-Defined Regularization
4.2.4 Discussion
4.3 Movement Modeling using Hidden Markov Models
4.3.1 Representation
4.3.2 Inference
4.3.3 Learning
4.3.4 Number of States and Model Selection
4.3.5 User-Defined Regularization
4.4 Temporal Recognition and Mapping with HMMs
4.4.1 Temporal Mapping
4.4.2 User-Defined Regularization and Complexity
4.4.3 Classification and Continuous Recognition
4.4.4 Discussion
4.5 Segment-level Modeling with Hierarchical Hidden Markov Models
4.5.1 Representation
4.5.2 Topology
4.5.3 Inference
4.5.4 Learning
4.5.5 Discussion
4.6 Segment-level Mapping with the HHMM
4.6.1 Improving Online Gesture Segmentation and Recognition
4.6.2 A Four-Phase Representation of Musical Gestures
4.6.3 Sound Control Strategies with Segment-level Mapping
4.7 Summary and Contributions
5 ANALYZING (WITH) PROBABILISTIC MODELS: A USE-CASE IN TAI CHI PERFORMANCE
5.1 Tai Chi Movement Dataset
5.1.1 Tasks and Participants
5.1.2 Movement Capture
5.1.3 Segmentation and Annotation
5.2 Analyzing with Probabilistic Models: A Methodology for HMM-based Movement Analysis
5.2.1 Tracking Temporal Variations
5.2.2 Tracking Dynamic Variations
5.2.3 Discussion
5.3 Evaluating Continuous Gesture Recognition and Alignment
5.3.1 Protocol
5.3.2 EvaluationMetrics
5.3.3 ComparedModels
5.4 Getting the Right Model: Comparing Models for Recognition
5.4.1 Continuous Recognition
5.4.2 Types of Errors
5.4.3 Comparing Models for Continuous Alignment
5.4.4 Gesture Spotting: Models and Strategies
5.5 Getting the Model Right: Analyzing Probabilistic Models’ Parameters
5.5.1 Types of Reference Segmentation
5.5.2 Model Complexity: Number of Hidden States and Gaussian Components
5.5.3 Regularization
5.6 Summary and Contributions
6 PROBABILISTIC MODELS FOR SOUND PARAMETER GENERATION
6.1 Gaussian Mixture Regression
6.1.1 Representation and Learning
6.1.2 Regression
6.1.3 Number of Components
6.1.4 Regularization
6.1.5 Discussion
6.2 Hidden Markov Regression
6.2.1 Representation and Learning
6.2.2 Inference and Regression
6.2.3 Example
6.2.4 Number of States and Regularization
6.2.5 Hierarchical Hidden Markov Regression
6.3 Illustrative Example: Gesture-based Control of Physical Modeling Sound Synthesis
6.3.1 Motion Capture and Sound Synthesis
6.3.2 Interaction Flow
6.4 Discussion
6.4.1 Estimating Output Covariances
6.4.2 Strategies for Regression with Multiple Classes
6.5 Summary and Contributions
7 MOVEMENT AND SOUND ANALYSIS USING HIDDEN MARKOV REGRESSION
7.1 Methodology
7.1.1 Dataset
7.1.2 Time-based Hierarchical HMR
7.1.3 Variance Estimation
7.2 Average Performance and Variations
7.2.1 Consistency and Estimated Variances
7.2.2 Level of Detail
7.3 Synthesizing Personal Variations
7.3.1 Method
7.3.2 Results
7.4 Vocalizing Movement
7.4.1 Synthesizing Sound Descriptor Trajectories
7.4.2 Cross-Modal Analysis of Vocalized Movements
7.5 Summary and Contributions
8 PLAYING SOUND TEXTURES
8.1 Background and Motivation
8.1.1 Body Responses to Sound Stimuli
8.1.2 Controlling Environmental Sounds
8.2 Overview of the System
8.2.1 General Workflow
8.2.2 Sound Design
8.2.3 Demonstration
8.2.4 Performance
8.3 SIGGRAPH’14 Installation
8.3.1 Interaction Flow
8.3.2 Movement Capture and Description
8.3.3 Interaction with Multiple Corpora
8.3.4 Textural and Rhythmic Sound Synthesis
8.3.5 Observations and Feedback
8.3.6 Discussion
8.4 Experiment: Continuous Sonification of Hand Motion for Gesture Learning
8.4.1 Related Work: Learning Gestures with Visual/Sonic Guides
8.4.2 Method
8.4.3 Findings
8.4.4 Discussion
8.5 Summary and Contributions
9 MOTION-SOUND INTERACTION THROUGH VOCALIZATION
9.1 Related Work: Vocalization in Sound and Movement Practice
9.1.1 Vocalization in Movement Practice
9.1.2 Vocalization in Music and Sound Design
9.1.3 Discussion
9.2 System Overview
9.2.1 Technical Description
9.2.2 Voice Synthesis
9.3 The Imitation Game
9.3.1 Context and Motivation
9.3.2 Overview of the Installation
9.3.3 Method
9.3.4 Findings
9.3.5 Discussion and Future Work
9.4 Vocalizing Dance Movement for Interactive Sonification of Laban Effort Factors
9.4.1 Related Work on Movement Qualities for Interaction
9.4.2 Effort in Laban Movement Analysis
9.4.3 Movement Sonification based on Vocalization
9.4.4 Evaluation Workshop
9.4.5 Results
9.4.6 Discussion and Conclusion
9.5 Discussion
9.6 Summary and Contributions
10 CONCLUSION
10.1 Summary and Contributions
10.2 Limitations and Open Questions
10.3 Perspectives
A APPENDIX
A.1 Publications
A.2 The XMM Library
A.2.1 Why Another HMM Library?
A.2.2 Four Models
A.2.3 Architecture
A.2.4 Max/MuBu Integration
A.2.5 Example patches
A.2.6 Future Developments
A.2.7 Other Developments
A.3 Towards Continuous Parametric Synthesis
A.3.1 Granular Synthesis with Transient Preservation
A.3.2 A Hybrid Additive/Granular Synthesizer
A.3.3 Integration in a Mapping-by-Demonstration System
A.3.4 Qualitative Evaluation
A.3.5 Limitations and Future Developments
BIBLIOGRAPHY