Document category
Conference or congress contribution
Title
A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis |
Lead author
Jules Françoise |
Co-authors
Norbert Schnell, Frédéric Bevilacqua
Conference / congress
Proceedings of the 21st ACM International Conference on Multimedia (MM'13). Barcelona: October 2013
Peer-reviewed
Yes
Copyright
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
Year
2013
Editorial status
Unpublished
Abstract
In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we use a multimodal HMM to jointly model gesture and sound parameters. Our approach is compatible with a learning method that allows users to define gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
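The joint gesture-sound HMM described in the abstract can be read as cross-modal regression: train Gaussian emissions on concatenated [gesture, sound] frames, then at runtime compute forward posteriors from the gesture dimensions only and predict sound as the posterior-weighted mean of each state's sound dimensions. The sketch below is an illustrative toy, not the authors' implementation: the data, state count, and even-segmentation "training" (a crude stand-in for EM) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint data: 1-D gesture, 1-D sound, with sound = 2 * gesture (assumed mapping)
T = 300
gesture = np.linspace(0.0, 1.0, T) + 0.01 * rng.standard_normal(T)
sound = 2.0 * np.linspace(0.0, 1.0, T)
X = np.column_stack([gesture, sound])  # joint observations, shape (T, 2)

K = 5  # number of HMM states (assumption)

# "Training": split the demonstration evenly into K left-to-right states and
# fit one diagonal Gaussian per state on the joint frames (stand-in for EM).
bounds = np.linspace(0, T, K + 1).astype(int)
means = np.array([X[a:b].mean(axis=0) for a, b in zip(bounds[:-1], bounds[1:])])
var = np.array([X[a:b].var(axis=0) + 1e-4 for a, b in zip(bounds[:-1], bounds[1:])])

# Left-to-right transition matrix: stay with prob 0.9, advance with prob 0.1
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.9
    A[k, min(k + 1, K - 1)] += 0.1
pi = np.zeros(K)
pi[0] = 1.0  # always start in the first state

def gesture_likelihood(x, k):
    """Gaussian likelihood of gesture value x under state k (gesture dim only)."""
    m, v = means[k, 0], var[k, 0]
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def predict_sound(gesture_seq):
    """Forward algorithm over the gesture dims; emit posterior-weighted sound means."""
    out = []
    alpha = pi * np.array([gesture_likelihood(gesture_seq[0], k) for k in range(K)])
    alpha /= alpha.sum()
    out.append(alpha @ means[:, 1])
    for x in gesture_seq[1:]:
        alpha = (alpha @ A) * np.array([gesture_likelihood(x, k) for k in range(K)])
        alpha /= alpha.sum()
        out.append(alpha @ means[:, 1])
    return np.array(out)

pred = predict_sound(gesture)
```

Because the sound estimate at each frame is a deterministic function of the gesture posteriors, the gesture-sound relationship stays consistent with the trained model while still following frame-by-frame variations in the input gesture, which is the property the abstract highlights.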
Keywords
gesture / HMM / music / music performance / sound synthesis
Team
Interactions musicales temps-réel (Real-Time Musical Interactions)
Call number
Francoise13b
Online version URL
http://architexte.ircam.fr/textes/Francoise13b/index.pdf |
|