Ircam-Centre Pompidou


Document category Contribution to a conference or congress
Title Tools for the Writing of Tamonontamo (2012): A New Way to Relate Concatenative Synthesis and Spatialization
Main author Maurilio Cacciatore
Co-author Diemo Schwarz
Conference / congress Korean Electro-Acoustic Music Society's Annual Conference (KEAMSAC). Seoul: November 2013
Peer-reviewed Yes
Year 2013
Editorial status Published
Abstract

Tamonontamo (2012) is a piece for amplified vocal quartet, choir of 24 singers and live electronics. This article focuses on the work carried out in collaboration with the Real-Time Music Interaction Team of Ircam. Augustin Muller, the computer music programmer working with me at Ircam on this project, upgraded the CataRT software by adding a Spat~ module to link corpus-based synthesis with spatialization. In most works, the spatialization of sounds depends on the aesthetic choices of the composer. The Spat~ software allows linear movements of a source to be drawn in space among a pre-selected number of loudspeakers through a user-friendly graphical interface. In this patch, we used Spat~ to spatialize the sounds in a non-linear way: the logic of spatialization depends on the pair of audio descriptors chosen in the set-up of the CataRT graphical display. In this sense, the aesthetic ideas of the composer apply not locally to each sound but to the choice of the audio descriptors assigned to the x and y axes. A further Spat~-related display has been implemented to separate the analysis/playing space from the space in which sounds are diffused. The implementation of the Unispring algorithm in the mnm.distribute Max/MSP external allows grains to be distributed by rescaling their positions inside a pre-drawn sub-space. Interpolation between different shapes, as well as changes of the audio descriptors on the x and y axes of both displays, can be programmed and performed in real time. Storing the synthesis data as a database (in a text file) makes it possible to recall the analysis and recover previously drawn shapes. Selected corpora in the database can also be hidden or permanently deleted. Using colour to work on a z axis is possible and is among the next steps of this work. Upgrading the use of Spat~ to an ambisonics system for the diffusion of sounds opens the possibility of genuine 3D sound sculpting through concatenative synthesis.
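The core idea described above, where a pair of audio descriptors determines each grain's (x, y) position and those positions are then rescaled into a pre-drawn diffusion sub-space, can be sketched outside Max/MSP. The following Python sketch is a simplified illustration only: all names are hypothetical, and it does not reproduce CataRT, Spat~, or the Unispring algorithm, just the normalise-and-rescale mapping that the patch builds on.

```python
from dataclasses import dataclass

@dataclass
class Grain:
    """One sound grain with two (assumed) audio descriptors."""
    centroid: float   # spectral centroid in Hz, mapped to the x axis
    loudness: float   # loudness in dB, mapped to the y axis

def normalize(values):
    """Map a list of descriptor values linearly onto [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant descriptors
    return [(v - lo) / span for v in values]

def place_grains(grains, region):
    """Place grains inside a rectangular diffusion sub-space.

    `region` is (x0, y0, x1, y1), the target rectangle in the
    diffusion space. Each grain's position comes from its pair of
    descriptors, normalised and then rescaled into the region
    (a crude stand-in for the Unispring-style redistribution).
    """
    xs = normalize([g.centroid for g in grains])
    ys = normalize([g.loudness for g in grains])
    x0, y0, x1, y1 = region
    return [(x0 + x * (x1 - x0), y0 + y * (y1 - y0))
            for x, y in zip(xs, ys)]

# Three illustrative grains, placed in the left-to-right half-plane [-1, 1] x [0, 1].
grains = [Grain(400.0, -20.0), Grain(1200.0, -6.0), Grain(2500.0, -12.0)]
positions = place_grains(grains, region=(-1.0, 0.0, 1.0, 1.0))
```

Swapping which descriptors feed the x and y axes, or changing `region` over time, corresponds to the real-time reconfiguration of the displays described in the abstract.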

Keywords corpus-based synthesis / spatialization / audio descriptors
Teams Other (outside R&D), Real-Time Music Interaction
Reference Cacciatore13a

    © Ircam - Centre Pompidou 2005.