Abstract
We describe a multi-agent architecture for an improvisation-oriented musician-machine interaction system that learns in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. The working system involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The system is capable of processing real-time audio/video as well as MIDI. The Max environment handles the real-time interaction, while the OpenMusic one has a deeper analysis/prediction span over the past and the future. These two conceptions of time interact and synchronize over communication channels through which musical data as well as control signals circulate in both directions. A decisive advantage we have found in this hybrid environment is its twofold extensibility. In the OM domain, it is easy, even while the system is running, to change the body of a Lisp function and test incremental changes.
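
To illustrate this live-coding workflow, here is a minimal Common Lisp sketch (not taken from the system itself; the function name estimate-continuation is hypothetical): a function body is redefined from the listener while the rest of the running patch keeps calling it, picking up the new definition on its next call.

    ;; First version, loaded with the running system (hypothetical name).
    (defun estimate-continuation (context)
      "Pick the next event naively: just repeat the last one."
      (first (last context)))

    ;; Later, typed into the Lisp listener while the system keeps running;
    ;; the next call from the real-time side uses this new body.
    (defun estimate-continuation (context)
      "Refined version: choose at random among the most recent events."
      (let ((recent (last context 5)))
        (nth (random (length recent)) recent)))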