Development

Research residency at IRCAM

IRCAM, Centre Pompidou, 20.04.2009 10.00 – 30.04.2009 19.00


The topic of the residency is the integration of FTM and Jamoma in order to work with advanced control of parameters in real-time media systems.

The project description is given below. The image to the right is from the FTM workshop that Diemo Schwarz from IRCAM held at BEK in March 2009.

 

Advanced real-time control of parameters by integration of Jamoma and FTM in Max.

Introduction.

Real-time technologies in the arts.

The development of real-time technology has opened new possibilities for artistic expression, enabling live generation of and interaction with media. The real-time processing of media and live input, often combined with possibilities for physical computing (O’Sullivan & Igoe, 2004), has become an integrated part of a variety of contemporary artistic practices such as works for stage, live music performances using new instruments for musical expression, interactive and generative installations, and sound art. A major challenge in these kinds of works is how to develop control systems that maintain access to a rich set of parameters while remaining manageable in a live performance setting.

Accessing complex sets of parameters in real-time through a structured approach.

Max/MSP/Jitter is one of several programming environments for real-time processing of media. According to one of its creators, “Max/MSP does not enforce readability, consistency, or efficiency on its users. There are no real standards for interoperability at the level of the patcher…” (Zicarelli, 2002).

Jamoma attempts to address this issue by providing a framework for modular development in Max with a structured API for interfacing with modules (Place & Lossius, 2006). Jamoma modules communicate using the Open Sound Control protocol (Wright, 2002), extended through an object-oriented approach to OSC nodes, conceiving them as having properties and methods (Place, Lossius, Jensenius, Peters, & Baltazar, 2008). The process of assigning additional properties to parameters defining their behaviour increases the possibilities for continuous transformation and shaping of the artistic material (Place, Lossius, Jensenius, & Peters, 2008). The OSC namespace implementation in Jamoma also provides possibilities for querying the system for the namespace of available nodes, as well as for retrieving the current values of nodes and node properties, along somewhat similar lines as suggested by Jazzmutant (2007). In this way Jamoma partly offers solutions to a fundamental question of how to maintain access to and control of complex sets of parameters and data in real-time systems.
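
As a minimal illustration of this idea, the following Python sketch (not Jamoma's actual implementation; the addresses and property names are invented) models OSC-style nodes that carry a value together with additional properties, and shows how a branch of the namespace can be queried for its nodes and their properties.

    # Minimal sketch, assuming invented addresses and property names:
    # parameter nodes holding a value plus behaviour-shaping properties,
    # and a namespace that can be queried.

    class ParameterNode:
        def __init__(self, address, value, **properties):
            self.address = address               # e.g. "/granulator/pitch"
            self.value = value
            self.properties = dict(properties)   # e.g. range, ramp time in ms

    class Namespace:
        def __init__(self):
            self.nodes = {}

        def add(self, node):
            self.nodes[node.address] = node

        def set(self, address, value):
            self.nodes[address].value = value

        def query(self, branch="/"):
            # Return all node addresses below a given branch of the namespace.
            return [a for a in sorted(self.nodes) if a.startswith(branch)]

        def get_property(self, address, key):
            return self.nodes[address].properties.get(key)

    ns = Namespace()
    ns.add(ParameterNode("/granulator/pitch", 1.0, range=(0.25, 4.0), ramp=50))
    ns.add(ParameterNode("/granulator/gain", -6.0, range=(-96.0, 12.0), ramp=20))

    print(ns.query("/granulator"))                       # ['/granulator/gain', '/granulator/pitch']
    print(ns.get_property("/granulator/pitch", "ramp"))  # 50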

Apart from being used for artistic purposes, Jamoma is also used for research and prototyping of protocols for the capture and communication of data streams, e.g. gestural data using GDIF – Gestural Description Interchange Format (Jensenius, Kvifte, & Godøy, 2006; Nymoen, 2008) and spatial audio information according to SpatDIF – Spatial Sound Description Interchange Format (Peters, 2008).

Controlling complex sets of parameters in real-time environments.

The Jamoma API offers simple access to all parameters of all modules, but relatively few modules so far take advantage of this for advanced control purposes. The exceptions are a text-based cue list system, a number of modules for one-to-one mappings between parameter values, and a series of modules for working with SDIF – Sound Description Interchange Format – data (Nymoen, 2008). Development of further solutions for control of modules is ongoing within the French research platform Virage.
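
To illustrate the general idea behind such a text-based cue list (the format and addresses below are invented and are not Jamoma's actual cue syntax), the following Python sketch parses named cues consisting of address/value pairs and recalls one of them as a single state.

    # Minimal sketch of a text-based cue list: each cue is a named set of
    # OSC-style address/value pairs that can be recalled as one state.
    # Format and addresses are invented for illustration.

    CUES_TEXT = """
    CUE intro
    /granulator/gain -12.0
    /granulator/pitch 1.0

    CUE climax
    /granulator/gain 0.0
    /granulator/pitch 2.5
    """

    def parse_cues(text):
        cues, current = {}, None
        for line in text.strip().splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith("CUE "):
                current = line[4:]
                cues[current] = []
            else:
                address, value = line.rsplit(" ", 1)
                cues[current].append((address, float(value)))
        return cues

    cues = parse_cues(CUES_TEXT)
    for address, value in cues["climax"]:
        print(address, value)   # the messages that would be sent to the modules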

FTM is a shared library and a set of modules extending the signal and message data flow paradigm of Max, permitting the representation and processing of complex data structures such as matrices, sequences or dictionaries as well as tuples, MIDI events or score elements (Schnell, Schwarz, Bevilacqua, & Müller, 2005). FTM forms the basis for the MnM toolbox, dedicated to mapping between gesture and sound, and more generally to statistical and machine learning methods (Bevilacqua, Müller, & Schnell, 2005), as well as for Gabor, a unified framework for a number of advanced audio processing techniques (Schnell & Schwarz, 2005).
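
To make the notion of matrix-based mapping more tangible, the following Python sketch shows a linear many-to-many mapping in which a vector of control inputs is multiplied by a single mapping matrix to yield a vector of parameter values. The dimensions and numbers are invented for the example, and the sketch does not use FTM or MnM themselves.

    # Sketch of a linear many-to-many mapping: n control inputs are mapped
    # to m module parameters through one n-by-m matrix. Values are invented.
    import numpy as np

    # Three control inputs, e.g. sensor values normalised to 0..1.
    inputs = np.array([0.2, 0.8, 0.5])

    # 3-by-4 mapping matrix: each column states how much every input
    # contributes to one output parameter.
    mapping = np.array([
        [1.0, 0.0, 0.5, 0.0],
        [0.0, 1.0, 0.5, 0.2],
        [0.0, 0.0, 0.0, 0.8],
    ])

    outputs = inputs @ mapping   # four output parameter values
    print(outputs)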

Objectives.

FTM and accompanying libraries are developed at IRCAM – Institut de Recherche et Coordination Acoustique/Musique. The objective of the proposed STSM (Short Term Scientific Mission) to IRCAM will be to investigate possibilities for advanced control of complex systems for real-time processing of media by integrating the use of the Jamoma and FTM libraries in Max:

  • The first goal will be to develop a firmer understanding of how the FTM and MnM libraries work and how they might be used for advanced control of Jamoma modules.
  • The translation of OSC data into FTM-compatible objects will be investigated. Of particular relevance are snapshots of module states and time-based streams of data. Potential FTM-based representations are vectors and matrices of floating-point values, break point functions, score objects, and scores of time-tagged matrices and vectors. This will enable capturing instantaneous states of a Jamoma system as well as sequences of events over time.
  • From this, methods for mapping data will be investigated. In particular, matrix-based representations of data will be used to investigate linear many-to-many mappings and mappings based on Principal Component Analysis (this and the following point are illustrated in the sketch after this list).
  • Sequences of time-tagged recordings of data can be considered objects and further processed, e.g. in order to morph between recorded sets of gestures or for live interaction with predefined or recorded sequences of parameters over time.
  • Finally, I hope to achieve a firmer understanding of gesture- and score-following techniques as implemented in MnM and Suivi, as a basis for future research into how these can be used to control systems of Jamoma modules, e.g. by having gesture- or score-following algorithms trigger complex states and events.
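
The following Python sketch, using invented data, illustrates two of the points above: time-tagged recordings of parameter snapshots treated as objects that can be morphed between, and a reduction of a matrix of recorded parameter vectors by Principal Component Analysis (computed here with a singular value decomposition), so that a small number of controls can drive many correlated parameters.

    # Sketch with invented data: (1) morphing between two time-tagged
    # recordings of parameter snapshots, and (2) PCA of recorded snapshots.
    import numpy as np

    # Each row of a recording is (time in seconds, param1, param2, param3).
    take_a = np.array([[0.0, 0.1, 0.5, 0.0],
                       [0.5, 0.4, 0.6, 0.2],
                       [1.0, 0.9, 0.7, 0.8]])
    take_b = np.array([[0.0, 0.8, 0.1, 0.3],
                       [0.5, 0.6, 0.2, 0.5],
                       [1.0, 0.2, 0.3, 0.9]])

    def morph(a, b, amount):
        # Interpolate the parameter columns, keeping the shared time tags.
        out = a.copy()
        out[:, 1:] = (1 - amount) * a[:, 1:] + amount * b[:, 1:]
        return out

    print(morph(take_a, take_b, 0.25))

    # PCA of a matrix of recorded parameter vectors (rows = snapshots):
    # project onto the first principal components so that a few control
    # values can drive many correlated parameters.
    data = np.vstack([take_a[:, 1:], take_b[:, 1:]])
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:2]                        # two principal directions
    low_dim = centered @ components.T          # two control values per snapshot
    reconstructed = low_dim @ components + data.mean(axis=0)
    print(np.round(reconstructed, 3))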

All solutions developed are to be implemented as Jamoma modules and will be distributed under the GNU LGPL licence.

Bibliography.

Bevilacqua, F., Müller, R., & Schnell, N. (2005). MnM: A Max/MSP mapping toolbox. Proceedings of the 2005 Conference on New Interfaces for Musical Expression.

Jazzmutant. (2007). Extension and Enhancement of the OSC Protocol. Draft 25 July. Jazzmutant.
Jensenius, A. R., Kvifte, T., & Godøy, R. I. (2006). Towards a gesture description interchange format. Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 06) (pp. 176–179). Paris: IRCAM – Centre Pompidou.

Nymoen, K. (2008). A setup for synchronizing GDIF data using SDIF-files and FTM for Max. Report on Short Term Scientific Mission. Action: E0601 – Sonic Interaction Design. Musical Gestures Group, Department of Musicology. Oslo: University of Oslo.

O’Sullivan, D., & Igoe, T. (2004). Physical computing: Sensing and controlling the physical world with computers. Boston: Thomson Course Technology.

Peters, N. (2008). Proposing SpatDIF – The Spatial Sound Description Interchange Format. Proceedings of the International Computer Music Conference. Belfast: The International Computer Music Association.

Place, T., & Lossius, T. (2006). Jamoma: A modular standard for structuring patches in Max. Proceedings of the International Computer Music Conference 2006. The International Computer Music Association.

Place, T., Lossius, T., Jensenius, A. R., & Peters, N. (2008). Flexible control of composite parameters in Max/MSP. Proceedings of the International Computer Music Conference. The International Computer Music Association.

Place, T., Lossius, T., Jensenius, A. R., Peters, N., & Baltazar, P. (2008). Addressing classes by differentiating values and properties in OSC. Proceedings of the 8th International Conference on New Interfaces for Musical Expression.

Schnell, N., & Schwarz, D. (2005). Gabor, multi-representation real-time analysis/synthesis. Proceedings of the 8th International Conference on Digital Audio Effects (DAFx’05). Madrid: Universidad Politécnica de Madrid.

Schnell, N., Schwarz, D., Bevilacqua, F., & Müller, R. (2005). FTM – complex data structures in Max. Proceedings of the 2005 International Computer Music Conference. The International Computer Music Association.

Schwarz, D., Beller, G., Verbrugghe, B., & Britton, S. (2006). Real-Time Corpus-Based Concatenative Synthesis with CataRT. Proceedings of the 9th Int. Conference on Digital Audio Effects (DAFx-06). Montreal, Canada.

Wright, M. (2002). The Open Sound Control 1.0 Specification. Retrieved November 30, 2008, from http://opensoundcontrol.org/spec-1_0

Zicarelli, D. (2002). How I learned to love a program that does nothing. Computer Music Journal, 26(4), 44–51.