Jamoma 0.5.2 released
Features include:
- A large and peer-reviewed library of modules for audio and video processing, sensor integration, cue management, mapping, and exchange of data with other environments
- An extensive set of abstractions that facilitates the development and documentation of Max/MSP projects
- Specialized sets of modules for work on spatial sound rendering, including support for advanced spatialization techniques such as Ambisonics, DBAP, VBAP, and ViMiC
- Modules for work on music-related movement analysis
- Powerful underlying control structures that handle communication across modules
- Strong emphasis on interoperability
- Native OSC support, making it easy to access and manipulate processes from external devices and interfaces (see the sketch after this list)
- Comprehensive documentation through maxhelp files, reference pages and a growing number of online tutorials
- Easily extendable and customizable
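Because module parameters are exposed through an OSC namespace, they can be driven from any OSC-capable client. The sketch below is a minimal illustration using the oscpack C++ library; the address "/degrade~/bitdepth" and port 9999 are hypothetical placeholders, since the actual namespace and port depend on the modules loaded in a given patch and on how OSC input is configured there.

```cpp
// Minimal sketch: sending an OSC message to a running Jamoma patch.
// Assumes the oscpack library; the address and port below are hypothetical
// placeholders, not a documented Jamoma namespace.
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main()
{
    const char* host = "127.0.0.1";   // machine running the Max/MSP patch
    const int   port = 9999;          // hypothetical OSC input port

    char buffer[1024];
    osc::OutboundPacketStream packet(buffer, sizeof(buffer));

    // Set a module parameter by sending a float to its OSC address.
    packet << osc::BeginMessage("/degrade~/bitdepth") << 8.0f
           << osc::EndMessage;

    UdpTransmitSocket socket(IpEndpointName(host, port));
    socket.Send(packet.Data(), packet.Size());
    return 0;
}
```

The same message could equally be sent from a hardware controller or another application; the point is only that module parameters are addressed by name rather than through environment-specific bindings.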
Jamoma is an open-source project for the development of audio/video applications, plugins and Max/MSP-like environments. It offers several C++ frameworks for structured programming and is based on modular principles that allow functionality to be reused while all parameters remain customizable to specific needs.
Jamoma has been in development for more than five years and is used for teaching and research within science and the arts. It has provided a performance framework for composition, audio/visual performances, theater and gallery installation settings. It has also been used for scientific research in the fields of psychoacoustics, music perception and cognition, machine learning, human-computer interaction and medical research (more info here).
Jamoma is distributed under a BSD license and the sources can be freely downloaded at http://github.com/jamoma. Development is currently supported by BEK – Bergen Center for Electronic Arts, 74 Objects, Electrotap, GMEA – Centre National de Creation Musicale d’Albi-Tarn and the University of Oslo. Further details can be found here.