Tickets are now on sale for The Cross-Media Forum, 15-18 October! Held in association with the 57th BFI London Film Festival, this is the number one event for anyone working in film, broadcast, mobile, online, advertising, publishing and the arts.
Grab yourself a ticket to The Conference (15 Oct) to hear the latest in multi-platform storytelling, audience engagement strategies and maker culture from keynote speaker Elan Lee, Chief Design Officer at Xbox Entertainment Studios, plus media experts such as Nathan Hull, Digital Product Development Director at Penguin Books; Paula Zuccotti, CEO & Founder of The Overworld; and Ingrid Kopp, Director of Digital Initiatives at Tribeca Film Institute. Check out the full list of speakers here.
Get the inside word on what the leading financiers and distributors are looking for at The Pixel Pitch (16 Oct), as 8 international teams present their up-and-coming cross-media projects in an attempt to win the prestigious €6,000 ARTE International Prize!
Please help us spread the word about this exciting programme by sharing this information with your networks and friends – full details in the marketing copy attached.
Oslo Screen Festival is now open for entries and would like to invite artists to send new video art works to the festival’s 4th edition in March 2014. The festival will take place at the cinematheque in Filmens Hus and other locations in the city. Please read the instructions before going to the online entry form at the bottom of the page.
The deadline for entries is 1 November 2013.
Welcome to SI13: NTU/ADM Symposium on Sound and Interactivity, 15-16 November 2013
NTU’s School of Art, Design and Media is hosting a two-day symposium on Sound and Interactivity, 15-16 November 2013, sponsored by the CLASS scheme of the College of Humanities, Arts, and Social Sciences. The event will be held at SADM, Singapore.
The event aims to bring together researchers, artists, and scholars working with sound and interactivity in all ways creative. Roger Dean of MARCS, Sydney, and Diemo Schwarz of IRCAM, Paris, are keynote speakers.
Submissions in the Paper, Artwork, and Other categories can now be made using EasyChair (https://www.easychair.org/conferences/?conf=si13), until 1 October 2013.
Topics of interest include:
- 3D audio
- acoustic ecology
- aesthetics of sound and interactivity
- AI in music
- app design
- audio games
- audiovisual installation
- audiovisual performance
- computer-assisted analysis
- computer-assisted composition
- field recording
- game audio
- haptic interfaces
- interactive art involving sound
- live coding
- music automata
- music emotion
- music information retrieval
- robot musicians
- sound art
- soundscape design
- sound design
- sound in multimedia
- visual music
20 August: 1st call
1 October: deadline for abstract submissions
10 October: acceptance results
10 November: deadline for camera-ready papers (3-6 pages) for inclusion in proceedings
14 November: pre-Symposium concert (TBC)
15-16 November: Symposium
Submissions can be made in one of three categories (Papers, Artworks, Other) using EasyChair (https://www.easychair.org/conferences/?conf=si13), until 1 October 2013. All submissions will be reviewed for scientific and artistic merit by a committee (double-blind: please remove author names).
- Paper submissions should be 400-500 words (‘extended abstract’).
- Artwork submissions should be 200-300 words in writing and include a clear description of technical requirements. The text may include a weblink to an anonymous media source (e.g. jpeg, mp3, mov) with an excerpt of the artwork.
- Other submissions (e.g. soundwalks, piece+paper, software demo, workshop) should be 200-300 words in writing. To be considered, the scope and requirements must be clearly outlined in the abstract. The text may include a weblink to an anonymous media source (e.g. jpeg, mp3, mov) as appropriate to the submission.
Authors of accepted Papers will be invited to present at the event either orally (25 minutes in plenum) or as a poster (5 minutes in plenum + poster). For either, we encourage preparation of full papers (3-6 pages, template TBC) for inclusion in the proceedings.
Authors of accepted Artworks will be invited to present live at a pre-Symposium concert on 14 November (TBC). Video projection, stage presentation, and multipoint diffusion are conceivable (but cannot be guaranteed).
The registration fee includes access to proceedings and concert (TBC), as well as lunch, tea, and the Symposium Dinner. Those wishing to attend who are not active presenters are asked to contribute a fee partially covering the catering costs.
Presenters: all included, no fee.
Other attendants including passive co-presenters: ~150 SGD (TBC).
Students (not presenting): ~50 SGD (TBC).
(double-blind review process; names of PC members will be released after submission deadline)
Poh Zhuang Yi
The web site and project archive of the now discontinued theatre ensemble Baktruppen is now available here.
BEK and PNEK supported the participation at SMC 2013 for Natasha Barrett. Here’s her report from the conference.
SMC Sound and Music Computing Conference 2013 was held in conjunction with the SMAC Stockholm Music Acoustics Conference 2013, 30 July – 3 August 2013 at KTH Royal Institute of Technology and KMH Royal College of Music, Stockholm, Sweden.
The theme of the conference was “Sound Science, Sound Experience”. The daytime programme consisted of oral paper presentations, keynote speeches and poster presentations. The evening programme focused on concerts.
There were over 350 registered delegates from all over the world.
My own attendance and focus of interest lay mainly within the SMC section, where I presented my latest ambisonics composition, Hidden Values, in an evening concert.
This report from the conference focuses on the SMC section.
SMC paper themes were grouped in the following areas:
- Human-machine interaction.
- Sonic interaction design.
- Music information retrieval.
Concerts focused on work in the following musical areas:
- Space and spatialisation.
- Video / visuals.
The daytime presentations included many interesting papers. Although I have my personal favourites, here I will simply give the link where the papers can be read online: http://www.speech.kth.se/smac-smc-2013/programme.html. I would also draw readers’ attention to the inspiring invited keynote speakers.
The hall was set up with the intention of providing capacity for stereo sound diffusion, multichannel works, video works, live electronics and ambisonics. The organisers wished to provide elevated loudspeakers and did the best they could with what was available: in this case, loudspeakers at ear height for the lower array and on high stands for the upper array. There were 24 loudspeakers available.
My own composition ‘Hidden Values’ was performed in the concert on 31 July. The work contains three parts, and in this concert parts 2 and 3 were performed (part 1 currently requires a Wavefield Synthesis (WFS) array, which was outside the conference specification).
‘Hidden Values’ programme notes
Ancient and seemingly minor inventions continue to affect our everyday lives in a multitude of ways, yet the utility of these simple devices goes unnoticed. ‘Hidden Values’ takes a moment to pause and explore directly, dramatically and through metaphor, three of these inventions: the umbrella, the lock (and key) and sight correction. The work was composed at IRCAM during a music research residency exploring advanced sound spatialisation techniques in composition. Special thanks to soprano Evdokija Danajloska and percussionist Gilles Durot for their collaboration in the sound materials used in the composition of this work. The research residency was funded by IRCAM, The Oslo City Cultural Grant for International collaboration, and the Norwegian Cultural Council. ‘Hidden Values’ was composed at IRCAM with support from the Norwegian Composers’ Fund. The work was composed in 7th order 3D ambisonics and also exists in a number of other spatial formats.
The Lock (part II)
The invention of the lock and key can be traced back over 4000 years. The Lock plays out a drama between two forces: one represented by the female voice, the other by percussion instruments.
Optical Tubes (part III)
Optical Tubes, apparently invented by Descartes, were glass tubes that touched the eyeball like contact lenses, but with the unfortunate side effect that you could not blink! A central musical idea in Optical Tubes is imagining how it would have been for objects to appear in focus only as you moved towards or away from them.
Concert loudspeaker arrangement.
Out of the 24 loudspeakers available, 20 were suitable for ambisonics. These 20 consisted of eight L’acoustic 112p main loudspeakers at ear height, eight Genelec 1031 elevated on stands directly above the L’acoustic, two Genelec 8040 on the balcony and two under the balcony. The hall had fixed seating that sloped down to an elevated stage (figures 1 and 2). Although the organisers realised that the main eight were a little too low, they pointed out that this was a necessary compromise to enable a difference in height to the upper loudspeakers. Apart from the two speakers on the balcony, the elevated loudspeakers were only two metres above the lower loudspeakers. There were an additional four loudspeakers on stage – two close together on the floor and two distant behind a projection screen – but these were not suitable for ambisonics. They were, however, used in interesting ways by the performers diffusing stereo sources. (KMH is constructing a new building that will, amongst other things, house a purpose-built spatial-audio concert room.)
Measurement of loudspeaker positions and decisions on the decoding solution.
For ambisonics decoding it is necessary to measure loudspeaker positions and input these co-ordinates into the decoder. Angle and distance to the lower loudspeakers were measured from what would be the audience sweet spot behind the mixer. The slope of the hall was cancelled out, assuming that the lower eight speakers were on a horizontal plane. Distance and azimuth to the first eight speakers were measured with a Bosch PLR 50 attached to a home-made rotational plate. By tilting the laser upwards, the distances to the elevated loudspeakers were also obtained. Elevation angles were then calculated from the slant distances to the elevated loudspeakers and the horizontal distances. With the lower eight loudspeakers on a horizontal plane, the speaker co-ordinates were as in figure 3.
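The geometry behind this measurement procedure can be sketched numerically. The following is my own illustrative reconstruction (the function name and example distances are invented, not taken from the report), assuming each elevated loudspeaker sits directly above its lower partner so the horizontal component of the slant distance equals the lower speaker’s distance:

```python
import math

def speaker_coords(azimuth_deg, horiz_dist, slant_dist=None):
    """Convert laser measurements to (azimuth, elevation, distance).

    azimuth_deg: rotation-plate reading towards the speaker (degrees)
    horiz_dist:  distance measured at ear height to the lower speaker (m)
    slant_dist:  tilted-laser distance to the elevated speaker mounted
                 directly above (None for a lower-ring speaker)
    """
    if slant_dist is None:
        # Lower ring: assumed on the horizontal plane, zero elevation.
        return azimuth_deg, 0.0, horiz_dist
    # Elevated speaker directly above the lower one, so the horizontal
    # leg of the right triangle is horiz_dist and the hypotenuse is
    # slant_dist; the elevation angle follows from trigonometry.
    elevation = math.degrees(math.acos(horiz_dist / slant_dist))
    return azimuth_deg, elevation, slant_dist

# Example with assumed numbers: lower speaker 5 m away at 45 degrees,
# elevated partner 2 m higher (slant distance = hypot(5, 2)).
az, el, d = speaker_coords(45.0, 5.0, math.hypot(5.0, 2.0))
# el comes out around 21.8 degrees, consistent with a 2 m rise over 5 m.
```

A 2 m height difference over a ~5 m throw gives only about 22 degrees of elevation, which illustrates why the upper array’s angle was limited.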
When these co-ordinates are visualised, we see the representation in figures 4 and 5 (screenshots from IRCAM’s spat.viewer). Note the depth of the frontal array and the sweet spot located closer to the rear of the hall. Although the audience is centred left-right, it sits much further towards the rear than the centre. Ideally the sweet spot (and measurement origin) would be located further forward, in the centre.
My original intention was to decode a 4th-order spherical 3D sound-field using IRCAM’s Spat~. To do this, the co-ordinates of a complete spherical loudspeaker array are input to the decoder. The calculated signals for the ‘phantom loudspeakers’ below the equator are ignored, resulting in a hemispherical array of loudspeaker feeds. If there is significant energy in the southern hemisphere, the first ring of phantom loudspeaker signals can alternatively be folded back into the lower ring of real loudspeakers. This also serves to weight the image lower in height.
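The two options just described (discarding the southern-hemisphere feeds versus folding them back into the real lower ring) can be sketched as follows. This is an illustrative toy of my own, not Spat~’s actual implementation, and the signal values are made up; it assumes the phantom ring mirrors the real lower ring at the same azimuths:

```python
def fold_phantom_ring(real_lower, phantom_lower, fold=True):
    """Combine decoder feeds for the real lower ring with the feeds
    computed for the mirrored phantom ring below the equator.

    real_lower, phantom_lower: lists of per-speaker sample lists,
    one entry per azimuth, aligned index-for-index.
    fold=False simply discards the southern-hemisphere energy;
    fold=True adds it into the real lower ring, weighting the
    reproduced image lower in height.
    """
    if not fold:
        return real_lower
    return [[r + p for r, p in zip(rs, ps)]
            for rs, ps in zip(real_lower, phantom_lower)]

# Toy feeds: two speakers, two sample frames each (invented values).
real = [[1.0, 0.5], [0.25, 0.0]]
phantom = [[0.5, 0.25], [0.25, 0.5]]
folded = fold_phantom_ring(real, phantom)   # [[1.5, 0.75], [0.5, 0.5]]
```

In a real decoder the fold-back would happen on the speaker-feed gain matrix rather than on raw sample lists, but the principle is the same: southern-hemisphere energy is either dropped or summed into the nearest real speakers.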
However, the elevation angle of the upper array was insufficient for such a 3D decoding. Further complications for ambisonics were the different power, frequency and angular distributions of each loudspeaker type. An analogue mixer compounds calibration difficulties. The speaker differences first needed balancing by ear. If there had been more time, the spat.gaincalibration~ object could have been used to automatically calculate the differences in speaker gains; under the given circumstances this was not possible. Each work was given 45 minutes of rehearsal time, including setting up, so there was no time to experiment or fine-tune the decoding. I therefore opted to use Harpex to decode 1st-order spherical 3D B-format, using the measured speaker angles and elevations. Spat.align~ was then used to compensate for loudspeaker distance differences (calculating the time and amplitude alignment for the loudspeakers).
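The kind of distance compensation that spat.align~ performs can be approximated in a few lines. This sketch is my own (it assumes a simple 1/r gain law and time alignment to the farthest speaker; the real object’s algorithm may differ in detail), but it shows the idea of delaying and attenuating nearer speakers so every feed arrives at the sweet spot coherently:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed value at roughly 20 degrees C

def align(distances):
    """Per-speaker delay (seconds) and linear gain to time- and
    level-align loudspeakers at unequal distances from the sweet spot.

    Nearer speakers are delayed by the extra travel time to the
    farthest speaker and attenuated by the inverse-distance ratio,
    so all speakers behave as if placed at the same (maximum) radius.
    """
    d_max = max(distances)
    delays = [(d_max - d) / SPEED_OF_SOUND for d in distances]
    gains = [d / d_max for d in distances]   # 1/r law: nearer = quieter
    return delays, gains

# Example with assumed distances (metres) for three speakers.
delays, gains = align([4.0, 5.0, 6.86])
# The farthest speaker gets zero delay and unity gain; the 4 m speaker
# is delayed by the 2.86 m of extra travel time and attenuated to 4/6.86.
```

Applied across all 20 speakers, this removes the arrival-time and level skew caused by the sweet spot sitting towards the rear of the hall.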
Despite these challenges the spatial reproduction was successful and I received much positive feedback after the event. ‘Hidden Values’ was composed and premiered at IRCAM in the Espace de Projection – an accurately installed, fixed 75-loudspeaker ambisonics system (and in addition four walls of WFS). There are inevitable differences in spatial reproduction over smaller or less accurate concert speaker set-ups. Here is a summary of my experience comparing the 75-loudspeaker ambisonics hemisphere with the 20-loudspeaker multi-purpose SMC concert array:
- Sound points were less focused, the static location more of a blur than a point, but the angular information was maintained (sometimes blurs are useful!).
- Slow / gradual spatial motions were vague, but fast gesture motions were clear. This means that the audible motion of slowly expanding from a ‘point’ to an ‘envelopment’ was subtle, while gestural fast dynamics were maintained. In one way this makes a poorer interpretation, yet can also be regarded as a different interpretation of the music.
- Imaging for me appeared less precise, but the audience commented that it sounded clear and detailed. Maybe as composers we can be over-critical of our own work!
- The sound scene was denser / less transparent. However, this is likely a combination of many factors: not only the lower-order decoding, but also the loudspeaker characteristics and room acoustics.
Despite these differences, the overall impression of the composition’s spatial content was preserved. The listening area was quite large, likely due to Harpex (a standard 1st-order decoding gives a small listening area). Harpex may also have contributed to the clear angular information, which was surprisingly good under the given circumstances.
For information about Harpex: http://harpex.net
For information about Spat: http://forumnet.ircam.fr/wp-content/uploads/2012/10/Spat4-UserManual.pdf
Observations on other spatial works
Without describing the musical aspects of each work, it is interesting to briefly discuss how different spatial formats played over the loudspeaker array. Many composers played 8- or 16-channel versions of their compositions, decided on a channel routing, and made no live intervention during performance. The quality of the works was high, but after hearing many of them, spatial trends became apparent: gestures and spatial images appeared similar across works. It may be that the compositions themselves were spatially diverse, but that the allocation of a fixed spatial format imposed a spatial constraint in concert. Without hearing the sources in a neutral context it is difficult to know. I can, however, draw on the work of Annette Vande Gorne: although her piece was in a fixed multichannel format, she actively performed the spatial diffusion from this source during the concert, and the result was a rich and diverse experience. Further, the few stereo works that were played with traditional sound-diffusion performance technique also overcame the rigidity of the fixed multichannel format.