Re: [Sursound] WFS systems

2014-05-23 Thread Markus Noisternig
Dear Sursounders, 

For those of you who are interested in some more details about the IRCAM array:

The array is installed in IRCAM's variable-acoustics concert hall (15.5 × 24 ×
10.5 m). It consists of four horizontal linear arrays (280 independently
controlled coaxial speakers in total), complemented by a 3D rectangular array
(59 independently controlled coaxial speakers) and 8 subwoofers.

Horizontal arrays:
- front array: 88 speakers, 16 cm spacing;
- side arrays: 64 speakers each, 29 cm spacing;
- back array: 64 speakers, 16 cm spacing.

The front and back arrays can be used as mobile arrays for concerts (rigging 
structure + flight cases).
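
As a back-of-the-envelope check on those spacings, the usual rule of thumb puts
the spatial aliasing frequency of a linear WFS array near f_alias = c / (2*dx).
A quick Matlab/Octave sketch (my own illustrative calculation, not an IRCAM
specification):

  c  = 343;                 % speed of sound (m/s)
  dx = [0.16 0.29];         % 16 cm and 29 cm speaker spacings (m)
  f_alias = c ./ (2 * dx)   % approx. [1072 591] Hz

Above roughly 1 kHz (front/back arrays) and 600 Hz (side arrays), wavefront
reconstruction is no longer exact and spatial aliasing artifacts appear.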

Real-time audio rendering is achieved by parallel processing on a small
computer cluster. The five computers are connected to a 512x512 MADI matrix for
routing the output channels to the speakers. The real-time audio processing
software (Ircam Spat~) provides several sound spatialization methods (WFS,
NFC-HOA, VBAP, etc.). The standard configuration uses WFS panning for the four
horizontal arrays and up to 9th-order HOA for the rectangular array.

Best regards, 

Markus


On 22 May 2014, at 01:21, Augustine Leudar  wrote:

> oh and IRCAM - IRCAM have a really good one I hear (and one day hope
> to actually hear)
> 
> On 21/05/2014, Augustine Leudar  wrote:
>> P.S.  I think Disney world/land have one in the haunted house...
>> 
>> On 21/05/2014, Augustine Leudar  wrote:
>>> I have one here in Ireland - a humble 32 channel one - though I often
>>> put the speakers in many different configurations and it is without a
>>> name. Also the University of Salford has a good one, as do the guys who
>>> make the SoundScape Renderer (I think)
>>> 
>>> On 18/05/2014, Andres Cabrera  wrote:
 Hi,
 
 I'm wondering if anyone has compiled a list of research WFS systems. The TU
 Berlin and the Game of Life systems come immediately to mind, but before
 starting to dig, I wanted to know if someone has already done this list.
 
 Also of interest could be companies working on commercial WFS systems.
 
 Thanks!
 Andrés
> 
> 
> -- 
> 07812675974

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] WFS systems

2014-05-25 Thread Markus Noisternig
Hi Lasse and Augustine,

> I am eagerly awaiting Ircam to release their WFS software - but they haven't 
> as far as I know

The WFS real-time renderer is not yet released as we first have to write some 
GUI objects for the filter computation tools.

The current release of the Max/MSP externals library (see http://forum.ircam.fr)
includes:
- 2D/3D panning algorithms (direct panning, VBAP, etc.);
- 2D/3D HOA (mode matching, energy preserving, max-rE/max-rV, NFC-HOA, etc.) up
to orders N = 80 (which should be more than enough for the next few years);
- Binaural and transaural rendering; 
- RIR measurement and analysis tools (exponential sine sweeps, deconvolution,
room acoustic parameter estimation, etc.; a sketch of the sweep method follows
this list);
- FDN-based reverberation and an efficient low-latency multichannel convolution;
- ...
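
For those unfamiliar with the sweep technique mentioned above, here is a minimal
Matlab/Octave sketch of the idea (the Farina method); this is my own
illustration with made-up parameter values, not Spat code:

  % Exponential sine sweep and inverse filter for RIR measurement.
  fs = 48000; T = 10;                   % sample rate, sweep duration (s)
  f1 = 20; f2 = 20000;                  % start/stop frequencies (Hz)
  t  = (0:T*fs-1)'/fs;
  R  = log(f2/f1);
  s  = sin(2*pi*f1*T/R*(exp(t*R/T)-1)); % exponential sweep
  invf = flipud(s) .* exp(-t*R/T);      % time-reversed, -6 dB/oct tilt
  y  = s;                               % placeholder: record the room's response here
  h  = conv(y, invf);                   % deconvolution -> impulse response
  h  = h / max(abs(h));                 % the RIR sits near sample T*fs

Playing s over a loudspeaker, recording y at a microphone, and convolving with
the inverse filter yields the room impulse response; harmonic distortion
products conveniently separate out ahead of the main peak.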

We'll hopefully soon add spherical microphone recording and processing
(beamforming, etc.) externals, which are currently under test.

Please note that Spat is NOT free and open source.

>> Have you published anything on how to interact with this kind of system, 
>> developed software that makes it "easy" to make musicians improvise with the 
>> spatialization also.. some sort of 3D environment or the likes? If not you - 
>> have you seen any cool ways / thoughts of this?

Spat provides perceptual control over sound spatialization, which makes it
easier for composers to interact with the system. Together with IRCAM's Music
Representations research group we are working towards more advanced tools for
computer-aided composition and spatial sound, such as:
- OMPrisma (http://www.idmil.org/software/omprisma): Marlon Schumacher's
library for spatial sound synthesis with OpenMusic
(http://repmus.ircam.fr/openmusic/home);
- EFFICACE (http://repmus.ircam.fr/efficace/), a research project funded by the
French National Research Agency, which aims at integrating spatial audio
rendering into OpenMusic.

With cheers from Paris, 

Markus
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] WFS systems

2014-06-03 Thread Markus Noisternig
Hi Lasse, 

We haven't tested the MIRA app so far, but many of IRCAM's computer
musicians use the Lemur interface or similar OSC-based devices for
controlling Spat. We cannot promise more advanced GUI objects anytime soon,
as our development is mainly focused on real-time audio processing algorithms
for sound spatialization and reverberation rendering.

However, it should be easy to set up any external controller device for
developing your own user interfaces (e.g. using the OSC protocol). Spat exposes
most of its internal control parameters as messages in Max/MSP, and all main
functional units are also available as external objects. You can, e.g., use the
spat.spat~ object for HOA rendering with FDN reverberation and configure it
with attributes and messages; or, if you feel like patching, you can re-program
the entire processing chain using external objects for low-level processing and
then replace parts of the patch with your own algorithms. The Spat package
contains a tutorial on "patching Spat".

If you are using OSC devices you may want to slow down the message flow to save
CPU power for the audio processing. This can easily be done using the
spat.speedlim object.
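
The logic behind such a rate limiter is simple; a Matlab/Octave sketch of the
idea (illustrative only; spat.speedlim itself is a Max/MSP external, and this
is not its code):

  function out = speedlim(val, dt_ms)
  % Pass VAL through at most once every DT_MS milliseconds; between
  % accepted values, keep returning the last accepted one.
  persistent t0 lastVal
  if isempty(t0) || toc(t0)*1000 >= dt_ms
      t0 = tic; lastVal = val;
  end
  out = lastVal;
  end

Dropping intermediate control values this way trades a slightly coarser
trajectory for fewer parameter updates per audio block.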

All the best,

Markus



On 2 June 2014, at 16:59, Lasse Munk  wrote:

> Markus:
> 
> When you write GUI objects for wfs / spat in general, is it possible you can 
> have the iPad app MIRA in mind? It's  a very nice and easy way of interfacing 
> with max, and would be a great extension of spatializing sound with what-ever 
> engine!
> 
> All the best,
> Lasse
> 
>> Augustine Leudar <mailto:gustar...@gmail.com>
>> 31 May 2014 00:05
>> very fond of Spat - that reverb you've got (I think it was Spat) where you
>> can have revolving sources etc. is unreal.
>> 
>> 
>> 
>> 
>> 
>> Lasse Munk <mailto:lassemunkm...@gmail.com>
>> 25 May 2014 20:46
>> Hi Markus,
>> 
>> Thank you for your answer - like Augustine I am also eagerly awaiting the
>> WFS release! :)
>> 
>> Thank you for the links to OpenMusic, OMPrisma, etc. I was not aware of
>> these, and thank you for the development of the IRCAM tools, very cool
>> indeed! :)
>> 
>> 
>> 
>> 

Re: [Sursound] And now for something different...

2014-06-23 Thread Markus Noisternig
Dear Richard and Sursounders, 

The AES-X212 "Spatial acoustic data file format" project standardizes a file
format for exchanging HRTF data. The format is designed to be extensible to
represent any space-related data, such as spatial room impulse responses (SRIR)
measured with multichannel microphone and loudspeaker arrays. It builds upon
the spatially oriented format for acoustics (SOFA) and uses the filename
extension ".sofa".

The AES-X212 Task Group Draft was approved by the AES-SC earlier this year. I
am currently working on the final edits, and the standard should be published
within the next few weeks.

An application-programming interface (API) with similar calls for various 
programming languages (Matlab, Octave, C++) and for different computer 
platforms is available online at http://www.sofaconventions.org as well as on 
http://sourceforge.net/projects/sofacoustics. The API provides functionality to 
create, read, and write SOFA files. 
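
To give a flavor of the Matlab/Octave API, a minimal read sketch (function and
field names as I recall them from the API; the file name and the chosen
direction are made up for illustration, so check the API documentation):

  SOFAstart;                            % initialize the SOFA API
  hrtf = SOFAload('subject_003.sofa');  % hypothetical HRTF file
  ir   = hrtf.Data.IR;                  % impulse responses, [M x R x N]
  fs   = hrtf.Data.SamplingRate;
  pos  = hrtf.SourcePosition;           % [M x 3] az/el/r per measurement
  % pick the measurement closest to azimuth 90 deg, elevation 0 deg:
  [~, m] = min((pos(:,1) - 90).^2 + pos(:,2).^2);
  hl = squeeze(ir(m, 1, :));            % left-ear HRIR
  hr = squeeze(ir(m, 2, :));            % right-ear HRIR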

The SOFA website already hosts source materials from different HRTF databases. 
Please note that these data are encoded using a beta version of the AES-X212 
format.

Best Regards, 

Markus


On 23 June 2014, at 19:47, Richard Dobson  wrote:

> I think the AES already has a project to define a file format for HRTFs; when
> I get home I can find the project code.
> 
> Richard Dobson
> 
> 
> Sent from my iPhone
> 
> On 23 Jun 2014, at 17:43, Martin Leese  
> wrote:
> 
>> Bo-Erik Sandholm wrote:
>> 
>>> Is there a way to get a personalized HRTF (or even one near mine) without
>>> spending many hundreds of the coins of your choice or travelling to a
>>> distant destination?
>> 
>> No but, if the Microsoft stuff works out, there
>> might be.
>> 
>>> Is there a "standard format" for HRTFs that can be used in several software
>>> packages, or even converted?
>> 
>> The answer is, again, no.  However, to state
>> the obvious, if HRTFs are going to fly then
>> there needs to be.  Is this a task for the AES
>> and/or the EBU?
>> 
>> To continue stating the obvious, most
>> audio-only listening currently takes place using
>> ear-buds plugged into players or phones.  This
>> doesn't look like it is going to change anytime
>> soon.  Binaural with personalized HRTFs would
>> improve this listening experience.
>> 
>> Regards,
>> Martin
>> -- 
>> Martin J Leese
>> E-mail: martin.leese  stanfordalumni.org
>> Web: http://members.tripod.com/martin_leese/

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] Call for submission : 1st Web Audio Conference (Ircam and Mozilla)

2014-07-03 Thread Markus Noisternig
Dear Colleagues,


Please find below the call for submissions for the 1st Web Audio Conference,
organized by Ircam and Mozilla, Paris, France, January 26-28, 2015.
http://wac.ircam.fr

Feel free to distribute this call to interested colleagues.




1st Web Audio Conference - Ircam and Mozilla, Paris, France, January 26-28, 2015


WAC is the first international conference on web audio technologies and 
applications.

The conference welcomes web R&D developers, audio processing scientists, 
application designers and people involved in web standards.

The conference addresses research, development, design, and standards concerned
with emerging audio-related web technologies such as the Web Audio API, WebRTC,
WebSockets, and JavaScript.


Contributions to the first edition of WAC are encouraged in, but not limited 
to, the following topics:

   - Innovative audio and music based web applications (with social and user 
experience aspects)
   - Client-side audio processing (real-time or non real-time)
   - Audio data and metadata formats and network delivery
   - Server-side audio processing and client access
   - Client-side audio engine and rendering
   - Frameworks for audio manipulation
   - Web Audio API design and implementation
   - Client-side audio visualization
   - Multimedia integration
   - Web standards and use of standards within audio based web projects
   - Hardware, tangible interface and use of Web Audio API


Call for submissions

   - Technical papers - 2 to 8 pages
   - Demo / Poster
   - Web Audio Gig - involving use of the Web Audio API and "audience device
participation"


October 10, 2014: Deadline for submission - 
https://www.easychair.org/conferences/?conf=wac15

Please refer to the WAC website for additional information: http://wac.ircam.fr




___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] Fwd: 1st Web Audio Conference - Ircam and Mozilla - Call for submissions

2014-09-18 Thread Markus Noisternig
Apologies for cross-postings!


Dear all,


Here is a reminder for the 1st Web Audio Conference - http://wac.ircam.fr
* Guidelines for submission: http://wac.ircam.fr/guideline.html
* Deadline for submission: October 10, 2014 -
https://www.easychair.org/conferences/?conf=wac15
Feel free to distribute this call.






Regards,


-- 
Samuel Goldszmidt
Analyse des pratiques musicales
& Centre de Ressources Ircam
IRCAM
1, place I. Stravinsky
F-75004 Paris


  


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] AES69-2015 standard for file exchange - Spatial acoustic data file format

2015-03-15 Thread Markus Noisternig
Dear Sursounders, 

We are pleased to announce the recent publication of the AES69-2015 standard
for file exchange - Spatial acoustic data file format. See also the AES press
release at http://www.aes.org/press/?ID=293

The new AES69-2015 standard defines a file format for exchanging space-related
acoustic data in various forms, including HRTFs as well as directional room
impulse responses (DRIRs). The format is designed to be scalable to match the
available rendering process and sufficiently flexible to include source
materials from different databases.

This project was developed in AES Standards Working Group SC-02-08 and
standardizes the spatially oriented format for acoustics (SOFA), which aims at
storing and transmitting any transfer-function data measured with microphone
arrays and loudspeaker arrays. See http://www.sofaconventions.org/ for further
information and ongoing format discussions.

Open-source application programming interfaces (APIs) for Matlab, Octave, and
C++ are available online at http://sourceforge.net/projects/sofacoustics/
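
As a complement to the read example earlier in this digest, a minimal write
sketch with the Matlab/Octave API (function names as I recall them; dimensions
and the file name are made-up illustrations, so consult the API documentation):

  SOFAstart;
  Obj = SOFAgetConventions('SimpleFreeFieldHRIR'); % empty convention template
  Obj.Data.IR = zeros(360, 2, 256);     % M=360 directions, 2 ears, 256 taps
  Obj.Data.SamplingRate = 48000;
  Obj.SourcePosition = [(0:359)' zeros(360,1) ones(360,1)]; % az/el/r
  Obj = SOFAupdateDimensions(Obj);      % recompute dimension metadata
  SOFAsave('example_hrir.sofa', Obj);   % hypothetical output file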

All the best, 

Markus and Piotr


--
Markus Noisternig
Acoustics and Cognition Research Group
IRCAM, CNRS, Sorbonne Universities, UPMC
Paris, France

Piotr Majdak
Psychoacoustics and Experimental Audiology
Acoustics Research Institute
Austrian Academy of Sciences
Vienna, Austria
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] AES69-2015 standard for file exchange - Spatial acoustic data file format

2015-03-15 Thread Markus Noisternig
Dear Marc, 

Spatial acoustic data is typically described by spatial acoustic transfer
functions. AES69 defines a format for storing spatial acoustic data with a
focus on interchangeability and extensibility, and provides a basis for a
wider, generalized interchange of space-related audio data. It can store, for
example, room impulse responses measured with microphone arrays and
loudspeaker arrays.

AES69/SOFA is thus not limited to HRTF/HRIR data. There is no particular focus
on binaural listening, although it all started from a discussion on sharing
HRTF data between research labs.


What is the difference between AES69 and SOFA?

AES69 builds upon SOFA. It consists of “general specifications” and
“conventions”. Conventions define recommendations on the naming of AES69
attributes, variables, and dimensions for discipline-specific data structures,
that is, particular measurement setups. In other standards a set of
“conventions” is often referred to as a “profile”. New conventions are
discussed on the SOFA website. When a new convention is considered stable
enough, it will be added to the AES69 standard through the normal revision
process.

The current version of AES69 standardizes:
— The frequency-domain representation of head-related transfer functions
(HRTFs);
— The time-domain representation of HRTFs, that is, head-related impulse
responses (HRIRs); and
— The time-domain representation of HRTFs measured in reverberant spaces, that
is, binaural room impulse responses (BRIRs).

We hope that we can soon add the following conventions to the standard:
— The Quadrature Mirror Filter (QMF) domain representation of free-field HRTFs,
that is, the set of QMF parameters; or, even more generally,
— The time-domain representation of spatiotemporal room impulse responses, that
is, directional room impulse responses (DRIRs); and
— The modal representation of the 3D wave field, that is, the spherical
harmonics coefficients of the incoming/outgoing wave field.

In our research lab, we are currently using an alpha version of a SOFA
convention for storing MIMO room impulse responses measured with a 32-channel
spherical loudspeaker array and a 64-channel spherical microphone array. These
data can then be represented in various formats, such as the room impulse
responses for each transmission channel (i.e. the transfer paths between each
loudspeaker and each microphone) or the spherical harmonics coefficients of the
emitted and received wave fields, respectively.
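
To give a sense of scale: such a MIMO measurement comprises 32 x 64 = 2048
transfer paths, so 2-second responses at 48 kHz already amount to roughly
2048 x 96000 samples, or about 1.6 GB in double precision.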

Please note that AES69 is a “spatial acoustic” and not a “spatial audio”
format. You could of course use AES69/SOFA to store multichannel audio data,
but I would recommend more convenient multichannel audio file formats such as
Broadcast WAV.

I hope this answers your questions.

Kind regards, 

Markus




> On 16 March 2015, at 00:16, Marc Lavallée  wrote:
> 
> Hi Mr. Noisternig.
> 
> At first glance, this data file format is about binaural measurement
> and rendering, and not about multi-channel based auditory scene
> representation. I don't have access to the AES library, but the SOFA
> specs are freely available, and I suppose the standard is based on the
> specs.
> 
> Even as a hobbyist, I can understand its usefulness, but why is it
> named "spatial acoustic", without a reference to binaural listening? Is it
> for marketing reasons, based on the predominance of headphone
> listening?
> 
> --
> Marc
> 
> On Sun, 15 Mar 2015 16:13:47 +0100, Markus Noisternig wrote:
>> Dear Sursounders, 
>> 
>> We are pleased to announce the recent publication of the AES69-2015
>> standard for file exchange - Spatial acoustic data file format. See
>> also the AES press release at http://www.aes.org/press/?ID=293

[Sursound] CFP inSONIC2015 - aesthetics of spatial audio in sound, music and sound-art

2015-04-19 Thread Markus Noisternig
Apologies for cross-postings.

+++

inSONIC2015 - aesthetics of spatial audio in sound, music and sound-art 

November 27 - November 28, 2015

Center for Art and Media (ZKM)
University of Media, Art and Design (HfG)
Karlsruhe, Germany

web: www.insonic2015.org

Call for Participation / Call for Works
Submission deadline: June 15, 2015
www.insonic2015.org/submissions



INTRODUCTION

inSONIC2015 - Conference - Symposium - Workshops - Concerts - Performances - 
Installations

inSONIC2015 aims to create a platform for bringing together composers,
researchers, and artists, as well as teachers and students, who are interested
in spatial audio, sound art, and multichannel approaches in media.

The main focus of inSONIC2015 is a critical reflection on aesthetic concepts of 
spatial audio in sound, music and sound-art.

inSONIC2015 is hosted by ZKM and HfG and is part of the Globale 2015.

There will be two conference and symposium days combined with concerts and
installations. The main event is enhanced by a program of workshops,
demonstrations, and tutorials taking place in the preceding days.

inSONIC2015 is organized in the framework of the BWS plus project „Studynetwork
Space-Media-Sound“. BWS plus is part of the Baden-Württemberg-STIPENDIUM for
university students, a program of the Baden-Württemberg Stiftung
(http://www.bwstiftung.de/startseite/).

CALL FOR PARTICIPATION / CALL FOR WORKS
=

inSONIC2015 calls for academic and non-academic contributions and works, which
will be reviewed by panels.
We invite submissions in the following categories:

Papers and Posters
Workshops and Tutorials/Demonstrations
Compositions and Performances
Installations

+CALL FOR PAPERS AND POSTERS
Written contributions should focus on the aesthetics and musical aspects of:

sounding objects in space
motion and gesture in space
application of novel spatialisation techniques for composition, music, and
improvisation
interaction and control of sound and media in space
benefits of spatial audio for electronic and acoustic sound and music
developments in space-simulating techniques such as WFS, HOA, and combined
systems
non-conventional speaker setups and diffusion techniques

Templates for paper and poster proposals will be available on the submission
site.


+CALL FOR COMPOSITIONS AND PERFORMANCES
Contributions in music and sound should have a focus on:

multichannel compositions
music for a spherical speaker setup (Klangdom, 43 speakers)
music for a linear Wave Field Synthesis system (128 speakers)
music for a mixed installation setup (The Morning Line)
mixed media works with audio requirements according to the above systems

Artists of selected works will be invited for oral presentations or round-table
discussions. Please indicate during submission your interest in an oral
presentation of your work.


+CALL FOR INSTALLATIONS

installations for multichannel setups of audio and/or video

SUBMISSION / REGISTRATION
www.insonic2015.org/submissions


IMPORTANT DATES

+SCIENTIFIC PROGRAM
  Papers (4 to 8 pages), Workshops/Demonstrations:

  Submissions due: June 15, 2015
  Review Notification: August 17, 2015
  Camera-ready paper deadline: October 12, 2015


+ARTISTIC PROGRAM
  Compositions, Performances, Installations, Workshops/Demonstrations

  Submissions due: June 15, 2015
  Review Notification: August 17, 2015


PROGRAM

inSONIC2015 will take place in the week from November 23 to November 29 at

ZKM Center for Art and Media // www.zkm.de
HfG University of Media, Art and Design // www.hfg-karlsruhe.de
in Karlsruhe, Germany.

Conference and Concert Days will be November 27 and November 28, 2015.
Dates and details of specific events will be announced on the website
www.insonic2015.org/program

GENERAL CO-CHAIRS
Ludger Brümmer (ZKM-Karlsruhe)
Michael Harenberg (HKB-Bern)
Paul Modler (HfG-Karlsruhe)
Tony Myatt (IoSR-Guildford)
Markus Noisternig (IRCAM-Paris)
Curtis Roads (MAT/CREATE-Santa Barbara)



LOCAL ORGANIZING TEAM
Ludger Brümmer (ZKM-Karlsruhe)
Paul Modler  (HfG-Karlsruhe)
Götz Dipper  (ZKM-Karlsruhe)
Marie-Christine Meier (ZKM-Karlsruhe)
Lorenz Schwarz  (HfG-Karlsruhe)
Jonas Beile (HfG-Karlsruhe)


PARTNERS
MAT // www.mat.ucsb.edu
CREATE // www.create.ucsb.edu
IRCAM // www.ircam.fr
HKB // www.medien-kunst.ch
IoSR // iosr.surrey.ac.uk

City of Karlsruhe // www.karlsruhe.de
Baden-Württemberg Stiftung /

[Sursound] IRCAM Artistic Research Residency Program - Call for proposals

2015-11-18 Thread Markus Noisternig
Apologies for cross-posting

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - -

IRCAM Artistic Research Residency Program 2017

Featuring IRCAM Residency & Joint ZKM / IRCAM Residency Tracks

The artistic research residency program is open to composers, professional
musicians, choreographers, stage directors, sound designers, and students who
wish to carry out their musical and artistic research using IRCAM's or its
partners' facilities and extensive research environment. Upon nomination, each
candidate will be granted a residency in an associated laboratory for a
specific period ranging from three to six months. During this period,
candidates will work in association with a team/project at IRCAM or with a
partner (this year will be marked by the first collaboration with ZKM |
Institute for Music & Acoustics), carry out the musical and scientific
experimental work associated with their proposed project, and participate in
the intellectual life of the institute. At the end of their stay, candidates
will be invited to share the results of their work with the international
musical research community in the form of documentation and public
presentations.

Call for proposals

For the 8th edition of the Musical and Artistic Research Residency, IRCAM
invites composers and artists to submit artistic projects on novel and
unexplored paradigms requiring collaboration with IRCAM research. The program
is open to all artists, regardless of age or nationality, who wish to carry out
experimental research using IRCAM's or its partners' facilities and extensive
research environment. An international panel of experts including researchers,
composers, computer musicians, and artists will evaluate each application. Upon
nomination, each candidate will be granted a residency at IRCAM (for the IRCAM
track) or at a partner institution (for the joint ZKM/IRCAM track) for a
specific period (three or six months), in association with a scientific
team/project. Selected candidates receive the equivalent of 1,200 euros per
month to cover expenses in France.

Applications are accepted only online via the Ulysses Network: 
http://www.ulysses-network.eu/. Applicants can submit their material upon 
creation of an account on the website. The deadline for application is November 
30th 2015 (midnight Paris time).

Apply: http://ulysses.ircam.fr/web/competitions/musicalresidency2017/

An international panel of experts will evaluate all applications, and the final
results will be announced in February 2016.

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Markus Noisternig
Dear All, 

I would like to add a few words to the discussion on the AES69-2015 / SOFA 
format:

AES69 standardizes the SOFA file format for exchanging space-related acoustic
data. The format is designed to be sufficiently flexible to include source
materials from different databases and for different use cases (e.g. HRIRs,
MIMO RIRs, etc.).

AES69 is split into two parts: (1) the main body of the text, which defines
dimensions and general rules for creating so-called conventions; and (2) the
‘conventions’ themselves, in the annex, for a consistent description of
particular setups.

A ‘convention’ defines recommendations on the naming of AES69 attributes,
variables, and dimensions for particular application fields. In other standards
a set of ‘conventions’ is often referred to as a ‘profile’. Conventions are
discussed on the SOFA website (http://www.sofaconventions.org/). As soon as a
new convention is considered consistent and stable, it will be added to the
annex of the AES69 standard through the normal revision process.

In other words, if you want AES69 / SOFA to support ATK, feel free to open the 
discussion on a new set of conventions.

Open source APIs for Matlab, Octave, and C++ are available at 
http://sourceforge.net/projects/sofacoustics/. The API provides functionality 
to create, read, and write AES69 ‘.sofa’ files. You can freely download and use 
the APIs, in whole or in part, for personal or commercial purposes.

Best regards, 

Markus

-- 
Markus Noisternig 
Acoustics and Cognition Research Group 
IRCAM, CNRS, Sorbonne Universities, UPMC 
Paris, France 
 
> On 26 January 2016, at 11:30, Trond Lossius  wrote:
> 
>> On 25 Jan 2016, at 01:37, Marc Lavallée  wrote:
>> 
>>> As anything simpler but functional might be sufficient and even 
>>> preferable in most cases:
>>> 
>>> - Does ATK define an HRTF interface which is sufficiently flexible to
>>> be the base for a real < standard > ?
>> 
>> Not really, but you should ask the maintainers of ATK.
> 
> I don’t think ATK makes sense as a standard. The ATK sets are pretty
> application-specific: for each HRTF it contains 8 impulses so that each of
> the WXYZ channels can be convolved to the left and right ear. These are
> calculated as a reduction from larger sets of HRTF measurements. A general
> HRTF measurement contains much more information (measurements for multiple
> azimuths and elevations). As such, SOFA seems to me an interesting move
> towards standardisation.
> 
> What would be useful though, would be a standard solution for generating 
> impulses for ATK from SOFA data.
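
For readers unfamiliar with the eight-impulse structure Trond describes: by
linearity, the binaural decode is just a sum of per-channel convolutions. A
minimal Matlab/Octave sketch (the file name and placeholder kernels are made
up; the real ATK kernels and loaders differ):

  % Apply an ATK-style binaural decoder to B-format (W, X, Y, Z):
  % one FIR per channel and ear, eight impulses in total.
  [b, fs] = audioread('bformat_wxyz.wav');  % N x 4 matrix: W X Y Z
  hL = randn(256, 4); hR = randn(256, 4);   % placeholder decoding kernels
  left = zeros(size(b,1) + 255, 1); right = left;
  for c = 1:4
      left  = left  + conv(b(:,c), hL(:,c));
      right = right + conv(b(:,c), hR(:,c));
  end
  out = [left right] / max(abs([left; right]));  % normalize to avoid clipping

Generating hL/hR from a SOFA file, as Trond suggests, would amount to sampling
the HRIR set at a chosen set of virtual loudspeaker directions and summing the
HRIRs weighted by the corresponding Ambisonic decoding gains.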



___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] Job opening at IRCAM, R&D position in object based 3D audio

2016-09-06 Thread Markus Noisternig
Dear All, 

A job opening is available in our research group.
Please pass this on to anyone you think would be interested. 
And please forgive the mass mailing.

Kind regards, 

Markus


- - - - - -

Researcher/developer position (W/M)
European research project ORPHEUS

Fixed-term contract of 12 months from October 1st, 2016

Introduction to IRCAM

IRCAM is a non-profit organization associated with the Centre Pompidou (Centre
national d’art et de culture Georges Pompidou). Its missions comprise research,
production, and education related to contemporary music and its relation to
science and technology. The R&D department of IRCAM, CNRS, and Pierre et Marie
Curie University (UPMC) are associated in the framework of the STMS joint
research lab (Sciences et technologies de la musique et du son). Its
specialized teams conduct research and development in the areas of acoustics,
sound signal processing, interaction, computer music, and musicology. IRCAM is
located in the centre of Paris near the Centre Pompidou, at 1, place Igor
Stravinsky, 75004 Paris.


Introduction to the Orpheus Project

The H2020 European program ORPHEUS aims at developing, experimenting with, and
assessing an end-to-end object-based media chain for audio content and radio
broadcast. Object-based media is an innovative approach for creating and
deploying interactive, personalised, scalable, and immersive content by
representing it as a set of individual assets together with metadata describing
their content (e.g. number and format of individual audio sources or tracks,
language, loudness…) and how they should be rendered (e.g. position or moving
path). In contrast with conventional channel-based formats (stereo, 5.1,
22.2…), object-based formats can be delivered to any kind of rendering device,
including binaural over headphones or advanced multichannel and immersive
audio systems. This new end-to-end media chain requires innovative tools for
capturing, mixing, monitoring, storing, archiving, playing out, distributing,
and rendering object-based audio. In the framework of the ORPHEUS project, the
production and exchange of content will follow the Audio Definition Model (ADM)
published by the European Broadcasting Union (EBU).

Further information can be found on the ORPHEUS website: 
http://orpheus-audio.eu/


Role of Ircam in the project

Within the ORPHEUS project, IRCAM will focus on the implications of
object-based formats such as ADM for the control of reverberation effects,
which are crucial for monitoring spatial parameters such as the distance and
apparent width of the sound sources, envelopment, etc. The aim is to
investigate the benefits and possible limitations of the ADM format for the
production and rendering of reverberation effects.


Position description

For this project, IRCAM is looking for a researcher/developer with expertise in
the domain of 3D audio. The selected candidate will design, implement, and
assess different scenarios illustrating the production, delivery, and rendering
of various audio contents that make use of reverberation effects and are
encoded in the ADM format. Several approaches will be compared, based either on
parametric reverberation (FDN) or on convolution-based reverberation using
directional room impulse responses (DRIRs) measured, for instance, with
spherical microphone arrays (SMAs).
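
For context, a feedback delay network is a bank of delay lines coupled through
a (scaled) unitary feedback matrix. A minimal Matlab/Octave sketch of the idea
(parameter values are illustrative; this is not IRCAM's implementation):

  % 4-line FDN: mutually prime delays, Hadamard feedback, impulse response.
  fs = 48000;
  d  = [1423 1871 2243 2719];     % delay lengths in samples (mutually prime)
  g  = 0.85;                      % global feedback gain (< 1 for stability)
  A  = g * hadamard(4) / 2;       % scaled 4x4 unitary feedback matrix
  x  = [1; zeros(fs-1, 1)];       % unit impulse in
  y  = zeros(fs, 1);
  buf = zeros(max(d), 4); w = 1;  % circular delay buffers, write index
  for n = 1:fs
      r = arrayfun(@(k) buf(mod(w - d(k), max(d)) + 1, k), 1:4); % delayed taps
      y(n) = sum(r);              % output: sum of delay-line outputs
      buf(w, :) = (A * r' + x(n))';  % feedback mix plus input
      w = mod(w, max(d)) + 1;
  end

The convolution-based alternative replaces this parametric structure with
measured DRIRs, trading control flexibility for physical accuracy.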


Required experiences and skills

	• High skill in audio signal processing.
	• The candidate should preferably hold a PhD in this field or
demonstrate a strong development background in this domain.
	• High skill in sound engineering and sound spatialization.
	• High skill in C/C++ programming.
	• Good skills in Matlab programming.
	• Knowledge of the Android and/or iOS environments.


Salary

According to background and experience.


To Apply

Please send an application letter with the reference 201607ORPHEUS together 
with your resume and any suitable information addressing the above issues 
preferably by email to: warusfel_at_ircam_dot_fr  before September 16, 2016. 





___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] distance perception in virtual environments

2011-04-17 Thread Markus Noisternig
Hi, 

Gavin Kearney et al. presented their work on "Depth perception in interactive
virtual acoustic environments using higher order ambisonic soundfields" at the
2010 Ambisonics Symposium in Paris; the article is available online at
http://ambisonics10.ircam.fr/drupal/?q=proceedings/o6

Best, 
Markus

On 17 April 2011, at 19:38, Dave Hunt wrote:

> Hi,
> 
>> Date: Sun, 17 Apr 2011 09:28:28 +0800
>> From: Junfeng Li 
>> Subject: [Sursound] distance perception in virtual environments
>> 
>> Dear list,
>> 
>> I am now wondering how to subjectively evaluate distance perception in
>> virtual environments which might be synthesized using WFS or HOA (high-order
>> ambisonics). In my experiments, the sounds were synthesized at different
>> distances and presented to listeners for distance discrimination. However,
>> the listener cannot easily perceive the difference in distance between these
>> sounds.
>> 
>> Anyone can share some ideas or experiences in distance perception
>> experiments? or share some references on this issue?
>> 
>> Thank you so much.
>> 
>> Best regards,
>> Junfeng
> 
> Change in amplitude with distance should be perceptible fairly easily, but on 
> its own would just sound the same but quieter, or louder. High frequency 
> absorption by the air is only really perceptible when the distance is fairly 
> large, though this effect could be exaggerated for artistic purposes. The 
> lateness of arrival of sound from distant objects is not directly perceptible 
> unless there is something visible (e.g. lightning and thunder).
> 
> Reverberation definitely gives perceptible distance effects. More distant 
> sources are more reverberant. The amplitude of the direct signal should 
> decrease with distance (inverse square law, or some similar law), while the 
> amplitude of the reflected and reverberant signal would remain fairly 
> constant or decrease less rapidly with distance than that of the direct 
> signal. It is the ratio of direct to reverberant sound that is important.
> 
> John Chowning's 1971 paper "The Simulation of Moving Sound Sources" is a good 
> early consideration of how to synthesise distance.
> 
> Of course the reported result will depend on the listener, who may not be 
> used to analysing sound for these effects.
> 
> Ciao,
> 
> Dave
> 
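
To make Dave's direct-to-reverberant point concrete, here is a minimal
Matlab/Octave sketch in the spirit of Chowning's approach (all values
illustrative):

  % Distance cue: direct path scaled by 1/r, reverberant return held
  % roughly constant, so the direct/reverb ratio falls with distance.
  fs = 48000;
  x  = randn(fs, 1);                  % 1 s noise burst as test source
  r  = 4;                             % source distance in metres
  direct = x / max(r, 1);             % inverse-distance attenuation
  t = (0:fs-1)'/fs;
  rev_ir = randn(fs, 1) .* exp(-t / 0.5);  % crude exponential-decay reverb
  rev = conv(x, rev_ir) * 0.01;       % reverberant return, level constant in r
  y = direct + rev(1:fs);             % larger r -> more reverberant mix

Auditioning y for a few values of r gives a simple way to hear the effect of
the changing direct-to-reverberant ratio.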

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] A submittal for a patent on Ambisonics?

2018-01-24 Thread Markus Noisternig (IRCAM)
Dear All, 

Here are some references:

Brungart and Rabinowitz [1] showed that HRTFs vary significantly for sources in
the proximity region (i.e. at distances of less than 1 m from the head).
Lentz et al. [2] perceptually evaluated HRTFs measured at different distances
from the head, showing the limits of noticeable differences between near-field
and far-field HRTFs.
Romblom and Cook [3] proposed near-field compensation filters.
Duraiswami et al. [4], Zhang et al. [5], and Pollow et al. [6] compute HRTFs
for arbitrary field points using a spherical harmonics decomposition (as an
extension of the work of Evans et al. [7]).
Duda and Martens [8] evaluated simulation results on a spherical head model.
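
As a baseline for the near-field discussion, the classic far-field
spherical-head (Woodworth) ITD approximation fits in a few lines of
Matlab/Octave (values illustrative; near-field ITDs grow beyond this as the
source approaches, which is part of what the work cited above quantifies):

  a   = 0.0875;                  % head radius in metres (typical value)
  c   = 343;                     % speed of sound (m/s)
  az  = (0:5:90) * pi/180;       % source azimuth, 0 = front
  itd = (a/c) * (az + sin(az));  % seconds; about 0.66 ms at 90 degrees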

Have fun reading!

Very best, 

Markus

[1] D. S. Brungart, W. M. Rabinowitz: Auditory localization of nearby sources.
Head-related transfer functions. J. Acoust. Soc. Am. 106 (1999) 1465–1479.
[2] T. Lentz, I. Assenmacher, M. Vorländer, T. Kuhlen: Precise near-to-head
acoustics with binaural synthesis. Journal of Virtual Reality and Broadcasting
3 (2006).
[3] D. Romblom, B. Cook: Near-field compensation for HRTF processing. 125th
Conv. Audio Eng. Soc., San Francisco, USA, 2008, no. 7611.
[4] R. Duraiswami, D. N. Zotkin, N. A. Gumerov: Interpolation and range
extrapolation of HRTFs. IEEE ICASSP, Montreal, Canada, 2004, 45–48.
[5] W. Zhang, T. D. Abhayapala, R. A. Kennedy, R. Duraiswami: Modal expansion
of HRTFs: Continuous representation in frequency-range-angle. ICASSP, Los
Alamitos, USA: IEEE Computer Society, 2009, 285–288.
[6] M. Pollow, K.-V. Nguyen, O. Warusfel, T. Carpentier, M. Müller-Trapet,
M. Vorländer, M. Noisternig: Calculation of head-related transfer functions
for arbitrary field points using spherical harmonics decomposition. Acta Acust
United Ac 98 (2012) 72–82. doi:10.3813/AAA.918493
[7] M. J. Evans, J. A. S. Angus, A. I. Tew: Analyzing head-related transfer
function measurements using surface spherical harmonics. J. Acoust. Soc. Am.
104 (1998) 2400–2411.
[8] R. O. Duda, W. L. Martens: Range dependence of the response of a spherical
head model. J. Acoust. Soc. Am. 104 (1998) 3048–3058.


> On 24 Jan 2018, at 15:01, John Merchant  wrote:
> 
> Tom Smurdon and Peter Stirling of Oculus presented research on near-field 
> HRTF for VR at last fall's OC4. The video of that talk is available here:
> https://www.youtube.com/watch?v=l7mhXRB9PA4
> 
> 
> From: Sursound  on behalf of 
> st...@mail.telepac.pt 
> Sent: Tuesday, January 23, 2018 8:12 PM
> To: Surround Sound discussion group
> Subject: Re: [Sursound] A submittal for a patent on Ambisonics?
> 
> Citando Augustine Leudar :
> 
>> Hi Jack,
>> 
>> Aside from ILDs, ITDs, I also wondered if the pinna was able to distinguish
>> 
>> very close sound sources due to the fact the wavefront would be much more
>> 
>> curved almost spherical to the degree that it would be different pressure
>> 
>> present at different folds of the pinna (ie  very close up  sound slike a
>> 
>> mosquito) . I dont think theres been much done on that...
> 
> Hi Augustine,
> 
> I think "there has been done quite a lot on that"... 😉
> 
> (Reproduction of near-field audio sources)
> 
> Besides spherical waves (and their consequences), we should not
> overlook that any high-frequency-emitting (annoying) mosquito next to
> your left ear would be heard much softer at your right ear, the head
> shadow being even more relevant at close distances.
> 
> BR
> 
> Stefan
> 
> P.S.: It is important to know about the "depth" of a mosquito audio
> object relative to your head, both in VR and in real life...
> 
>> On 23 January 2018 at 11:58, jack reynolds 
>> 
>> wrote:
>> 
>>> It looks like a method for binaural rendering with multiple distance HRTFs.
>>> 
>>> Ambisonics could be one of the inputs, but it seems to be aimed more at
>>> object based virtual reality, where the listener is more likely to come
>>> very close to an audio source.
>>> 
>>> Most HRTFs are currently measured at 1m distance, so any objects closer
>>> than 1m are not currently rendered correctly.
>>> 
>>> Far field HRTFs are closer to plane waves, whereas close up audio objects
>>> emit more spherical waves, creating greater differences in interaural time
>>> difference (ITD).
>>> 
>>> Jack
>>> 
>>> On 23 January 2018 at 11:18, Bearcat Şándor  wrote:
>>> 
>>> I don't know a lot about patent law, but is this an attempt to tie up our
>>> beloved Ambisonics?
>>> 
>>> http://www.freepatentsonline.com/y2017/0366912.html

[Sursound] 2nd annual Spatial Audio Summer Seminar @ EMPAC in partnership with IRCAM and CCRMA

2018-04-26 Thread Markus Noisternig (IRCAM)
Dear Sursounders, 

We are happy to announce this year’s Spatial Audio Seminar at EMPAC / RPI.
Using the full capabilities of EMPAC’s sonic infrastructure, including over 700
channels of audio, EMPAC’s Wave Field Synthesis array, and a 64-channel HOA
system, the seminar will consist of lectures, roundtables, listening sessions,
workshops, and performances. See below for further details.

We hope to see you at the seminar.

Best regards, 

Markus


= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
2nd annual Spatial Audio Summer Seminar at EMPAC
in partnership with IRCAM and CCRMA 
July 9-22, 2018, Troy, NY
http://empac.rpi.edu/events/2018/spring/spatial-audio-summer-seminar
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

The Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC) at
Rensselaer Polytechnic Institute is pleased to announce the second annual
Spatial Audio Summer Seminar, July 9-22. Co-presented by the Paris-based
Institut de Recherche et Coordination Acoustique/Musique (IRCAM) and the Center
for Computer Research in Music and Acoustics (CCRMA) at Stanford University,
the intensive seminar is a rare opportunity for musicians, composers,
programmers, and audio engineers to study the fundamentals of multichannel
spatial audio in pristine acoustic environments. Participants will experience
multiple large spatial audio systems, including Wave Field Synthesis,
High-Order Ambisonics, and binaural audio.

Researcher Markus Noisternig (IRCAM), professor Chris Chafe (Stanford), and 
other guests will join EMPAC’s audio staff in dissecting technical and artistic 
concerns in the creation and presentation of high-count multi-channel sound 
projection. Using the full capabilities of EMPAC’s sonic infrastructure, 
including over 700 channels of audio and EMPAC’s Wave Field Synthesis array, 
the seminar will consist of lectures, roundtables, listening sessions, 
workshops, and performances.

The first week of the seminar is an open forum for participants of all 
backgrounds and experience levels to dive into general concepts, workflows, and 
control mechanisms related to spatial audio. Topics will include introductory, 
intermediate, and advanced patching for IRCAM’s SPAT software, in-depth 
discussions of Wave Field Synthesis, Ambisonics, Binaural and Transaural audio, 
3D audio recording and mixing, and more.

The second week of the seminar gives participants focused time and hands-on 
access to these systems in order to develop new creative work. This portion of 
the workshop will be reserved for only a handful of participants who submit 
project proposals in advance. Submissions will be reviewed by workshop leaders 
and accepted based on the degree to which they utilize the capabilities of 
these spatial audio systems and the potential to realize the proposed project 
within the given time frame. 

http://empac.rpi.edu/events/2018/spring/spatial-audio-summer-seminar/spatial-audio-seminar

The workshop will take place throughout the EMPAC building, granting 
participants access to a range of sophisticated audio systems in a variety of 
acoustic settings. Three venues will be outfitted with high-channel-count audio 
arrays, including: 
> a 1,200-seat Concert Hall with a 64-channel Ambisonic array; 
> a large absorptive studio (66’x51’x33’; 315m2, 12m high) with a 186-channel 
> Wave Field Synthesis Array and 25-channel Ambisonic array; 
> a large diffusive studio (44’x55’x18’; 230m2, 9m high) with a 186-channel 
> Wave Field Synthesis Array.

Online registration will open on March 1. 

http://empac.rpi.edu/events/2018/spring/spatial-audio-summer-seminar/wave-field-synthesis-workshop

For more information, please visit empac.rpi.edu. For press inquiries, please 
contact Josh Potter at pott...@rpi.edu. 

The Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC) at 
Rensselaer Polytechnic Institute is where the arts, sciences, and technology 
interact with and influence each other by using the same facilities and 
technologies, and by breathing the same air. EMPAC hosts artists and 
researchers to produce and present new work in a building designed with 
sophisticated architectural and technical infrastructure. Four exceptional 
venues and studios enable audiences, artists, and researchers to inquire, 
experiment, develop, and experience the ever-changing relationship between 
ourselves, technology, and the worlds we create around us. EMPAC is an icon of 
the New Polytechnic, a new paradigm for cross-disciplinary research and 
learning at Rensselaer, the nation’s oldest technological research university. 

--

[Sursound] Two open (post-doc) researcher positions at IRCAM / STMS-Lab

2023-10-13 Thread Markus Noisternig (IRCAM)
Hello list, 

As part of the CONTINUUM project, funded by the France 2030 programme, IRCAM is
currently looking for two (post-doctoral) researchers to join on 18-month
fixed-term contracts.

Application deadline is October 30th.

Researcher in 3D audio and audio signal processing (reference
CONTINUUM/PD-REV)
https://www.ircam.fr/job-offer/chargee-de-recherchedeveloppement-specialiste-en-audio-3d-et-traitement-du-signal-audio-reference-continuumpd-rev

Researcher in 3D audio signal processing and room acoustics (reference
CONTINUUM/PD-AA)
https://www.ircam.fr/job-offer/chargee-de-recherchedeveloppement-specialiste-en-traitement-du-signal-audio-3d-et-en-acoustique-des-salles-reference-continuumpd-aa

Best regards, 

Markus

—
Markus Noisternig
Head of Music Research
Researcher in Acoustics and Audio Signal Processing
STMS Lab IRCAM - CNRS - Sorbonne University - Ministry of Culture
1, place Igor Stravinsky, 75004 Paris, France
+ 33 1 44 78 - 16 01 | https://www.stms-lab.fr | https://www.ircam.fr

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.