> > Dear Jörn,
> > 
> > did you consider using Wave Field Synthesis? WFS has no problems with 
> > irregular setups, as long as no large gaps (due to doors etc.) are in 
> > the array. And it fully supports model-based rendering, which means 
> > that you can easily generate loudspeaker feeds for virtual sources at 
> > arbitrary positions. You might want to give the SoundScape Renderer 
> > (SSR) a try http://www.tu-berlin.de/?ssr in such a setting. It 
> > supports real-time model-based rendering with WFS, amplitude panning, 
> > Ambisonics, and vector-base amplitude panning (VBAP) (...and binaural 
> > synthesis/BRS).
> 
> i haven't quite understood how very sparse WFS systems are supposed to 
> work (IOSONO presented one with 1 m tweeter spacing at the 
> Tonmeistertagung), so i didn't consider it for this application.
> 
> will a circle of eight systems produce anything meaningful when driven 
> via WFS? and if so, why would anyone still call it WFS?

For such a setup, VBAP might be a good choice. It is debatable, though, which 
techniques the term WFS properly covers...
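For an eight-speaker circle, pairwise 2-D VBAP comes down to picking the adjacent speaker pair that encloses the source direction and solving a 2x2 linear system for the two gains. A minimal sketch of that idea (the function name and the speaker layout below are illustrative, not taken from the SSR):

```python
import math

def vbap_2d(source_az_deg, speaker_az_deg):
    """Pairwise 2-D VBAP (Pulkki): find the adjacent loudspeaker pair
    enclosing the source azimuth and solve L * g = p for the gains,
    where the columns of L are the speakers' unit direction vectors
    and p is the source's unit direction vector."""
    azs = sorted(a % 360.0 for a in speaker_az_deg)
    n = len(azs)
    src = source_az_deg % 360.0

    # find the pair (a1, a2) whose arc contains the source direction
    for i in range(n):
        a1, a2 = azs[i], azs[(i + 1) % n]
        span = (a2 - a1) % 360.0
        if (src - a1) % 360.0 <= span:
            break

    def vec(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    l1, l2 = vec(a1), vec(a2)
    p = vec(src)

    # invert the 2x2 matrix [l1 l2] to get the raw gains
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det

    # power-normalize so that g1^2 + g2^2 = 1
    norm = math.hypot(g1, g2)
    return {a1: g1 / norm, a2: g2 / norm}

# eight speakers in a regular circle; a source halfway between two
# speakers gets equal gains of 1/sqrt(2) on that pair
ring = [0, 45, 90, 135, 180, 225, 270, 315]
print(vbap_2d(22.5, ring))
```

With a regular ring like this, VBAP degrades gracefully as the layout becomes irregular, which is why it suits sparse setups where WFS assumptions break down.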

> i need to get a windows box anyway, so i'll have a look at SSR asap...
> sounds like it allows for easy A/B comparison between the different 
> rendering methods?

It currently runs only under Linux. However, we are working on a Mac OS (and 
perhaps also a Windows) port, which should be quite straightforward thanks to 
JACK. Yes, you can do an A/B comparison of methods. However, the rendering 
method is chosen at startup, so you would have to restart the SSR for such a 
comparison, or run two instances in parallel and switch the JACK connections.


greetings,
Sascha
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound