Thanks for the responses. I think I'll try the simplest thing first: a single sound source that rotates with the head (about the Z axis) - this can be done with a rotation matrix. I've already written this myself using streaming in OpenAL and the binaural decoder from HoaLibrary. Next I want to add more sources, and once everything works I'll move on to the next step - distance filters (a kind of moving source).
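Roughly what I have in mind for the rotation (a minimal sketch, assuming first-order FuMa-style W/X/Y/Z channels; channel ordering, normalization and sign conventions differ between encoders, and I'm not sure which ones HoaLibrary expects, so treat this as illustrative):

#include <cmath>

struct BFormatSample {
    float w, x, y, z;   // omni, front-back, left-right, up-down
};

// Rotate the sound field by 'yaw' radians about the Z (up) axis.
// For head tracking, pass the negated head yaw so the scene stays
// fixed in the world while the head turns.
inline BFormatSample rotateZ(const BFormatSample& in, float yaw)
{
    const float c = std::cos(yaw);
    const float s = std::sin(yaw);
    return {
        in.w,                  // W is omnidirectional: unchanged
        c * in.x - s * in.y,   // X' = X*cos(yaw) - Y*sin(yaw)
        s * in.x + c * in.y,   // Y' = X*sin(yaw) + Y*cos(yaw)
        in.z                   // Z lies on the rotation axis: unchanged
    };
}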
There is a lot to do with loading, streaming, decoding etc. of audio files, and I bet someone has already done that. Searching the web for a C/C++ library I came across, for example, SSR (http://spatialaudio.net/ssr/) with APF (http://audioprocessingframework.github.io/), as well as some Matlab ones, but I still don't know how SSR works or whether it fits my needs (perhaps somebody could help me figure this out). It would be great to start with an existing library instead of writing everything myself, so I'd be grateful for any pointers to links.

On 13 March 2015 at 12:19, Jörn Nettingsmeier <netti...@stackingdwarves.net> wrote:
> On 03/09/2015 12:12 PM, Tobix wrote:
>
> > I've read that ambisonics is good for a listener in the center, right?
> > This means that if the player can move, the sound effect will be
> > distorted?
>
> If you're using pre-rendered Ambisonics files, the listener will never
> move from the sweet spot; translations are impossible. What you do is
> track the rotations of the listener's head and rotate the rendering
> accordingly.
>
> If you want to do translations, you will have to render the scene in
> realtime. It's very much like 3D cinema: you can produce fixed content
> for a pre-defined viewpoint with a pair of spaced cams, but if you want
> to allow the viewer to move, you need to model the whole scene.
>
> > The way that openal handles source positions and the listener is good
> > for me, but could it be reproduced with ambisonics?
>
> Yes. Ambisonics can just as well be used as a realtime rendering format.
> But there is a tradeoff: if the number of discrete sources is small
> compared to the number of virtual speakers, direct rendering is cheaper.
>
> Consider the case of a virtual 3rd-order 3D rig; let's assume an
> icosahedron. The cost of decoding the 16ch B-format to 20 speaker feeds
> is negligible, but you will have to convolve those with 20 pairs of
> HRTFs, tracked in realtime.
>
> This rendering effort will be constant, regardless of the number of
> sound sources in your scene. So if it's just a few, it's easier to just
> convolve each source with the two HRTFs. At 20 sources you're
> break-even; above that, 3rd order Ambi is cheaper.
>
> The situation changes a bit if you consider the diffuse field for
> reverb/ambience: it can be mixed into the Ambi signal at no extra cost,
> but if modeled with individual sources, it's expensive, because you
> need quite a few.
>
> Best,
>
> Jörn
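To make Jörn's break-even point concrete, here is a rough per-block convolution count (just a sketch: it assumes, as in his example, that the HRTF convolutions dominate and the decode matrix is essentially free):

#include <cstdio>

int main()
{
    const int speakers = 20;                   // virtual icosahedron rig
    const int ambiConvs = speakers * 2;        // 20 HRTF pairs = 40 FIRs, constant

    for (int sources = 1; sources <= 25; ++sources) {
        const int directConvs = sources * 2;   // one HRTF pair per source
        std::printf("%2d sources: direct %2d vs ambi %2d -> %s\n",
                    sources, directConvs, ambiConvs,
                    directConvs < ambiConvs ? "direct cheaper" :
                    directConvs > ambiConvs ? "ambi cheaper" : "break-even");
    }
    return 0;
}

The Ambisonic path always costs 40 convolutions with this rig, while the direct path costs 2 per source, so the crossover falls at exactly 20 sources, as stated above.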