On 03/09/2015 12:12 PM, Tobix wrote:
> I've read that Ambisonics is best for a listener in the center, right? Does this mean that if the player can move, the sound will be distorted?
If you're using pre-rendered Ambisonics files, the listener never moves from the sweet spot: translations are impossible. What you do is track the rotations of the listener's head and rotate the rendering accordingly.
If you want to do translations, you will have to render the scene in realtime. It's very much like 3D cinema: you can produce fixed content for a pre-defined viewpoint with a pair of spaced cameras, but if you want to allow the viewer to move, you need to model the whole scene.
> The way that OpenAL handles source and listener positions works well for me, but could it be reproduced with Ambisonics?
Yes. Ambisonics can just as well be used as a realtime rendering format. But there is a tradeoff: if the number of discrete sources is small compared to the number of virtual speakers, direct rendering is cheaper.
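To make the realtime use concrete, here is a minimal sketch of encoding a mono source into first-order B-format and applying a head-tracking yaw rotation. This is illustrative only: the channel weighting shown is the FuMa-style W gain, and sign conventions for rotations vary between toolchains, so check against whatever decoder you actually use. The same encode-then-rotate idea extends to 3rd order, just with more channels.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    azimuth/elevation in radians; FuMa-style -3 dB weighting on W.
    """
    w = mono * (1.0 / np.sqrt(2.0))                    # omnidirectional
    x = mono * np.cos(azimuth) * np.cos(elevation)     # front/back
    y = mono * np.sin(azimuth) * np.cos(elevation)     # left/right
    z = mono * np.sin(elevation)                       # up/down
    return w, x, y, z

def rotate_yaw(x, y, z, yaw):
    """Rotate the sound field about the vertical axis.

    For head tracking you rotate the field opposite to the head;
    the sign convention here is one common choice, not the only one.
    """
    xr = np.cos(yaw) * x + np.sin(yaw) * y
    yr = -np.sin(yaw) * x + np.cos(yaw) * y
    return xr, yr, z
```

Note that W and Z are untouched by yaw rotation, which is exactly why head tracking is so cheap with pre-rendered Ambisonics material: it is a small matrix applied per sample, not a re-render of the scene.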
Consider the case of a virtual 3rd-order 3D rig; let's assume an icosahedron. The cost of decoding the 16ch B-format to 20 speaker feeds is negligible, but you will have to convolve those with 20 pairs of HRTFs, tracked in realtime.
This rendering effort is constant, regardless of the number of sound sources in your scene. So if there are just a few, it's easier to convolve each source directly with the two HRTFs. At 20 sources you break even; above that, 3rd-order Ambisonics is cheaper.
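The break-even argument above can be sketched by simply counting HRTF convolution pairs (the encode/decode matrix multiplies are treated as negligible, as in the reasoning above; the function name and the default of 20 virtual speakers are mine, taken from the icosahedron example):

```python
def hrtf_convolution_pairs(n_sources, n_virtual_speakers=20):
    """Count HRTF convolution pairs for the two rendering strategies.

    Direct rendering: one left/right HRTF pair per source.
    Ambisonic rendering: one pair per virtual speaker, a fixed cost
    independent of how many sources are in the scene.
    """
    direct = n_sources
    ambisonic = n_virtual_speakers
    return direct, ambisonic

for n in (5, 20, 40):
    d, a = hrtf_convolution_pairs(n)
    cheaper = "direct" if d < a else ("tie" if d == a else "ambisonic")
    print(f"{n} sources: direct={d}, ambisonic={a} -> {cheaper}")
```

With 5 sources direct rendering wins, at 20 the two strategies tie, and at 40 the fixed Ambisonics cost is half the direct cost.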
The situation changes a bit if you consider the diffuse field for reverb/ambience: it can be mixed into the Ambisonics signal at no extra cost, whereas modelling it with individual sources is expensive, because you need quite a few of them.
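As a rough illustration of "no extra cost": a mono reverb return can simply be summed into the B-format bus before decoding, so it rides through the same fixed set of HRTF convolutions. The crudest approximation, shown here, adds it to W only, making it directionless; a real implementation would feed decorrelated copies to the other channels for a more convincing diffuse field. The function and gain value are hypothetical.

```python
import numpy as np

def add_diffuse_reverb(bformat, reverb, gain=0.5):
    """Mix a mono reverb return into a first-order B-format bus.

    bformat: array of shape (4, n) holding W, X, Y, Z.
    Adding to W only is a crude directionless approximation.
    """
    out = bformat.copy()
    out[0] += gain * reverb   # W channel: omnidirectional component
    return out
```

The point is that this is one vector add, not per-source convolutions, which is why diffuse content is so much cheaper inside the Ambisonics bus than as a cloud of individual sources.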
Best,
Jörn
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound