Thanks Sampo and thanks Tim for the link, will investigate.

The project I'm trying to get up and running will involve taking impulse
responses with a soundfield mic in a variety of outdoor environments.
While it was never absolutely necessary to simulate moving sound sources
with convolution/decoding (and I plan to take impulse responses at
several locations for each chosen environment and mic position to allow
the discrete projection of a number of sources), I was curious to know
if it might be possible to simulate moving sources using a convolution
approach.

Dave mentioned modelling of IRs as an alternative to interpolation - but
he noted this would be based not only on the impulse responses taken,
but also on the architecture. These are wilderness environments, so I
guess modelling won't be possible, and I imagine the number of IRs
needed would be high in any case.

If I do implement moving virtual sources, what I think I'd do would be
to make 4 nearfield IRs at north, south, east and west locations, for
example, plus an additional 4 IRs at a greater distance. Spatialisation
of virtual sources would be done via ambisonic encoding with something
like "iem_ambi" in Pure Data. In addition, a single IR would be chosen
on the basis of its general proximity to the movement of the object,
convolved with the sound (e.g. using jconvolver) and mixed with the
ambi-encoded signal to give it some ambience. Without ruling out moving
sources, however, and if interpolation and modelling of IRs are out of
the question, perhaps the nearest IR could be selected as the virtual
source moves from one region to another, and the convolved signals
cross-faded to provide a smooth transition.

Perhaps this is the most practical approach?
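For what it's worth, here is a rough offline sketch in Python of the
region-switching idea - pick the IR captured nearest the source
position, and equal-power crossfade between two convolved signals as the
source crosses from one region to the next. The IR names, positions and
the linear fade ramp are placeholders of my own invention; in practice
the convolutions would run in real time in jconvolver and only the mix
gains would change:

```python
import numpy as np

# Hypothetical IR bank: 4 nearfield + 4 distant capture points, in
# metres relative to the soundfield mic. Names/layout are just examples.
IR_POSITIONS = {
    "near_N": (0.0, 2.0),   "near_S": (0.0, -2.0),
    "near_E": (2.0, 0.0),   "near_W": (-2.0, 0.0),
    "far_N":  (0.0, 20.0),  "far_S":  (0.0, -20.0),
    "far_E":  (20.0, 0.0),  "far_W":  (-20.0, 0.0),
}

def nearest_ir(source_pos):
    """Return the name of the IR captured closest to the virtual source."""
    sx, sy = source_pos
    return min(IR_POSITIONS,
               key=lambda k: np.hypot(sx - IR_POSITIONS[k][0],
                                      sy - IR_POSITIONS[k][1]))

def convolve(sig, ir):
    """Full linear convolution via FFT (jconvolver does this blockwise)."""
    n = len(sig) + len(ir) - 1
    return np.fft.irfft(np.fft.rfft(sig, n) * np.fft.rfft(ir, n), n)

def crossfade(sig, ir_a, ir_b):
    """Equal-power crossfade from ir_a's wet signal to ir_b's over one
    block, as the source moves between the two IR regions."""
    wet_a = convolve(sig, ir_a)
    wet_b = convolve(sig, ir_b)
    f = np.linspace(0.0, 1.0, len(wet_a))   # 0 -> 1 across the block
    # cos/sin gains keep the summed power roughly constant mid-fade.
    return np.cos(f * np.pi / 2) * wet_a + np.sin(f * np.pi / 2) * wet_b
```

The equal-power law matters here: a plain linear crossfade of two
largely uncorrelated reverberant signals would dip audibly in level at
the midpoint.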

Cheers,

Iain

 


On Fri, 2013-05-03 at 19:23 +0300, Sampo Syreeni wrote:
> On 2013-05-03, Iain Mott wrote:
> 
> > Theoretically then, for a given listening position (for which we 
> > position a mic in order make impulse responses), if we make impulse 
> > responses for every possible location in the space, it would be 
> > possible to spatialise a sound with both angular and distance cues, 
> > through a process of convolution with the various impulse responses 
> > and subsequent ambisonic decoding?
> 
> Yes. The main difference to binaural work is that the responses are 
> considerably longer and capture all of the room acoustical cues as well. 
> That means they are even less safe to sum to each other than HRTFs, and 
> harder to decompose, so interpolating between them is likely out of the 
> question. Also, at progressively higher orders you start to capture some 
> spatial detail as well, which would eventually lead to proper auditory 
> parallax when the channel count goes into the hundreds -- a potential 
> further cue. Unfortunately we're nowhere near anything like that at the 
> moment. Sources inside the rig are also a bit problematic because their 
> nearfield has considerable amounts of higher order energy, which reject 
> with variable (and often unquantified) success.


_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
