For Sursound subscribers, the idea of deriving virtual microphones and binaural 
recordings from Ambisonic recordings is old news. But there are a lot of hearing 
scientists, sound designers, recording engineers, and surround-sound 
enthusiasts who are not familiar with Ambisonics and what it offers.
One way of using Ambisonics for research requires no loudspeakers at all (a 
plus when budget and space are limited). I just sent the email below to 
colleagues who are not familiar with Ambisonics, VVMic, etc., in the hope that 
they find the idea worthwhile. People on this list may have suggestions for how 
to improve on it. Thoughtful ideas and constructive criticisms are always 
welcome.
Best,
Eric

Greetings to All,
I hope you will take a moment to review this email, as I believe the idea 
outlined herein has a lot of research potential. I have also provided links to 
sample recordings that I made yesterday (November 7).
Imagine that we had the technology to create a "perfect" acoustical replication 
of restaurant noise (3-D, of course), and we wish to use this acoustical 
replica to simulate cochlear implant (CI) or electric-acoustic stimulation 
(EAS) listening in a 3-D environment. One thought might be to put a mock 
processor or hearing-aid (HA) body on a listener's head and vary the mic 
settings (omni, directional, etc.). The mic's output would next be routed 
through a vocoder or other simulator, and the resulting electric signal 
delivered to an insert phone placed in the normal-hearing listener's ear. Now 
the listener is free to move his or her head in our controlled 3-D listening 
environment while, at the same time, making adjustments to the mic's pickup 
pattern (as well as processor settings).
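For anyone unfamiliar with the "vocoder or other simulator" step above: CI simulations typically use a channel vocoder, which splits the mic signal into frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise. Here is a minimal sketch of that idea (my own illustration, not any particular lab's implementation; the band count, edges, and FFT brick-wall filtering are simplifications):

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=6000.0):
    """Crude noise vocoder: split the input into log-spaced frequency
    bands (brick-wall FFT masks for simplicity), extract each band's
    amplitude envelope, and modulate band-limited noise with it."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    rng = np.random.default_rng(0)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        mask = (freqs >= f1) & (freqs < f2)
        band = np.fft.irfft(spec * mask, n)
        carrier = np.fft.irfft(noise_spec * mask, n)
        # Crude envelope: rectify and smooth with a 10-ms moving average
        win = max(1, int(0.01 * fs))
        envelope = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        out += envelope * carrier
    return out
```

Real CI simulators use proper band-pass filters and low-pass envelope extraction, but the structure is the same.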
Two (of several) obvious flaws with this idea are: 1) We don't have a perfect 
replica of a 3-D listening environment, and 2) It would be impossible to block 
out the acoustical stimulus, even with the use of foam plugs and insert phones.
We could correct for the second problem by simply recording what comes through 
the HA mic or mock CI processor's mic, but then we'd have to make separate 
recordings for every mic pattern as well as head position. We could have used a 
KEMAR from the very start, but then we couldn't have captured the same live 
"event" each time we switched to another microphone type or head position.
I believe the best solution can be obtained by mathematical extraction from 
Ambisonic surround-sound WAV files. When done correctly, all directional 
information is contained within an Ambisonic recording. All that is needed, 
then, is a way to create virtual microphones that can be steered in any 
direction (in the digital, not the acoustical, domain). This is relatively 
easy to do via VVMic, and first-order pickup patterns are also a snap to 
create. 
What one has to be careful of is that mic polar responses are 3-D, too (noting 
that most polar plots are shown in 2-D). With the right software (which I'm 
working on), we can add head tracking. Head tracking is already popular when 
used with binaural recordings. But you have to look closely at what's being 
proposed here: I am creating ELECTRIC (2-D) listening as it might be perceived 
in a 3-D environment. The L or R channel of a binaural recording roughly gives 
the equivalent of an omni mic placed at or in the ear canal, but this hardly 
qualifies as a realistic simulation of CI mic placement or type. What I am 
proposing will provide far more flexibility and CI realism than an omni mic 
placed at the concha. When 
more than one mic or mic pattern is used, as would be the case for EAS, we can 
just as easily create a variety of mics, mic orientations, and mic polar 
responses from a single master recording. No speakers are needed, so there's no 
speaker distortion, comb filtering, or other anomalies. Here's an example of 
what can be done:

I made a recording in a cafeteria yesterday. The unprocessed recording can be 
heard by going to

www.elcaudio.com/research/unprocessed.wav

Using appropriate software (VVMic, Harpex, and other VSTs) to steer a virtual 
cardioid mic away from me (the talker) yields the wav file below 
(directional_mic_01.wav). Note that this isn't merely deleting a channel or 
channels, nor is it panning L or R. I can steer the mic up, down, sideways, 
front, back, or any direction in 3-D space that I wish.

www.elcaudio.com/research/directional_mic_01.wav

Steering the virtual (cardioid) mic toward me using the same 
master-recording segment (for direct comparison to the above) yields the 
following wav file:

www.elcaudio.com/research/directional_mic_02.wav
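For the curious, the steering demonstrated above reduces to a weighted sum of the four B-format channels. A minimal sketch, assuming a FuMa-normalized B-format recording (where W carries a -3 dB gain); this is the textbook first-order formula, not necessarily VVMic's or Harpex's exact implementation:

```python
import numpy as np

def virtual_mic(W, X, Y, Z, azimuth, elevation, p=0.5):
    """Steer a first-order virtual mic through a FuMa B-format recording.

    W, X, Y, Z : the four B-format channels (numpy arrays)
    azimuth, elevation : steering direction in radians
    p : pattern parameter (1.0 = omni, 0.5 = cardioid, 0.0 = figure-8)
    """
    # Unit vector pointing in the steering direction
    dx = np.cos(azimuth) * np.cos(elevation)
    dy = np.sin(azimuth) * np.cos(elevation)
    dz = np.sin(elevation)
    # FuMa W is recorded at -3 dB, so scale it back up by sqrt(2)
    return p * np.sqrt(2.0) * W + (1.0 - p) * (dx * X + dy * Y + dz * Z)
```

Steering a cardioid (p = 0.5) at a source gives full level; steering it directly away nulls the source, which is exactly the comparison between the two wav files above.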

The next step is adding the mic's response and directional characteristics to 
an HRIR so as to include head shadow. At present, I can turn the master 
recording into a binaural recording using over 50 HRIRs (via Harpex), or I can 
create an unlimited number of microphones and mic directions along with 
first-order pickup patterns (e.g., VVMic), but not both mics + HRTF (other than 
omni mics at the concha). I believe this can be worked out mathematically, and 
the outputs of multiple virtual mics plus appropriate processing could be used 
to create ideal CI or HA simulations using real-world scenarios (obtained 
from my library of live Ambisonic recordings).
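As a first approximation (with the caveat that it ignores the interaction between the head and the mic's own directivity, which is precisely the part still to be worked out), a virtual-mic output could simply be convolved with an HRIR pair for the listener's head orientation. A sketch, where the HRIR arrays are placeholders for measured responses (e.g., from a public HRTF database):

```python
import numpy as np

def mic_to_binaural(mic_signal, hrir_left, hrir_right):
    """Apply a head-related impulse response pair to a mono virtual-mic
    signal via convolution, adding head shadow and interaural cues.
    hrir_left / hrir_right stand in for measured HRIR data."""
    left = np.convolve(mic_signal, hrir_left)
    right = np.convolve(mic_signal, hrir_right)
    return left, right
```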

Best,
Eric