Greetings,
I would like to model microphone pickup patterns in conjunction with HRTFs and 
Ambisonic recordings that I've made. To give a specific example, I would like to
model a miniature supercardioid mic, pointed forward, that is located proximal
(or superior) to the pinna. This would be akin to a directional mic on a 
hearing aid or CI processor. The HRTF can be approximate, as the mic is likely 
to be placed slightly above the pinna and close to the head, not right at the 
opening of the ear canal. Some mics, however, are located in the concha, so the 
IRs from the Listen Project would approximate the mic placement, but not the 
mic's polar pattern.
I have recordings of cafeterias and public spaces that I made using a TetraMic. 
VVMic allows me to create first-order mic patterns that can be rotated in 
space. This alone is useful, but does not include the acoustic shadow that 
would be created by a hearing aid wearer's head. I have the Harpex VST, too. 
Harpex includes the HRIRs from the Listen Project (Svein, please correct me if 
I'm wrong on this), thus making binaural simulations a snap. But getting an HRTF
that includes a specific mic pickup pattern is a little trickier.
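
For what it's worth, the first-order virtual-mic math that tools like VVMic
implement is simple enough to sketch. Here is a minimal NumPy version; the
function name is mine, and I'm assuming FuMa-normalized WXYZ channel order
(W carrying the usual -3 dB). The pattern coefficient p gives omni at 1.0,
cardioid at 0.5, and supercardioid at roughly 0.37:

    import numpy as np

    def virtual_mic(b_format, azimuth_deg=0.0, elevation_deg=0.0, p=0.37):
        # b_format: (4, n_samples) array, FuMa WXYZ, W scaled by 1/sqrt(2).
        w, x, y, z = b_format
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        # Unit vector toward the look direction (front = +X, left = +Y, up = +Z).
        dx = np.cos(az) * np.cos(el)
        dy = np.sin(az) * np.cos(el)
        dz = np.sin(el)
        # Omni part (undoing the FuMa -3 dB on W) plus figure-of-eight part.
        return p * np.sqrt(2.0) * w + (1.0 - p) * (dx * x + dy * y + dz * z)

A forward-pointing supercardioid is then virtual_mic(b, 0.0, 0.0, 0.37).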
I had initially used VVMic to create the mic pattern I wanted and then aimed
it in the direction I wanted. The input was B-format wav files. The resulting
output was a single channel, or N identical channels if I wanted N tracks. I
created four tracks and used these as pseudo B-format material in Harpex (see
the sketch below).
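
In code terms, reusing virtual_mic from above, that first ordering amounts to:

    # Method 1: VVMic first, then Harpex.
    mono = virtual_mic(b, azimuth_deg=0.0, p=0.37)  # forward supercardioid
    pseudo_b = np.tile(mono, (4, 1))                # W = X = Y = Z
    # Harpex now sees four identical channels rather than a soundfield, so
    # its binaural render no longer reflects the original scene.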
The other "order" would be to create a stereo (binaural) output via Harpex from 
the original (authentic) B-formatted material. Then one of the two channels, L 
or R, could be made into four identical tracks that can be fed to VVMic to get 
the intended polar response. The four tracks, of course, are not B-format.
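
Sketching that second ordering the same way (harpex_binaural is a hypothetical
stand-in for Harpex's decoder, not a real API):

    # Method 2: Harpex first, then VVMic.
    left = harpex_binaural(b)[0]      # hypothetical: Harpex's left channel
    fake_b = np.tile(left, (4, 1))    # four identical tracks, not B-format
    out = virtual_mic(fake_b, p=0.37)
    # With w = x = y = z, the output is just left scaled by the constant
    # p*sqrt(2) + (1-p)*(dx+dy+dz): rotating the virtual mic only changes
    # the level, never what it "hears".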
A bit of head scratching tells me neither method outlined above is correct: in
both orderings the first stage collapses the soundfield to a single channel,
so the second stage has no directional information left to work with. At least
the binaural output from Harpex should be equivalent to an omnidirectional mic
placed at an ear's concha (ITE hearing aids), and that could be used for
simulations of electric listening. But I'd really like to model hybrid devices
that combine both electric (cochlear implant) and acoustic (hearing aid)
stimulation. It seems that using Ambisonic recordings without the need for
loudspeakers would be an elegant way to simulate CI listening in 3-D
environments, but with normal-hearing listeners.
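
One chain that might apply both the polar pattern and the head shadow in a
single pass is a virtual-loudspeaker render: decode the B-format to a set of
directions, weight each direction by the supercardioid gain relative to the
look axis, convolve with the HRIR measured for that direction, and sum. A
rough horizontal-plane sketch, again reusing virtual_mic; hrirs is a
placeholder for whatever HRIR set is at hand (e.g. Listen), loaded as
(azimuth_deg, hrir_left, hrir_right) tuples:

    from scipy.signal import fftconvolve

    def directional_binaural(b_format, hrirs, look_az_deg=0.0):
        n = b_format.shape[1]
        out = np.zeros((2, n))
        for az_deg, h_l, h_r in hrirs:
            # Signal arriving from this direction (a virtual cardioid decode).
            s = virtual_mic(b_format, azimuth_deg=az_deg, p=0.5)
            # Supercardioid gain toward this direction vs. the look axis.
            g = 0.37 + 0.63 * np.cos(np.radians(az_deg - look_az_deg))
            out[0, :] += g * fftconvolve(s, h_l)[:n]
            out[1, :] += g * fftconvolve(s, h_r)[:n]
        return out / len(hrirs)

This is only a sketch: a real decode would use properly matched decoder gains
and a dense HRIR grid, and elevation is ignored here.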

Regarding my recent post (vestibular-auditory interactions and HRTFs): Thanks 
to Peter L. for making my clumsy wording clearer and to Dave M. for making the 
idea more direct and to the point. I have to be careful when referencing the 
anatomical horizontal plane versus the horizontal plane that lies perpendicular 
to gravity. Although a bit off topic for Ambisonics, the post did relate
directly to spatial hearing. Because it's easy to do virtual mic rotations with
Ambisonic material, Ambisonics could be a useful tool for studying 
vestibular-auditory interactions.
Thanks to everyone,
Eric