Hi Aaron,

Many thanks for sending the link to JASA Express Letters. Readers of this mailing list may question why an article pertaining to cochlear implant (CI) patients is relevant to Ambisonics. I'll get to this a few paragraphs down.
It is no surprise that interaural timing differences (ITDs) are lost with CI users. The 22 or so electrodes along an implanted array cannot all be energized simultaneously; this would result in current smearing. To avoid the problem, "channel-picking" and interleaving (multiplexing) are used: the incoming signal is analyzed, and typically six of the frequency bands with the greatest (or most relevant) energy are sent to their respective electrodes. This processing takes time; far more time, in fact, than it takes sound to travel from one ear to the other.

Skewed timing may have other deleterious effects as well. Normal-hearing listeners don't perceive each and every reflection in a reverberant environment as a sound apart from its original source; if we perceived each reflection as a distinct sound, listening in reverberant environments would be quite difficult. The brain has its way of putting information together, and the Haas effect depends on timing cues as well. Hearing loss creates problems that go well beyond mere threshold elevation (loss of frequency discrimination is a problem, too!).

Many of the articles on localization ability with CI users probably underestimate how difficult it is for CI users to localize sound, because stimuli are generally presented in the frontal plane (or hemisphere) only. From what I learned (via personal communication) at the Conference on Implantable Auditory Prostheses (CIAP), things really "fall apart" when stimuli are presented behind the listener; data collected on rearward localization might yield little meaningful information. A friend and former colleague of mine is doing her doctoral dissertation on localization ability with binaural EAS patients (localization being just one part of the study). EAS stands for electro-acoustical stimulation: these CI users have residual low-frequency hearing that may be augmented with a hearing aid, or they may have near-normal thresholds at the extreme low end of the audible spectrum.

Low frequencies are where ITDs are effective. Interaural level differences (ILDs) are minimally effective, or nonexistent, at low frequencies because low-frequency sounds diffract around the head. (By the way, Aaron, I know you already understand this; I'm providing the info here for other readers.) One of several variables that can affect localization using ILDs is compression of the signal. Compression is routinely used in hearing aids and is almost mandatory for CI processing. A robust signal presented to one side of a CI user's head may get compressed, while the attenuated signal (reduced by head shadow) at the listener's opposite ear doesn't get compressed. Consequently, the signal could appear equally "intense" (or perceptually "loud") at both ears.

The article on CI patients' localization using ILDs by J. M. Aronoff, D. J. Freed, L. M. Fisher, I. Pal, and S. D. Soli (link provided below) mentions a CI's electrical pulse width as a possible factor when it comes to ITD distortions. I found this interesting because my CI design uses a constant pulse width and real-time processing. Briefly, both pulse width and pulse amplitude are varied in typical CI designs: the amplitude (voltage) is restricted for safety, and the width's range is limited by its usefulness. The nerve action potential (AP) is different: it's all or nothing, and mostly the same amplitude and duration. Firing rate, place, and the number of nerves being innervated dictate loudness and pitch. My CI design uses pulse-width modulation (all in real time) and doesn't require interleaving to work; for these reasons, it may provide improved localization ability.
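For anyone curious what the channel-picking step above looks like in practice, here is a minimal Python sketch of an n-of-m scheme. The frame length, the even band split, and the 6-of-22 figures are just the rough numbers I mentioned, not any manufacturer's actual algorithm.

    # Toy n-of-m channel picking: keep the 6 most energetic of 22 bands.
    # All parameters here are illustrative, not real CI-processor values.
    import numpy as np

    def pick_channels(frame, n_bands=22, n_picked=6):
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        # Split the power spectrum into n_bands contiguous groups of bins.
        band_energy = [band.sum() for band in np.array_split(spectrum, n_bands)]
        # "Channel picking": keep only the n_picked most energetic bands.
        return sorted(np.argsort(band_energy)[-n_picked:])

    fs = 16000
    t = np.arange(512) / fs
    # A 1 kHz tone: the picked set clusters around the band containing 1 kHz.
    print(pick_channels(np.sin(2 * np.pi * 1000 * t)))

In a real processor the picked electrodes are then pulsed one after another (interleaved) rather than simultaneously, and the per-frame analysis and selection is part of the delay I described.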
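The compression point above is easy to see with numbers. Below is a toy Python example; the 2:1 ratio, the 60 dB threshold, and the 15 dB head-shadow figure are assumptions picked for illustration, not values from any actual hearing aid or CI processor.

    # Independent compression at each ear shrinks the ILD the brain receives.
    def compress(level_db, threshold_db=60.0, ratio=2.0):
        # Simple static compressor: levels above the threshold are reduced.
        if level_db <= threshold_db:
            return level_db
        return threshold_db + (level_db - threshold_db) / ratio

    near_ear = 75.0   # robust signal on the side facing the source
    far_ear = 60.0    # same signal, attenuated ~15 dB by head shadow
    print(near_ear - far_ear)                        # ILD in: 15 dB
    print(compress(near_ear) - compress(far_ear))    # ILD out: 7.5 dB

With the louder ear compressed and the shadowed ear left alone, the 15 dB cue the head created is cut roughly in half, which is why the two sides can end up sounding nearly equally "loud."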
Re Ambisonics (finally): I'm looking for mentorship; specifically, I'd love to hook up with someone who's into music, hearing science and, of course, Ambisonics. I'll be the first to admit that where I began my doctoral studies was a poor match-up. The department was hardware-phobic: if a project required more than MATLAB and a keyboard or mouse as the response box, it was viewed with a dubious eye. Furthermore, one outside person with some "clout" proclaimed that Ambisonic recordings made with a Soundfield mic sounded "muddy" to him. More than one professor unfamiliar with Ambisonics took his word for this without taking a listen, while failing to acknowledge that the person alleging "muddiness" had a severe hearing loss (he wore hearing aids) and was marketing a wholly different "surround" system aimed at audiologists and hearing researchers. Pretty much everything I offered was frowned on (or, more accurately, ignored).

I am a physicist with years of electronic-design experience, I love hardware (and building things), I have designed and built ergonomic response boxes, and I have a great desire for external validity when it comes to hearing research. This is good for some, but a disaster for others. I believe simulations of real-life scenarios can be created using Ambisonic techniques. (Note: HRTFs under headphones are NOT an option when dealing with hearing-aid and CI users.) The people I alluded to above are highly regarded researchers; I just have "different" ideas and I'm an out-of-the-box thinker. Furthermore, I am an optimistic person who strives to move forward and help others.

I really appreciate all the help I've received on this Sursound list. If anybody out there would like to collaborate on projects, or can steer me in a purposeful direction, your help and experience are always welcome. To date, all of my lab equipment and research endeavors have been paid for out of my own pocket (this includes paying research participants). I have a decent arsenal of audio and lab equipment, and want to put my brain to good use. If I haven't already said so, the article "Localization in Horizontal-Only Ambisonic Systems" by Eric Benjamin, Aaron Heller, and Richard Lee is relevant to some of my current research ideas: thanks, Eric, Aaron, and Richard.

Best to All,
Eric

Here's the link to the article, "Cochlear Implant Patients' Localization Using Interaural Level Differences Exceeds that of Untrained Normal Hearing Listeners." Access is free:
http://asadl.org/jasa/resource/1/jasman/v131/i5/pEL382_s1?view=fulltext