Greetings to All,
I was looking at the web link that David M. provided regarding inertial 
transducers, and wondered whether placing one of these devices on my living 
room wall would approximate the elusive infinite-baffle loudspeaker (I’m 
recalling articles written by the great Harry Olson). But considering that I 
live in a duplex and that I’m already the neighborhood’s designated mad 
scientist, I probably should avoid such experimentation.
As far as inertial devices and the hearing sciences go (I’ll get to the topic 
of surround sound, too), there was a bone-conduction device that was intended 
to compete with cochlear implants. My understanding was that an ultrasonic 
carrier frequency (around 60 kHz and transmitted via bone conduction) was 
modulated with the speech signal. I don’t know whether the speech was presented 
as pulses (such as neural pulses) or as analog sound. For normal-hearing users, 
this would be akin to wearing a conventional bone-conduction (or inertial) 
transducer. The ultrasonic carrier would be filtered out by inertia (and would 
in any case go undetected by the inner ear), leaving only the lower-frequency 
speech signal
to be heard. But for persons with non-functional cochleas, some other mechanism 
would have to be at work. I believe the company making the device was 
German-based and went by the name Hearing Innovations. They set up shop in 
Tucson, AZ (and perhaps other cities), but seemed to quickly disappear. Not 
sure what the story was, or whether the
device worked. On to the topic of binaural listening...
From a theoretical perspective, headphone listening could be quite realistic, 
because real-world listening is ultimately a function of the one-dimensional 
pressure
changes impinging on the eardrums. Recent posts suggested that headphone 
listening can’t provide the same stimuli or experience as real-world listening, 
particularly for low-frequency sounds. One very important aspect of 
low-frequency (acoustic) hearing is how much it can contribute to speech 
understanding when combined with electrical hearing (meaning implanted 
electrodes, or cochlear implants). I’ve written on this topic in past posts, 
but I will repeat that the combination of acoustic and electrical hearing 
results in an improvement in speech understanding that far exceeds the sum of 
the two modes' individual contributions to speech scores. When it comes
to cochlear implants, the question is how much benefit this combination of 
modes (acoustic plus electric stimulation) provides in noisy or reverberant 
environments. People with cochlear implants have great
difficulty hearing in noise, so improvements in this area are of great interest.
I have proposed studies using a surround of noise to investigate the efficacy 
of EAS listening in real-world environments. To date, studies have shown 
improvement in speech understanding using EAS or EAS simulations (namely 
normal-hearing listeners donning earphones) combined with speech babble and 
artificially generated reverberation (stereo or monaural). One could argue that
providing natural, multi-directional reverberation (via a surround of speakers) 
in the sound field would be an easier listening task than listening under 
earphones. In a surround of uncorrelated background noise, we could use our 
ability to localize sounds (assuming this can be done with implantable 
prostheses) to segregate the speech from noisy or reverberant background 
sounds. But I am decidedly against the idea that the research outcomes or 
processing strategies optimized for headphone listening are valid for the 
majority of real-world listening situations, even when the headphone listening 
task is the more difficult of the two.
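For concreteness, the uncorrelated surround of noise I have in mind is easy to generate: one statistically independent noise channel per loudspeaker, so no two speakers radiate correlated signals. A minimal Python sketch (the channel count, length, and sample rate are placeholders; speech babble or speech-shaped noise would replace the white noise in an actual study):

```python
import numpy as np

def uncorrelated_noise(n_speakers, n_samples, seed=0):
    """One independent Gaussian noise channel per loudspeaker.

    Independent draws give channels that are statistically uncorrelated,
    so each speaker radiates a distinct source rather than a copy of one
    signal (which would collapse to a phantom image).
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_speakers, n_samples))

channels = uncorrelated_noise(8, 48_000)   # e.g. an 8-speaker ring, 1 s at 48 kHz
corr = np.corrcoef(channels)               # 8x8 inter-channel correlation matrix
```

The off-diagonal entries of `corr` come out near zero, which is the point: the background surrounds the listener from distinct directions instead of forming a single phantom source between speakers.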
Here’s an analogy. One of the few physical feats I can perform is riding a 
unicycle. I also used to set up obstacle courses for unicycle riding that were 
quite rugged. For most, riding a unicycle over rugged terrain is arguably more 
difficult than learning to ride a bicycle. However, I am not remotely adept at 
performing stunts on a bicycle despite demonstrating an ability to stay upright 
on a single wheel. Both activities involve balance, wheels, and pedaling, but 
the ability to ride a unicycle does not translate to bicycle handling skills. 
What I experience while listening under headphones may require more 
concentration than real-world listening (it depends on the task), but that 
doesn’t mean that my ability to concentrate on words or sentences in noise 
while listening under earphones has real-world application or translation. 
Listening under headphones with pink noise as the background noise may prove to 
be more difficult than listening in a surround of speakers with speech noise. 
But finding methods or processing
strategies that improve the speech scores for a difficult task doesn’t 
necessarily translate to improving one’s ability to do well while attempting 
the conceivably simpler, real-world tasks. If I wish to study improvements in 
processing strategies or electric plus acoustic hearing in reverberant spaces, 
I need to conduct those studies in controlled but real-world listening 
environments. Live, learn, and enjoy life
with a surround of loudspeakers. Long live Ambisonics.
Best to All,
Eric C.
PS--I often wear headphones 2 to 4 hours a day, but not because they give me a 
real-world experience. Headphones do have their place and can certainly provide 
listening enjoyment as well as privacy.
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
