Greetings All,
As with many *coincidences* in the universe, the recent discussions regarding 
the quaudio mic were informative and timely. From what I gathered, there are 
certain mic techniques and algorithms that are better suited for the recording 
of STATIONARY sound sources than for MOVING sources. This, then, made me 
realize that recording moving sound sources is no trivial task, particularly
when it comes to the reconstruction of an auditory event.
I have the idea that certain sounds can be separated from noise by not only 
their spectral characteristics and spatial location, but also by their 
perceived motion. I’m not referring to judging the distance of the moving 
object, or its direction of travel. An analogy to vision would be camouflage: 
Despite relatively poor vision, I often detect wee critters such as lizards, 
toads, and insects because of their motion. There’s no way I could otherwise 
see them against a backdrop of similarly colored terrain. The first step 
towards identifying the critter is realizing that it’s there to begin with.
I now have a reason for incorporating auditory *motion* in a research project. 
Maybe a few of you would like to join in or provide assistance. I’ll confess 
that I could use a project to get me a step closer to acceptance into a 
doctoral program. Furthermore, there’s a conference happening October 15-18 in 
Toronto: It’s the 8th Objective Measures Symposium on Auditory Implants. Their 
theme is ‘Unique people, unique measures, unique solutions’, reflecting a 
collective goal of providing the best hearing for persons needing auditory 
prostheses (i.e., cochlear and brainstem implants). Below are a few of my ideas 
(egad!) and thoughts:
I know I’ve said this more than once, but I’m not too keen on presenting 5-word 
sentences in a background of pink noise as an *objective* measure of 
cochlear implant (CI) efficacy. This may be objective in telling us how well 
a person performs while seated in a surround of pink noise and listening to 
nonsensical sentences, but so what? I’ve been hoping to present or propose a 
slightly *better* yardstick, even if there’s no past or standard reference to 
pit my data against. I had previously proposed adding video to complement the 
AzBio, IEEE, and SPIN sentences (currently used for speech audiometry), and I 
know of at least one doctoral student who has taken this to heart. What I now 
propose (with video) are sound-source identification tasks that can be 
*objectively* scored.
Simple sounds may not be readily identifiable by the hearing impaired. This 
isn’t new news, but how well stationary and mobile sounds can be identified by 
an implant wearer could be of value, particularly when designing implant 
algorithms. Imagine listening to several sounds through a 12-channel (max) 
vocoder. This roughly approximates CI listening. Your pitch discrimination 
ability would be largely shot to hell, and dynamics would be compromised if 
compression were also added. Sounds emanating from various sources would be 
blurred, but hopefully your binaural sense of direction provides some 
signal-from-noise segregation. You still detect rhythm... at least for 
repetitive sounds.
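(For the curious: here’s a minimal sketch of the kind of noise vocoder I mean, 
in Python with numpy/scipy. The channel count, filter order, and log-spaced 
band edges are my own assumptions, not a claim about any particular implant’s 
processing.)

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=12, lo=100.0, hi=8000.0):
    # Crude CI simulation: band-split, take channel envelopes,
    # re-impose them on band-limited noise carriers, then sum.
    hi = min(hi, 0.45 * fs)                      # keep top edge below Nyquist
    edges = np.geomspace(lo, hi, n_channels + 1) # log-spaced band edges
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))              # channel envelope
        out += env * sosfiltfilt(sos, noise)     # envelope on noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)   # normalize to avoid clipping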
Given the above, we might go a step further (in the direction of Ecological 
Psychology) and ask whether we can tell tearing from breaking sounds, water 
drops from other temporally patterned sounds, or rolling sounds 
from steady-state noises. Wind, although around us and a type of motion, is 
stationary relative to, say, a rolling object. When heard through a vocoder, 
they may be indistinguishable... unless the perceived motion of the rolling 
object provides useful information. Given a closed set of choices (and perhaps 
visual context), we could determine *objectively* how well we identify sounds 
presented in a background of other sounds. The latter is the *new* part: Can we 
segregate, and then identify, sounds because of their motion, spectral make-up, 
etc., despite minimal or distorted information?
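By *objectively* scored I mean something as plain as percent correct over that 
closed set. A toy sketch in Python (the labels and function name are made up 
for illustration):

from collections import Counter

def score_trials(trials):
    # trials: list of (presented, chosen) label pairs from one listener
    correct = sum(1 for p, c in trials if p == c)
    confusions = Counter((p, c) for p, c in trials if p != c)
    return correct / len(trials), confusions

# e.g. score_trials([("rolling", "rolling"), ("wind", "rolling"), ...])
# The confusion counts may be as informative as the score itself.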
I would prefer to create stimuli from real-world sounds, though panning 
monaural sounds could be of some help. I like the *naturalness* of Ambisonic 
recordings, but now question how well they can be reproduced. I know that there 
are recordings of airplanes and helicopters (recorded by Paul Hodges, John 
Leonard, or Aaron Heller? I can’t find names/recordings online), so I have no 
doubt that Ambisonics is a viable method of recording moving sound sources. I 
am, however, concerned about the limitations, and how many sounds (including 
their reflections) can be reproduced without raising doubt as to the 
*accuracy* of the playback.
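If panning turns out to be the practical route, the crude version is easy to 
sketch: encode a mono recording into first-order B-format with a per-sample 
azimuth sweep. The trajectory and the -3 dB W weighting below are my 
assumptions (and say nothing about how the recordings mentioned above were 
actually made):

import numpy as np

def encode_moving_source(mono, az_start=-np.pi/2, az_end=np.pi/2):
    # Sweep a mono source across the front of the horizontal plane.
    # Returns first-order B-format (W, X, Y); Z stays zero here since
    # the trajectory never leaves the horizontal plane.
    az = np.linspace(az_start, az_end, len(mono))  # azimuth per sample
    w = mono / np.sqrt(2.0)                        # traditional -3 dB on W
    x = mono * np.cos(az)
    y = mono * np.sin(az)
    return w, x, y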
I believe this is a do-able project that could provide meaningful information. 
Fine tuning implants to deal with an *outside* world could be different from 
the algorithms used to perfect speech understanding.
Best,
Eric C.