Thanks for your replies, Fons.

> If your studio is to be used for hearing research you should
> probably ask yourself if you want this sort of processing - it
> sort of 'interprets' the spatial information in a way that
> is not at all related to how our brains do it.
Do you think it depends on the type of research? The purpose of the room is not to simulate how the brain works, but to simulate an auditory scene – in a way that the brain can then use "as if it were real". To put it another way: because someone in the room would be using their own brain to "interpret spatial information", the question is whether the TOA processing you describe above would clash with a person's own interpretation of spatial information, or whether a person would be able to interpret the sound "as if it were real" (allowing for the obvious absence of other sensory stimuli!). Or, if it is not "real", what would they be missing out on? What type of "false information" might TOA give them?

Leading on from this, is it even (practically) possible to simulate a sound field using processing that is related to how our brains interpret spatial information? If so, what type of processing should I be looking at?

My main purpose is to see how the combination of a person + hearing technology + sound scene integrates, in order to assess the combined performance "accurately" (i.e. so that results in the studio are predictive of performance in real life) and in a way that is repeatable across people and technology, and then use that information to adjust the parameters on the technology. As a lot of this technology now makes decisions based on spatial information (I'm not sure about distance, but certainly direction), it is important to surround a person (and the technology) with sound that is "close enough" to real life. Also, would there be enough spatial information (even if it is only "interpreted") in a TOA setup to convince the hearing technology to change its directional microphone polar pattern to reduce the loudest noise source?

Michael mentioned possibly using "virtual microphones" for a localization test, rather than discrete speakers. Do you think such a test would be repeatable across different people using TOA, or, from what you understand about brain vs AMB, would it be too variable or open to interpretation?

>> Is it possible to record directly in TOA?
>
> Normally HOA is recorded by panning individual sources and adding
> AMB encoded reverb or room acoustics.

Does panning individual sources mean moving the microphone (in which case you would be losing the transient nature of the sound)? Or does it mean recording from several spots simultaneously (e.g. triangulating)?

From what you're saying, then, whilst it is theoretically and technically better to record in HOA, achieving consistently good results is difficult, and so up-sampling is generally considered the norm?

> You need more than 16 speakers for full 3D third order. Good layouts
> (without preferred directions or gaps) have 20-25 speakers.

Does that mean constructing a dodecahedron with speakers positioned on the nodes? Is there a minimum room size?

> There are a number of algorithms that can produce
> reasonable results, but none of them produce optimal decoders.

Does hand-optimised decoding produce better results?
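To check my understanding of "panning individual sources": as far as I can tell it does not involve moving a microphone at all, but multiplying a mono signal by a set of direction-dependent gains, one per ambisonic channel. A rough sketch of that idea in Python (first order only for brevity; I am assuming SN3D-style scaling and azimuth measured anticlockwise from the front, so please correct me if this is off):

    import numpy as np

    def encode_first_order(mono, azimuth_deg, elevation_deg):
        """Return (W, X, Y, Z) for a mono source panned to the given direction."""
        az = np.radians(azimuth_deg)
        el = np.radians(elevation_deg)
        W = mono * 1.0                      # omnidirectional component
        X = mono * np.cos(az) * np.cos(el)  # front-back
        Y = mono * np.sin(az) * np.cos(el)  # left-right
        Z = mono * np.sin(el)               # up-down
        return W, X, Y, Z

    # Example: place a 1 kHz tone 45 degrees to the left of centre.
    fs = 48000
    t = np.arange(fs) / fs
    tone = 0.1 * np.sin(2.0 * np.pi * 1000.0 * t)
    W, X, Y, Z = encode_first_order(tone, azimuth_deg=45, elevation_deg=0)

Third order would work the same way, just with 16 spherical-harmonic gains instead of 4 – and if that is right, the transients survive untouched, since for a static source the gains are constant.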
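On Michael's "virtual microphones" suggestion, my understanding is that a virtual microphone is just a weighted sum of the ambisonic channels, steered to a chosen look direction – no physical microphone involved. A first-order sketch of that (again assuming SN3D scaling; higher orders would allow sharper patterns):

    import numpy as np

    def virtual_mic(W, X, Y, Z, azimuth_deg, elevation_deg, p=0.5):
        """Steer a first-order pattern: p=1 omni, p=0.5 cardioid, p=0 figure-of-eight."""
        az = np.radians(azimuth_deg)
        el = np.radians(elevation_deg)
        # Unit vector pointing in the chosen look direction.
        x = np.cos(az) * np.cos(el)
        y = np.sin(az) * np.cos(el)
        z = np.sin(el)
        # Gives the familiar first-order polar pattern p + (1 - p) * cos(theta).
        return p * W + (1.0 - p) * (x * X + y * Y + z * Z)

    # Example (with W, X, Y, Z already decoded or encoded elsewhere):
    # left_cardioid = virtual_mic(W, X, Y, Z, azimuth_deg=30, elevation_deg=0)

If that is right, a localization test could present the same encoded scene to everyone and only vary the steering angles, which is what makes me hope it could be repeatable.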
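On the speaker count: if I have the arithmetic right, a full 3D ambisonic signal set of order N has (N + 1)^2 channels, which is where the 16 comes from, and you need at least that many (in practice a few more) well-spread speakers to decode it:

    channels(N) = (N + 1)^2
    order 1: (1 + 1)^2 =  4
    order 2: (2 + 1)^2 =  9
    order 3: (3 + 1)^2 = 16

A regular dodecahedron has 20 vertices, so speakers on its nodes would give 20 roughly evenly spread directions, which seems to match the 20-25 figure – is that the sort of layout you mean?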
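And on decoders, to make sure I follow what the simple algorithms do: is the baseline something like "mode matching", i.e. taking the pseudoinverse of the encoding gains evaluated at the speaker directions? A first-order sketch of that (the cube layout below is just a made-up example):

    import numpy as np

    def encoding_gains(azimuth_deg, elevation_deg):
        """First-order (W, X, Y, Z) gains for one direction, SN3D-style."""
        az = np.radians(azimuth_deg)
        el = np.radians(elevation_deg)
        return np.array([1.0,
                         np.cos(az) * np.cos(el),
                         np.sin(az) * np.cos(el),
                         np.sin(el)])

    # Made-up example layout: 8 speakers at the corners of a cube.
    speakers = [(az, el) for el in (35.3, -35.3)
                         for az in (45.0, 135.0, -135.0, -45.0)]

    # Rows = speakers, columns = ambisonic channels (W, X, Y, Z).
    gains = np.array([encoding_gains(az, el) for az, el in speakers])

    # Mode-matching decode matrix: ambisonic channels -> speaker feeds.
    D = np.linalg.pinv(gains.T)          # shape (num_speakers, 4)

    # speaker_feeds = D @ np.stack([W, X, Y, Z]) for a block of samples.

I can see how that gives "reasonable" results; presumably the hand-optimised (or numerically optimised) decoders you mention go further by tuning for psychoacoustic criteria and for irregular layouts, rather than just inverting the gains?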