On 12/15/2015 03:32 AM, Stefan Schreiber wrote:

In our earlier discussion we found convincing evidence and arguments
that head motion should be relevant even for obtaining improved vertical
localization.

Just to set the record straight again: evidence is not to be found on sursound. Evidence is found in the lab. :-]

Hat-tip to those who actually do the grunt work that forms the basis of sursound sermons.

If sound sources are immovable, their positions can't be determined
precisely, because the brain needs them to move (movement of the source,
or subconscious micro-movements of the listener's head), which helps it
determine a sound source's position in space.

(?!)

Why quote such questionable statements?

Modern systems for reproducing positioned 3D sound use HRTFs to form
virtual sound sources, but these synthetic virtual sources are point
sources. In real life the sound mostly comes from large sources, or
composite ones consisting of several individual sound generators. Large
and composite sound sources allow for more realistic effects than point
sources.
A point source can be successfully applied to large but distant
objects, for example a moving train. But in real life, as the train
approaches the listener, it is no longer a point source.

(See: one of our postgrads (Dan Peterson
<https://dxarts.washington.edu/people/daniel-peterson>) has been working
on a Doppler panner that includes diffusion filtering and the proximity
effect.)

These two are orthogonal. The first quote talks about sources being physically spread out (e.g. composed of multiple point sources along a line or area), while the second talks about what happens if a point source approaches the listener.
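To make the "spread out" case concrete, here is a minimal sketch (mine, not Dan's panner and not from either quoted system): an extended source such as a nearby train is approximated by a handful of point sources distributed along a line, each contributing with its own propagation delay and a simple free-field 1/r attenuation. Function name and parameters are made up for illustration; there is no Doppler or diffusion here.

import numpy as np

C = 343.0  # speed of sound in m/s

def render_line_source(dry, fs, listener, start, end, n_points=8):
    """Approximate an extended source by n_points point sources spread
    evenly between 'start' and 'end' (positions in metres), summed at the
    listener position."""
    listener = np.asarray(listener, dtype=float)
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    positions = [start + t * (end - start)
                 for t in np.linspace(0.0, 1.0, n_points)]

    distances = [max(np.linalg.norm(p - listener), 0.1) for p in positions]
    max_delay = int(np.ceil(max(distances) / C * fs))
    out = np.zeros(len(dry) + max_delay)

    for r in distances:
        delay = int(round(r / C * fs))   # whole-sample propagation delay
        gain = 1.0 / (r * n_points)      # 1/r law, equal share per point
        out[delay:delay + len(dry)] += gain * dry
    return out

With n_points=1 this degenerates to the usual single point source. Far away, the two are practically indistinguishable; the difference only becomes audible once the listener gets close, which is exactly the approaching-train example from the quote.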

The third group consists of the sound tone parameters. These can help
the player tell what the walls are made of, what the air density in the
environment is, and so on. Every material reflects and absorbs certain
frequencies, and these parameters emulate that absorption and
reflection. They are reference frequencies (LF, low frequency, and HF,
high frequency) at which the changes are applied. For example, metallic
walls reflect more frequencies than wooden ones, and the HF level will
be lower for them than for an emulation of wood. The workshop, for
example, has the following parameters: 362 Hz LF and 3762 Hz HF; a
wooden room has the LF at 99 Hz and the HF at 4900 Hz. Finally, there
are parameters controlling the effect of the Room LF and HF frequencies
(in dB). This subgroup also contains decay factors for LF and HF, and an
Air Absorption HF factor.

This is game design, not acoustics.
Done properly, it should look something like http://www.audioborn.com/. That's a new company spun off from a research effort at ITA/RWTH Aachen, and their demo at ICSA 2015 was mighty sweet.
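For what it's worth, here is a rough guess at what the quoted parameter grab-bag amounts to: an I3DL2/EAX-style environment preset. The structure and field names below are mine (Python, illustration only); only the reference frequencies (362 Hz / 3762 Hz for the workshop, 99 Hz / 4900 Hz for the wooden room) come from the quoted text, everything else is a placeholder.

from dataclasses import dataclass

@dataclass
class EnvironmentPreset:
    lf_reference_hz: float       # low-frequency reference point for the level/decay controls
    hf_reference_hz: float       # high-frequency reference point
    room_lf_db: float            # reverb level relative to mid band at the LF reference
    room_hf_db: float            # reverb level relative to mid band at the HF reference
    decay_lf_ratio: float        # LF decay time as a fraction of the mid-band decay
    decay_hf_ratio: float        # HF decay time as a fraction of the mid-band decay
    air_absorption_hf_db: float  # extra HF damping, dB per metre of travel

# Reference frequencies from the quote; the remaining values are made-up placeholders.
WORKSHOP    = EnvironmentPreset(362.0, 3762.0, 0.0, -6.0, 1.0, 0.7, -0.05)
WOODEN_ROOM = EnvironmentPreset(99.0, 4900.0, 0.0, -3.0, 1.0, 0.8, -0.05)

Which mostly amounts to a couple of shelving filters on the reverb tail, hence the "game design, not acoustics" remark.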

It is a safe bet that AR/VR in particular will require a solid
understanding of acoustics and human audio perception. They will have to
find improved ways to reproduce surround sound (including 3D audio) via
headphones and loudspeakers.

Thanks for pointing this out. :-]

--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT

http://stackingdwarves.net
