Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-14 Thread Martin Richards
I'm mostly just an interested reader on this group, with an historical 
interest in Ambisonics dating from the 1970s, when I was a member of the Oxford 
Tape-Recording Society (which MAG and Peter Craven ran), and I keep the 
interest alive making amateur recordings in B-format.
Although I have very limited experience of binaural, I do remember the one and 
only convincing binaural playback I've heard: a demo at OUTRS using the 
original quad speakers placed either side of the audience. The only poor 
localisation was at the front, but height etc. was very convincing. I don't 
remember the mic configuration, though. Could this good result be because the 
soundfield is stable and head movements can be used, as would be the case with 
head-tracked headphones? But then why was front localisation still poor? I'd 
be interested to know what people think.
Martin


  From: Richard Lee 
 To: "'sursound@music.vt.edu'"  
 Sent: Tuesday, 14 June 2016, 12:35
 Subject: Re: [Sursound] Using Ambisonic for a live streaming VR project
   
> The main mechanisms for disambiguating 'cones of confusion' (and this 
includes front-back reversals) are: pinnae effects (Batteau) and 
head-movements (Wallach) - so, without either of these mechanisms at play, 
one would expect directional ambiguity.

You can test the relative importance of these for YOURSELF with the famous 
Malham / Van-Gogh Experiment

http://www.ambisonia.com/Members/ricardo/PermAmbi.htm/#VanGogh.

I still have some Diamond encrusted caps with optional Golden Pinnae but 
you need to pay in used bank notes.  No Confederate money please.

Michael came up with his rE & rV theories ... not by considering how to 
best replicate HRTFs bla bla .. but by asking ... "what information could 
the Mk1 Human Head (+ torso + processing inside + bla bla) possibly have 
available to determine localisation?"
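
In code terms the two vectors are nothing exotic: rV is the gain-weighted 
average of the speaker unit vectors and rE is the energy-weighted one. A 
minimal Python sketch, with a made-up square layout and made-up gains purely 
for illustration (the function name is invented here, not taken from 
Michael's paper):

    import numpy as np

    def localisation_vectors(gains, directions):
        """Gerzon velocity (rV) and energy (rE) localisation vectors.

        gains      : real speaker gains g_i produced for one virtual source
        directions : unit vectors towards the speakers, shape (N, 2) or (N, 3)
        """
        g = np.asarray(gains, dtype=float)
        u = np.asarray(directions, dtype=float)
        rV = (g[:, None] * u).sum(axis=0) / g.sum()               # velocity (low-frequency) vector
        rE = (g[:, None] ** 2 * u).sum(axis=0) / (g ** 2).sum()   # energy (high-frequency) vector
        return rV, rE

    # Illustrative only: a square of four speakers and some arbitrary gains
    angles = np.radians([45, 135, 225, 315])
    speakers = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    rV, rE = localisation_vectors([0.9, 0.3, 0.1, 0.3], speakers)
    print(np.linalg.norm(rV), np.linalg.norm(rE))  # both <= 1 for these gains; nearer 1 = sharper image

Roughly speaking, a decoder is "good" in this framework when both vectors 
point at the intended direction, with |rV| dominating the judgement at low 
frequencies and |rE| above a few hundred Hz.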

If you perform the above experiment, you'll find the Moving Head cues are 
FAR more important than the Fixed Head cues (HRTFs bla bla).

Where the HRTFs have the most significance is in the vertical plane.  It's 
the different frequency response as a source moves off the horizontal plane 
that allows the Mk1 HH to process 'height'.  But even then, Moving Head 
cues are far less ambiguous ... and don't require a priori knowledge of 
the source.

If the HRTF cues break down completely (eg simulating a pair of coincident 
back-to-back cardioids as the crudest possible binaural decode), simulating 
the Moving Head cues (head tracking) lets the Mk1 HH decode all this 
without any problem, fuss or discomfort.
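
For anyone who wants to try that crude decode with head tracking, here is a 
rough Python sketch of the idea (the function names and the assumption of 
FuMa-scaled input, with W carried at -3 dB, are mine): counter-rotate the 
horizontal B-format by the tracked head yaw so the scene stays put in the 
room, then take a sideways-facing virtual cardioid for each ear.

    import numpy as np

    SQRT2 = np.sqrt(2.0)

    def rotate_bformat_yaw(w, x, y, yaw):
        """Counter-rotate horizontal FuMa B-format (W, X, Y) by the head yaw
        (radians, positive = head turned to the left) so the sound field
        stays fixed in the room while the head moves."""
        c, s = np.cos(yaw), np.sin(yaw)
        return w, c * x + s * y, -s * x + c * y

    def virtual_cardioid(w, x, y, azimuth):
        """Virtual cardioid microphone aimed at `azimuth` (0 = front, pi/2 = left)."""
        return 0.5 * (SQRT2 * w + np.cos(azimuth) * x + np.sin(azimuth) * y)

    def crude_headtracked_binaural(w, x, y, yaw):
        """Back-to-back cardioid ear feeds: left looks +90 deg, right looks -90 deg."""
        w, x, y = rotate_bformat_yaw(w, x, y, yaw)
        return virtual_cardioid(w, x, y, np.pi / 2), virtual_cardioid(w, x, y, -np.pi / 2)

Run it per block with the latest yaw from the tracker; there are no HRTFs 
anywhere, which is the point: the room-stable rotation alone is what resolves 
front/back.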

> I would like a little more information on 'head movements'.  I suspect 
all head movements are being treated as equal, and I have a theory that 
short rapid movements (like shaking the head) should be treated separately 
from movements that include the shoulders, or even the whole body. Short 
rapid movements of the eyeball have been studied and are well understood; 
without these small movements the visual field collapses completely. Does 
something similar happen for the aural field?

One of the more surprising things that Michael worked out is that the 
Moving Head localisation models gave the "same answers" regardless of 
whether they assumed you turned your whole body to face the source (eg 
Makita) or only allowed small involuntary head movements (eg Clark, 
Dutton & Vanderlyn, IIRC).

It's all there in his "General Metatheory ..." if you are prepared to 
study it and follow up the references.  See especially the 'stereo' 
appendix.

http://www.aes.org/e-lib/browse.cfm?elib=6827



Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-14 Thread Dave Malham
You could try mounting them on rugby shoulder pads (
https://www.rugbystore.co.uk/equipment/protective/shoulder-pads) either
directly or via some kind of magnetic mount if you want to wear a T shirt
or something over the pads.

Dave

On 14 June 2016 at 04:14, umashankar manthravadi 
wrote:

> I don’t remember reading ‘general metatheory’. I am downloading it now.
>
> I have printed a pair of four-headphone arrays, designed to sit on the
> shoulder about six inches from the ears. I have not figured out how to make
> a comfortable grip so they stay there. I am building them so I can feed
> them a cube signal derived from first-order B-format, and let head movements
> determine localization, just to see how far it will go.
>
> umashankar
>
> [Richard Lee's reply of 14 June 2016, quoted in full above, trimmed.]
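
umashankar's shoulder-mounted idea above amounts to a tiny cube decode: eight 
drivers around the head fed from first-order B-format, with real head 
movement (relative to the shoulder-fixed array) supplying the Wallach cues. 
Here is a minimal sketch of one way to derive the eight feeds, a virtual 
FuMa cardioid aimed at each cube corner; the corner layout, the cardioid 
choice and the names are illustrative guesses, not a description of his 
actual build:

    import numpy as np

    SQRT2 = np.sqrt(2.0)

    # Hypothetical layout: unit vectors to the eight corners of a cube around the head
    CORNERS = np.array([(sx, sy, sz) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)],
                       dtype=float) / np.sqrt(3.0)

    def cube_feeds(w, x, y, z):
        """Crude first-order decode of FuMa B-format (W at -3 dB): aim one
        virtual cardioid at each corner and use it as that driver's feed."""
        return [0.5 * (SQRT2 * w + cx * x + cy * y + cz * z) for cx, cy, cz in CORNERS]

Overall level scaling is left arbitrary here, and a proper decoder would do 
better than plain cardioids, but it is enough to hear whether unaided head 
movement does the localisation work.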



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] Zoom H2n firmware update for spatial audio + surround podcast production

2016-06-14 Thread John Leonard
Got it,

Thanks,

John

Please note new email address & direct line phone number
email: j...@johnleonard.uk
phone +44 (0)20 3286 5942


> On 13 Jun 2016, at 18:12, Courville, Daniel  wrote:
> 
> John Leonard wrote:
>   
>> Is there a simple tool for decoding the H2n spatial material to horizontal 
>> surround without all the YouTube VR stuff?
> 
> It's B-Format.
> 
> At the output of the H2n in "spatial audio" mode, it's B-Format, but ambiX 
> flavor. Put the Matthias Kronlachner B-Format converter before your usual 
> decoder (Surround Zone? Harpex-B?) and you're good to go.
> 
> http://www.matthiaskronlachner.com/?p=2015
> 
> - Daniel
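
For anyone wiring this up by hand instead of with the Kronlachner plugin: at 
first order the ambiX-to-FuMa conversion is only a channel reorder plus a 
3 dB drop on W (ambiX carries ACN order W, Y, Z, X with SN3D scaling; FuMa 
wants W, X, Y, Z with W at -3 dB, while the first-order directional channels 
keep their gain). A rough sketch in Python; the function name and the 
(4, nsamples) array layout are just choices made here:

    import numpy as np

    def ambix_to_fuma_foa(ambix):
        """First-order ambiX (ACN order W, Y, Z, X; SN3D) -> FuMa (W, X, Y, Z).

        `ambix` is an array of shape (4, nsamples). Only the channel order
        changes and W is attenuated by 3 dB; X, Y, Z keep their gain at
        first order."""
        w, y, z, x = ambix
        return np.stack([w / np.sqrt(2.0), x, y, z])

After this, a FuMa-expecting decoder (Surround Zone, Harpex-B, or a plain 
horizontal decode) should see the channel order and levels it expects.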

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.