Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread Peter Lennox
That reminds me - what's the highest order ambisonic-to-binaural encoding 
achieved so far, in combination with head-tracking (and with what latency)? - 
anyone know?
Cheers
ppl

Dr. Peter Lennox
Senior Lecturer in Perception
College of Arts
University of Derby, UK
e: p.len...@derby.ac.uk
t: 01332 593155
https://derby.academia.edu/peterlennox
https://www.researchgate.net/profile/Peter_Lennox

-Original Message-
From: Sursound [mailto:sursound-boun...@music.vt.edu] On Behalf Of Dave Hunt
Sent: 05 June 2016 18:14
To: sursound@music.vt.edu
Subject: Re: [Sursound] Using Ambisonic for a live streaming VR project

Hi,

It is hardly surprising that "I hear directional information and head tracking 
effects, but have never experienced the externalization and verisimilitude that 
direct dummy head or Algazi and Duda's motion-tracked binaural recordings can 
produce."

Even software direct binaural encoding seems more 'accurate' than B-Format 
ambisonics recoded to binaural. Then Algazi and Duda's system uses binaural 
recordings, and they know what they're doing.

Decorrelation, and software reverb, can help with a sense of externalisation, 
though you can go too far.

Ciao,

Dave Hunt


> From: Aaron Heller 
> Date: 4 June 2016 20:53:09 BDT
> To: Surround Sound discussion group 
> Subject: Re: [Sursound] Using Ambisonic for a live streaming VR
> project
>
>
> My experience with FOA-to-binaural rendering is pretty much the same as
> what Archontis says. I hear directional information and head tracking
> effects, but have never experienced the externalization and
> verisimilitude that direct dummy head or Algazi and Duda's
> motion-tracked binaural recordings can produce.
>
> Aaron (hel...@ai.sri.com)
> Menlo Park, CA
>
> On Sat, Jun 4, 2016 at 2:31 AM, Politis Archontis <
> archontis.poli...@aalto.fi> wrote:
>
>> Hi Jörn,
>>
>> On 03 Jun 2016, at 15:27, Jörn Nettingsmeier
>> <netti...@stackingdwarves.net> wrote:
>>
>> Note however that while the quality of first-order to binaural is
>> quite good because the listener is by definition always in the sweet
>> spot, first-order over speakers can be difficult for multiple
>> listeners when they're far outside the center.
>>
>>
>> This is by no means meant to provoke, but I have never managed to
>> hear a convincing B-format to binaural rendering, or to produce one
>> myself. Could you possibly share some info on the decoding approach
>> that you used that results in a good example?
>>
>> In my experience, no matter how much tweaking is done in the decoding,
>> there is severe localization blur, due to the large inherent spreading
>> of directional sounds, and low envelopment due to the wrong (high)
>> coherence in the binaural signals with reverberant sound, compared to
>> the actual binaural coherence. There is also serious colouration, with
>> a loss of high frequencies, that seems direction-dependent. The fact
>> that everything is at the ideal sweet spot under free-field conditions
>> doesn't seem to improve much; it actually seems to do more harm (I
>> believe that a small amount of natural added decorrelation from a room,
>> and tiny misalignments of speakers etc., improves the binaural
>> coherence, and the perceptual quality of loudspeaker B-format
>> reproduction, somewhat).
>>
>> Listening to a binaural rendering of a real B-format recording is not
>> so bad, as there is no reference for comparison, but for VR the
>> difference I've heard between using HRTFs directly and B-format
>> rendering is huge. And as many of these applications rely on sharp
>> directional rendering with accurate localization of multiple sound
>> events, traditional B-format decoding seems unsuitable to me. The
>> performance improves somewhat with 2nd-order rendering, and
>> significantly with 3rd- and 4th-order rendering. It also improves
>> dramatically using plain B-format with a well-implemented parametric
>> active decoder, such as HARPEX or DirAC.
>> So my guidelines for VR till now have been:
>> a) if bandwidth is not an issue, go for HOA rendering (at least
>> 3rd-order);
>> b) if it is an issue, as in the streaming application of the OP,
>> stream B-format and use an active decoder on the client side.
>>
>> But I'd like to hear many opinions on this too, and any counter
>> examples!
>>
>> Regards,
>> Archontis Politis


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.



Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread Politis Archontis
Hi Peter,

I have heard 7th-order real-time decoding with head tracking, and that's from a 
real microphone array. There was no perceptible latency.

And I think the AmbiX plugins can handle the rotations and the (N+1)^2 short 
convolutions for the same order without problems (head-tracking performance 
inside a DAW can be laggy, though, I guess...).

"Achieved so far" depends, I guess. If you want to capture the full 
high-frequency variability of HRTFs, you need about 15th-order HOA signals. We 
cannot record that high with any practical microphone array. But if 7th-order 
decoding has only a small, imperceptible difference compared to 15th, or if 
parametric decoders can achieve the same result with the first few orders for 
95% of sound scenes, then we don't need to go that high. There is some 
literature that shows the maximum required order for HRTFs (from a physical 
perspective), and the effect of the order from a perceptual perspective.

On the other hand, for mixing synthetic scenes, a recent PC can probably 
handle the encoding to 256 HOA channels for 15th order, the matrix 
multiplications for the rotations, and the 256 short convolutions, using most 
of the computer's resources, but I don't believe that's a smart way to go.
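For concreteness, here is a minimal, hypothetical sketch (not the AmbiX implementation; the function names `hoa_channels`, `convolve` and `render_ear` are my own) of the signal path described above: an order-N stream has (N+1)^2 channels, and head-tracked binaural rendering is one SH-domain rotation (a matrix multiply per sample) followed by (N+1)^2 short convolutions per ear. The rotation matrix and "HRTF" filters below are dummy placeholders.

```python
# Hypothetical sketch of HOA-to-binaural rendering: rotate the
# spherical-harmonic (SH) sound field to counter the head rotation,
# then convolve each SH channel with a short SH-domain filter for one
# ear and sum.  Rotation matrix and filters are dummy placeholders.

def hoa_channels(order):
    """Channel count of an order-N ambisonic stream."""
    return (order + 1) ** 2

def convolve(x, h):
    """Plain FIR convolution, output length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def render_ear(sh_frames, rotation, filters):
    """sh_frames: list of per-sample SH coefficient vectors;
    rotation: SH-domain rotation matrix (head tracking);
    filters: one short FIR per SH channel, for one ear."""
    n = len(filters)
    rotated = [[sum(rotation[i][j] * frame[j] for j in range(n))
                for i in range(n)] for frame in sh_frames]
    out = [0.0] * (len(sh_frames) + len(filters[0]) - 1)
    for ch in range(n):
        for k, v in enumerate(convolve([f[ch] for f in rotated],
                                       filters[ch])):
            out[k] += v
    return out

print(hoa_channels(7), hoa_channels(15))   # → 64 256

n = hoa_channels(1)                         # first order: 4 channels
identity = [[float(i == j) for j in range(n)] for i in range(n)]
frames = [[1.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0]]  # W-only signal
dummy_filters = [[1.0, 0.5]] * n            # 2-tap placeholder "HRTFs"
left = render_ear(frames, identity, dummy_filters)
print(left)                                 # → [1.0, 1.0, 0.25]
```

At 15th order that is 256 convolutions per ear (plus a 256x256 rotation), which is why a modern PC can manage it but it hardly seems a smart way to go.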

Best regards,
Archontis







Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread Politis Archontis
Hi Dave,

just a clarification: Algazi & Duda's system does not use binaural recordings; 
it approximates them using a spherical microphone array, not dissimilar to the 
ones used in ambisonics. 
But instead of combining all the microphones to generate the binaural 
directivities (as in ambisonics), it interpolates only between the two adjacent 
microphones that should be closest to the listener's ears. Consequently, it 
does not capture pinna cues, or cues from a non-spherical/asymmetrical head. 
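Purely as an illustration of that nearest-two-capsules idea (the real Algazi & Duda MTB processing is more sophisticated, e.g. frequency-dependent interpolation; the function name `ear_signal` and the plain linear crossfade here are my own simplification):

```python
# Hypothetical toy sketch of MTB-style ear-signal selection: M omni
# capsules equally spaced on a horizontal circle; for a given head
# azimuth, each ear's signal is a linear crossfade between the two
# capsules adjacent to that ear's position on the circle.

def ear_signal(head_azimuth_deg, capsule_signals, ear_offset_deg=90.0):
    """capsule_signals: M lists of samples, capsule k at k*360/M deg.
    ear_offset_deg: +90 for the left ear, -90 for the right."""
    m = len(capsule_signals)
    spacing = 360.0 / m
    ear_az = (head_azimuth_deg + ear_offset_deg) % 360.0
    lo = int(ear_az // spacing) % m          # capsule just behind the ear
    hi = (lo + 1) % m                        # capsule just ahead of it
    frac = (ear_az - lo * spacing) / spacing  # 0..1 between the pair
    return [(1.0 - frac) * a + frac * b
            for a, b in zip(capsule_signals[lo], capsule_signals[hi])]

# 8 capsules, each carrying a constant "signal" equal to its index:
caps = [[float(k)] for k in range(8)]
print(ear_signal(0.0, caps))    # left ear at 90 deg → capsule 2: [2.0]
print(ear_signal(22.5, caps))   # midway between capsules 2 and 3: [2.5]
```

Because each ear only ever draws on two capsules of a rigid sphere, pinna and head-asymmetry cues never enter the signal, which is the limitation noted above.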

To be honest, ambisonic recording and decoding provides a better framework 
for imposing individualized or non-individualized HRTFs on the recordings (it 
is harder to do that in their method), but you need high orders to capture the 
high-frequency cues.

Regards,
Archontis




Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread len moskowitz
The folks at OSSIC claim to be able to decode B-format to binaural with 
a personalized HRTF. Their headphones measure the response of your head and 
pinna in a very short time. They do head tracking too.



http://www.ossic.com/


Len Moskowitz (mosko...@core-sound.com)
Core Sound LLC
www.core-sound.com
Home of TetraMic



Re: [Sursound] Brahma first experience

2016-06-06 Thread Courville, Daniel
On 15-12-07 12:00, Emanuele Costantini wrote:

>I would like to share with you my first experience with Ambisonics 
>technique through the Brahma microphone:
>
>http://tinyurl.com/z6pm5ur
>
>Any feedback is really welcome.

Hi Emanuele,

I glanced at your PDF and I have two quick comments:



Re: [Sursound] Brahma first experience

2016-06-06 Thread Courville, Daniel
Please disregard the previous email...

On 16-06-06 14:27, Courville, Daniel wrote:

>On 15-12-07 12:00, Emanuele Costantini wrote:
>
>>I would like to share with you my first experience with Ambisonics 
>>technique through the Brahma microphone:
>>
>>http://tinyurl.com/z6pm5ur
>>
>>Any feedback is really welcome.
>
>Hi Emanuele,
>
>I glanced at your PDF and I have two quick comments:
>



[Sursound] Soundfield ST450 hire

2016-06-06 Thread Gareth Fry
Hi all,

I am looking to hire a couple of Soundfield ST450s in the UK from June 14th to 
21st. It's for a somewhat ambitious project that I can't talk about at the 
moment - NDA'd. I've already hired Richmond Film Services' and Studiocare's 
ST450s, but I'm struggling to find any other companies, or anyone else, with 
ST450s in stock.

Best regards,

Gareth Fry
Sound Designer

gareth...@hotmail.com | 07973 352669 | www.garethfry.com | www.theasd.uk 
Gareth Fry Ltd | Registered in UK, No. 09430786 | VAT No. GB 994736753



Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread Stefan Schreiber

Aaron Heller wrote:

> My experience with FOA-to-binaural rendering is pretty much the same as
> what Archontis says. I hear directional information and head tracking
> effects, but have never experienced the externalization and
> verisimilitude that direct dummy head

"Direct dummy head" recordings are completely unsuited to use in a 
"live streaming VR project"; see the thread topic.

> or Algazi and Duda's motion-tracked binaural
> recordings can produce.



These can be used:

http://dysonics.com/our-technology/

By taking head motion into account, MTB greatly reduces or even 
eliminates the issues and general limitations of traditional binaural 
recordings.



Some information about the "MTB mike":

http://dysonics.com/tag/rondomic/

http://dysonics.com/dysonics-and-telefunken-rock-aes-with-first-ever-vr-demos-combining-360-audio-with-360-video/

Indeed, it all looks very much like a 2D HOA microphone (sensing a sound 
field via 8 capsules positioned on a circle of the sphere).


Best regards,

Stefan



Re: [Sursound] Using Ambisonic for a live streaming VR project

2016-06-06 Thread Stefan Schreiber

Politis Archontis wrote:

> But instead of combining all the microphones to generate the binaural
> directivities (as in ambisonics), it interpolates only between the two
> adjacent microphones that should be closest to the listener's ears.
> Consequently, it does not capture pinna cues, or cues from a
> non-spherical/asymmetrical head.


Any source for this explanation?

I actually dare to question your view... How would you obtain any 
binaural cues via interpolation between two relatively closely spaced 
omni mikes (fixed on a sphere)?


As you even write, this doesn't seem to give any head and pinna cues. 
(It's called MTB. So I guess they would aim to provide several binaural 
perspectives, including head and pinna cues?)


Best,

Stefan



