Dear all, my name is Fabio Wanderley and my research is on exactly the topic you are discussing. To implement what I am calling a 'Virtual Studio' I have been through every implementation of head tracking in the ambisonic or binaural domain that I could find, and beyond all this discussion about localization cues, what I have to share are my findings on head-tracker systems. Take a look at the Max external developed by Dario Pizzamiglio; it has been very useful to me despite its limited working range. It is a webcam-based head tracker and performs very well, although it is somewhat dependent on the webcam's response.

http://www.lim.dico.unimi.it/HiS

See you
Fabio Wanderley

On May 26 2011, sursound-requ...@music.vt.edu wrote:

Send Sursound mailing list submissions to
        sursound@music.vt.edu

To subscribe or unsubscribe via the World Wide Web, visit
        https://mail.music.vt.edu/mailman/listinfo/sursound
or, via email, send a message with subject or body 'help' to
        sursound-requ...@music.vt.edu

You can reach the person managing the list at
        sursound-ow...@music.vt.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Sursound digest..."




Today's Topics:

  1. Re: Sound Externalization Headphone (Hector Centeno)
  2. Re: Sound Externalization Headphone (Bo-Erik Sandholm)
  3. Re: Sound Externalization Headphone (Pierre Alexandre Tremblay)
  4. Re: Sound Externalization Headphone (Jörn Nettingsmeier)
  5. Re: Sound Externalization Headphone (Ralf R. Radermacher)
  6. Re: Sound Externalization Headphone (Pierre Alexandre Tremblay)
  7. Re: Sound Externalization Headphone (Marc Lavallée)
  8. Re: Sound Externalization Headphone (jim moses)
  9. Re: Sound Externalization Headphone (Junfeng Li)
 10. Re: Sound Externalization Headphone (Hector Centeno)


----------------------------------------------------------------------

Message: 1
Date: Wed, 25 May 2011 17:15:11 -0400
From: Hector Centeno <i...@hcenteno.net>
Subject: Re: [Sursound] Sound Externalization Headphone
To: Surround Sound discussion group <sursound@music.vt.edu>
Message-ID: <BANLkTi=xatam9hmmzkecbs-71zu8+m8...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

This is exactly what I've been exploring, using ambisonic recordings from a
tetrahedral mic. I've been decoding to fixed HRTFs corresponding to
virtual speakers in a cube configuration. It's good to know who was doing
this and when it was already being done. I also made a head-tracking sensor
using an accelerometer, gyroscope and magnetometer controlled by an
Arduino Pro Mini:

http://vimeo.com/22727528
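
Roughly, the decode stage looks something like this -- a simplified Python/NumPy sketch rather than my actual code, and the decode coefficients, channel conventions and the hrirs array layout (8 virtual speakers x 2 ears x taps, filled from whatever HRIR set you have) are only illustrative:

# Sketch of a first-order B-format (W, X, Y, Z) decode to 8 virtual
# speakers on a cube, each convolved with a fixed HRIR pair and mixed
# down to two headphone channels. Names and coefficients are illustrative.
import numpy as np
from scipy.signal import fftconvolve

# Cube corner directions as (azimuth, elevation) in degrees.
speaker_dirs = [(az, el) for el in (35.26, -35.26)
                for az in (45.0, 135.0, 225.0, 315.0)]

def decode_cube(bformat):
    """bformat: 4 x N array (W, X, Y, Z). Returns 8 virtual speaker feeds
    using a simple first-order 'cardioid-style' decode."""
    w, x, y, z = bformat
    feeds = []
    for az_deg, el_deg in speaker_dirs:
        az, el = np.radians(az_deg), np.radians(el_deg)
        gx = np.cos(az) * np.cos(el)
        gy = np.sin(az) * np.cos(el)
        gz = np.sin(el)
        feeds.append(0.5 * (np.sqrt(2.0) * w + gx * x + gy * y + gz * z))
    return feeds

def binauralise(bformat, hrirs):
    """hrirs: 8 x 2 x T array of left/right HRIRs, one pair per virtual
    speaker. Convolve each feed with its fixed pair and sum at the ears."""
    feeds = decode_cube(bformat)
    out_len = len(feeds[0]) + hrirs.shape[-1] - 1
    out = np.zeros((2, out_len))
    for feed, (h_left, h_right) in zip(feeds, hrirs):
        out[0] += fftconvolve(feed, h_left)
        out[1] += fftconvolve(feed, h_right)
    return out

Head tracking goes in front of this as a rotation of the B-format signals, so the HRIRs themselves never have to change.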

Cheers,

Hector

On Wed, May 25, 2011 at 4:06 AM, Dave Malham <dave.mal...@york.ac.uk> wrote:


On 24/05/2011 20:00, f...@libero.it wrote:

<snip>

I should mention that interpolation of HRTF is not the only possible
technique; you can use for example a virtual loudspeaker array...

This is certainly the way the Lake DSP system worked that they demonstrated back in 1993 (I think it's in the papers for the London VR93 conference from that year, but I don't have my copy of the proceedings to hand). The sounds were recorded in (first-order) Ambisonics, and the head tracking drove a rotate/tilt algorithm that fed a decoder to virtual speakers. The virtual speaker signals were then convolved with fixed HRTFs corresponding to the speakers' positions, which were fixed with respect to the head, mixed together and fed to the headphones.
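
Something along these lines, I imagine -- a rough Python sketch of the rotate stage only, not the actual Lake code, and the channel order and sign conventions are just assumptions:

# Counter-rotating first-order B-format (W, X, Y, Z) by the tracked head
# orientation, so the rendered scene stays still in the room as the head
# turns. W is omnidirectional and untouched; X, Y, Z rotate like a vector.
# Angle and sign conventions here are assumptions, not the Lake DSP ones.
import numpy as np

def rotate_bformat(bformat, yaw, pitch=0.0, roll=0.0):
    """bformat: 4 x N array. yaw/pitch/roll: head angles in radians."""
    cy, sy = np.cos(-yaw), np.sin(-yaw)      # negated: rotate the field,
    cp, sp = np.cos(-pitch), np.sin(-pitch)  # not the head
    cr, sr = np.cos(-roll), np.sin(-roll)
    rot_yaw   = np.array([[cy, -sy, 0.0], [sy,  cy, 0.0], [0.0, 0.0, 1.0]])
    rot_pitch = np.array([[cp, 0.0,  sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rot_roll  = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr,  cr]])
    rot = rot_yaw @ rot_pitch @ rot_roll
    w = bformat[0]
    xyz = rot @ bformat[1:]                  # rotate X, Y, Z as a vector
    return np.vstack([w[None, :], xyz])

The rotated B-format then goes to the fixed virtual-speaker/HRTF stage, with the matrix updated block by block from the tracker.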


Dave


------------------------------

Message: 2
Date: Thu, 26 May 2011 09:11:16 +0200
From: Bo-Erik Sandholm <bo-erik.sandh...@ericsson.com>
Subject: Re: [Sursound] Sound Externalization Headphone
To: Surround Sound discussion group <sursound@music.vt.edu>
Message-ID: <e023323b1ad21d44af70273b35e7501502c5e43...@esesscms0356.eemea.ericsson.se>
Content-Type: text/plain; charset="iso-8859-1"


Hi
You seem to have made a very advanced head-tracking device.
Did you ever consider using a Wii remote for head tracking?
http://rpavlik.github.com/wiimote-head-tracker-gui/
Wii head tracking in VR Juggler through VRPN
http://www.xs4all.nl/~wognum/wii/

http://www.wiimoteproject.com/

Regards
Bo-Erik




-----Original Message-----
From: sursound-boun...@music.vt.edu [mailto:sursound-boun...@music.vt.edu] On Behalf Of Hector Centeno
Sent: 25 May 2011 23:15
To: Surround Sound discussion group
Subject: Re: [Sursound] Sound Externalization Headphone

This is exactly what I've been exploring, using ambisonic recordings from a tetrahedral mic. I've been decoding to fixed HRTFs corresponding to virtual speakers in a cube configuration. It's good to know who was doing this and when it was already being done. I also made a head-tracking sensor using an accelerometer, gyroscope and magnetometer controlled by an Arduino Pro Mini:

http://vimeo.com/22727528

Cheers,

Hector

On Wed, May 25, 2011 at 4:06 AM, Dave Malham <dave.mal...@york.ac.uk> wrote:


On 24/05/2011 20:00, f...@libero.it wrote:

<snip>

I should mention that interpolation of HRTF is not the only possible technique; you can use for example a virtual loudspeaker array...

This is certainly the way the Lake DSP system worked that they demonstrated back in 1993 (I think it's in the papers for the London VR93 conference from that year, but I don't have my copy of the proceedings to hand). The sounds were recorded in (first-order) Ambisonics, and the head tracking drove a rotate/tilt algorithm that fed a decoder to virtual speakers. The virtual speaker signals were then convolved with fixed HRTFs corresponding to the speakers' positions, which were fixed with respect to the head, mixed together and fed to the headphones.


Dave


------------------------------

Message: 3
Date: Thu, 26 May 2011 11:00:12 +0200
From: Pierre Alexandre Tremblay <tremb...@gmail.com>
Subject: Re: [Sursound] Sound Externalization Headphone
To: Surround Sound discussion group <sursound@music.vt.edu>
Message-ID: <e7ad28d3-ca72-49d8-93ac-d749db2c1...@gmail.com>
Content-Type: text/plain; charset=us-ascii

so what happens is you take the positions of the speakers you want to simulate, and convolve each speaker signal with the appropriate head-related transfer functions for the left and right ear.

Can I raise a concern here? I have compared different IRs of a 5.1 setup with real, mastered 5.1 programme material, and the loss was very significant, mainly in terms of comb-filtering-type artifacts. So much so that I decided not to include them on my album (this was for a 5.1 release for which I wanted the stereo reduction to be binaural).
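
To show where that kind of combing comes from (a toy model, not my actual measurements): a phantom-centre signal fed to two virtual speakers reaches each ear over two HRTF paths with different delays, and the sum of the two paths notches. The delays and gains below are invented purely for illustration:

# Toy model of comb filtering at one ear when the same signal arrives via
# two virtual speakers with slightly different delays. Each path is reduced
# to a pure delay and gain; real HRTFs add further colouration on top.
import numpy as np

freqs = np.linspace(20.0, 20000.0, 2000)

delay_near, gain_near = 0.00e-3, 1.0   # near (ipsilateral) speaker path
delay_far,  gain_far  = 0.25e-3, 0.7   # far (contralateral) speaker path

# Complex sum of the two paths at the ear, then magnitude in dB.
response = (gain_near * np.exp(-2j * np.pi * freqs * delay_near) +
            gain_far  * np.exp(-2j * np.pi * freqs * delay_far))
mag_db = 20.0 * np.log10(np.abs(response))

# With a 0.25 ms path difference the first notch lands near 1/(2 * 0.25 ms) = 2 kHz.
first = np.argmin(mag_db[freqs < 4000.0])
print("first notch ~%.0f Hz, depth %.1f dB" % (freqs[first], mag_db.min()))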

Now, I know that using generic HRTFs is not going to help me, but this was far worse than anything I could have imagined...

I presume that if I had used Ambisonics as my spatialisation approach throughout the mix and composition, it would have worked better as a re-rendering...

my 2 cents

pa

------------------------------

Message: 4
Date: Thu, 26 May 2011 11:08:19 +0200
From: Jörn Nettingsmeier <netti...@stackingdwarves.net>
Subject: Re: [Sursound] Sound Externalization Headphone
To: sursound@music.vt.edu
Message-ID: <4dde1883.2030...@stackingdwarves.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 05/26/2011 11:00 AM, Pierre Alexandre Tremblay wrote:
so what happens is you take the positions of the speakers you want
to simulate, and convolve each speaker signal with the appropriate
head-related transfer functions for the left and right ear.

Can I raise a concern here? I have compared different IRs of a 5.1
setup with real, mastered 5.1 programme material, and the loss was very
significant, mainly in terms of comb-filtering-type artifacts. So much
so that I decided not to include them on my album (this was for a 5.1
release for which I wanted the stereo reduction to be binaural).

hmm. interesting. can you share a short snippet of the original 5.1 and the binaural rendering, one that shows those artefacts?

I presume that if I had used Ambisonics as my spatialisation approach
throughout the mix and composition, it would have worked better as a
re-rendering...

why should it? the virtual speaker approach is pretty much independent of your speaker spatialisation technique.

*.*

btw, since nobody has mentioned it in this thread yet, there is the soundscape renderer from tu berlin/telekom labs. it is available as open-source code and has just seen a new release:
http://www.tu-berlin.de/?id=ssr

there's also a wealth of papers out there describing its workings and applications in listening tests and other studies.

best,


jörn




_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
