Len Moskowitz wrote:

Jaunt VR has developed a virtual reality camera. They're using TetraMic for recording audio, decoding with headtracking for playback over headphones and speakers. For video playback they're using the Oculus Rift.

http://time.com/49228/jaunt-wants-to-help-hollywood-make-virtual-reality-movies/
http://gizmodo.com/meet-the-crazy-camera-that-could-make-movies-for-the-oc-1557318674


Quoting from this link:

A close-up of the 3D microphone that allows for 3D spatialized audio. If you're wearing headphones, there's actually headtracking for the Oculus to tell which direction you're looking--when you change your view, the sound mix will also change to match, in order to keep the sound in the same space.


I have suggested this possibility before, for example here:

http://comments.gmane.org/gmane.comp.audio.sursound/5172

(I was obviously thinking of an < audio-only > application, without any video. It was already clear then that the Oculus Rift included all the hardware necessary for HT audio decoding, although Oculus didn't do this in 2012 or 2013.)

This suggestion led (by influence or coincidence) to some further developments, which could be followed on the sursound list:

http://comments.gmane.org/gmane.comp.audio.sursound/5387

To be frank: at least two "groups" of people on this list have demonstrated head-tracked decoding of FOA recently, and < before > Jaunt VR, done in a very similar fashion. I could name Hector Centeno and Bo-Erik Sandholm (Bo-Erik introduced the external HT hardware, whereas Hector's Android app already existed), and further Matthias Kronlachner at IEM Graz. If not more people...

Far from complaining about this, I welcome this coincidence or "coincidence". (The "VR movie" people and "our" list colleagues use basically the same HT decoder technology, and maybe even the same decoding software.) All of this shows that Ambisonics is mature enough to be used for some very sophisticated applications, if we are speaking about cinematic VR demonstrations... (We are all using "the power of HT-decoded FOA, in VR worlds, VR movies, and maybe even for 3D audio music recordings"... ;-) )
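
For anyone who has not implemented this themselves, the core of such an HT decoder is quite small. Here is a minimal sketch in Python/NumPy; the yaw-only rotation, the FuMa channel convention, the square virtual-speaker layout and the hrirs structure are all my own illustrative assumptions, not Jaunt's (or anybody else's) actual implementation:

import numpy as np
from scipy.signal import fftconvolve

def rotate_foa_yaw(w, x, y, z, yaw):
    # Compensate head yaw (radians) by counter-rotating the B-format
    # sound field about the vertical axis; W and Z are invariant.
    xr = np.cos(yaw) * x + np.sin(yaw) * y
    yr = -np.sin(yaw) * x + np.cos(yaw) * y
    return w, xr, yr, z

def foa_to_binaural(w, x, y, z, yaw, hrirs):
    # Head-tracked FOA -> binaural via virtual loudspeakers.
    # hrirs: dict mapping azimuth in degrees -> (left_ir, right_ir),
    # e.g. four entries at +/-45 and +/-135 for a horizontal square;
    # all HRIRs are assumed to have equal length so the sums line up.
    # (Horizontal-only layout: Z is carried through but not decoded.)
    w, x, y, z = rotate_foa_yaw(w, x, y, z, yaw)
    out_l = out_r = 0.0
    for az_deg, (ir_l, ir_r) in hrirs.items():
        az = np.radians(az_deg)
        # Simple cardioid "virtual microphone" decode (FuMa weights);
        # a real decoder would use a properly designed decoder matrix.
        s = 0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)
        out_l = out_l + fftconvolve(s, ir_l)
        out_r = out_r + fftconvolve(s, ir_r)
    return out_l, out_r

In a real-time decoder, yaw would of course be re-read from the head tracker in every audio block, which is exactly why the HT update frequency matters (see my P.S. below).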

Seeing the recent and ongoing development activities in areas like UHD, MPEG-H 3D Audio (aka ISO/IEC 23008-3), gaming, VR, 3D movies and now "VR movies" (not yet a technical term), it is a good question why surround sound/3D audio is used in so many areas, but < still not > for (published) music recordings. (This situation looks increasingly < unbelievable >.)


Anyway: Congratulations to Len and TetraMic, who are involved in these activities!


Now, I have some suggestions for further improvement, addressed to our colleagues and also to Jaunt VR/TetraMic:

If the reference quality for HT binaural systems is something like this

http://smyth-research.com/technology.html

you would still have to incorporate personalized HRTF (HRIR/BRIR) data sets into your decoder. (HRIR is anechoic; BRIR includes the room acoustics.)

It is probably possible to calculate both HRIR and BRIR data sets from 3D scans, or even from "plain" photographs. (This has been done at least for HRIR/HRTF data sets, derived from optical 3D scans or photographs of the torso/head/ear shapes. There is probably still ample room to improve the existing methods for calculating HRIRs/HRTFs from optical data. For example, you could compare your calculation algorithm against corresponding real-world acoustical measurements and follow some evolutionary improvement strategy, matching calculation results to actual measurements more and more closely with each algorithm generation; a sketch of this follows below. Just a quick idea...)
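
To make the "evolutionary improvement" idea a bit more concrete: below is a minimal (1+1)-evolution-strategy loop in Python. Here simulate_hrtf() is just a placeholder for whatever optical-data-based calculation one uses, and the log-spectral error is only one possible distance measure; all names are illustrative assumptions, not an existing tool:

import numpy as np

def log_spectral_error(h_sim, h_meas, eps=1e-12):
    # Mean squared log-magnitude-spectrum distance between a simulated
    # and a measured HRIR (1-D time-domain arrays of equal length).
    s = 20 * np.log10(np.abs(np.fft.rfft(h_sim)) + eps)
    m = 20 * np.log10(np.abs(np.fft.rfft(h_meas)) + eps)
    return float(np.mean((s - m) ** 2))

def evolve(simulate_hrtf, h_measured, params0, sigma=0.1,
           generations=200, seed=0):
    # (1+1)-ES: mutate the calculation parameters and keep a mutation
    # only if the simulated HRIR gets closer to the acoustic measurement.
    # simulate_hrtf(params) -> HRIR array is a hypothetical stand-in for
    # the actual optical-scan-based model.
    rng = np.random.default_rng(seed)
    best = np.asarray(params0, dtype=float)
    best_err = log_spectral_error(simulate_hrtf(best), h_measured)
    for _ in range(generations):
        cand = best + sigma * rng.standard_normal(best.shape)
        err = log_spectral_error(simulate_hrtf(cand), h_measured)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

In practice you would of course average the error over many directions and several test subjects, rather than fitting a single HRIR.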

To calculate some (reverberant) BRIR data set (the transfer function of some listener in a room), you could maybe apply some form of acoustical raytracing.
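
One classic form of such "acoustical raytracing" is the image-source method. A deliberately minimal sketch (shoebox room, direct sound plus the six first-order reflections, omnidirectional source and receiver, uniform wall absorption -- all simplifying assumptions on my part; a real BRIR calculation would go to high reflection orders and convolve each arrival with the HRIR for its direction of incidence):

import numpy as np

def shoebox_ir_first_order(src, rcv, room, fs=48000, c=343.0, alpha=0.3):
    # Room impulse response from the direct path plus the six
    # first-order image sources of a rectangular ("shoebox") room.
    # src, rcv: (x, y, z) in metres; room: (Lx, Ly, Lz);
    # alpha: wall absorption coefficient, assumed equal on all walls.
    src = np.asarray(src, float)
    rcv = np.asarray(rcv, float)
    refl = np.sqrt(1.0 - alpha)            # pressure reflection factor
    images = [(src, 1.0)]                  # direct sound
    for axis in range(3):
        for wall in (0.0, room[axis]):     # mirror source at both walls
            img = src.copy()
            img[axis] = 2.0 * wall - src[axis]
            images.append((img, refl))
    dists = [np.linalg.norm(img - rcv) for img, _ in images]
    ir = np.zeros(int(fs * max(dists) / c) + 2)
    for (img, g), d in zip(images, dists):
        ir[int(round(fs * d / c))] += g / max(d, 1e-3)   # 1/r spreading
    return ir

Replacing the scalar sum by a convolution of each arrival with the direction-dependent HRIR pair would turn this room IR into a (crude) BRIR.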

It would be far easier to < calculate > personalized HRIR/BRIR data sets than to measure them. (Acoustical full-sphere measurements would require measuring hundreds or thousands of different positions, over a full or at least half 3D sphere; e.g., a 5-degree azimuth/elevation grid over the upper hemisphere alone already gives 72 x 19 = 1368 directions.)


Besides the suggestion to investigate the use of individual HRIRs/HRTFs, I have a direct question to Jaunt VR:

What specific set of HRIRs/HRTFs (or BRIRs?) are you currently using as part of your Ambisonics --> head-tracked binaural decoder?

(I would imagine that you have tested some existing collections and chosen a specific set according to your listening results. Because you are using data sets, and probably also software, from other people/parties, I believe it would be fair enough to answer this question.)


Best regards,

Stefan

P.S.: If possible, I would also be curious to hear what HT update frequency you are using for the audio decoder, and maybe to ask some other questions.





http://www.engadget.com/2014/04/03/jaunt-vr/


Len Moskowitz (mosko...@core-sound.com)
Core Sound LLC
www.core-sound.com
Home of TetraMic
