Hello Michelle,

If you don't absolutely need to work with Ambisonics, you could use our 3D 
Tune-In Toolkit test application (compatible with macOS, Windows and Linux), 
which uses "direct" binaural rendering (i.e. convolution with HRTFs) for the 
direct path, and a virtual-loudspeaker Ambisonic rendering for the 
reverberation (through Binaural Room Impulse Responses - BRIRs). This "blended" 
approach gives you a very realistic simulation of distance (excellent in the 
near field) and a "true" binaural reverb (not just stereo).
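
For intuition, the "direct path" part is just a per-ear FIR convolution of the 
source signal with a left and a right HRIR. Here is a toy Python sketch; the 
two-tap "HRIRs" are made-up placeholders (real ones, e.g. from our resources 
package, have hundreds of taps per ear):

```python
# Toy sketch of direct-path binaural rendering by HRIR convolution.
# The two-tap "HRIRs" below are made-up placeholders; real HRIR pairs
# have hundreds of taps per ear.

def convolve(signal, ir):
    """Naive FIR convolution: y[n] = sum_k ir[k] * signal[n-k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

def binauralise(mono, hrir_left, hrir_right):
    """One source direction -> one (left, right) pair of ear signals."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# A source slightly to the left: louder and earlier in the left ear.
left, right = binauralise([1.0, 0.5], [0.9, 0.3], [0.0, 0.6])
```

In practice this is done per source, block by block, with FFT-based (partitioned) 
convolution rather than the naive loop above.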

I've made a quick video showing how the basic functions work (wear headphones 
when you listen):

https://imperialcollegelondon.box.com/s/t8pza3xazaqwpzh1gr5az1v1o8ulv5m9

You can run multiple instances of the app on macOS, each outputting to a 
different channel pair on a multichannel audio interface, from which you could 
then stream (using, for example, radio transmitters/receivers) to the different 
headphones. The 3DTI Toolkit test application can be controlled via OSC (6DOF 
for the listener position and orientation, and you can control the playback 
and position of each source individually). It might, though, be problematic to 
use the head-tracker of the Mobius headphones, as I'm not sure you can receive 
data from multiple headphones on a single computer. One solution we have 
successfully tried in the past is to use mobile phones as head-trackers, with 
apps such as GyrOSC, which can all send their data to the main computer, using 
a different UDP port for each user/spatialisation.
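
If it helps, here is a rough, standard-library-only Python sketch of what the 
receiving end does with each user's packets once they arrive on that user's UDP 
port. The /gyrosc/gyro address and the pitch/roll/yaw argument order are 
assumptions - check them against the app's documentation:

```python
# Minimal sketch of decoding one OSC message, as a phone app like GyrOSC
# would send it to a per-user UDP port. The /gyrosc/gyro address and the
# pitch/roll/yaw payload order are assumptions, not verified against the app.
import struct

def _padded(b):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def parse_osc(packet):
    """Return (address, [float args]) from an OSC packet with ',f...' type tags."""
    end = packet.index(b"\x00")
    address = packet[:end].decode()
    offset = len(_padded(packet[:end]))
    tend = packet.index(b"\x00", offset)
    tags = packet[offset + 1:tend].decode()        # skip the leading ','
    offset += len(_padded(packet[offset:tend]))
    return address, [
        struct.unpack(">f", packet[offset + 4*i: offset + 4*i + 4])[0]
        for i, t in enumerate(tags) if t == "f"
    ]

# Build a packet exactly as a sender would, then decode it.
packet = _padded(b"/gyrosc/gyro") + _padded(b",fff") + struct.pack(">fff", 0.1, -0.2, 1.5)
addr, (pitch, roll, yaw) = parse_osc(packet)
```

One such listener per UDP port (one port per user) is enough; each decoded 
orientation is then forwarded to that user's instance of the spatialiser.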

We have developed an iOS 'wrapper' of the Toolkit, and a "prototype" iOS 
application which uses Apple ARKit and allows the user to navigate a space 
with 5DOF (no up-down - Z). It works well if there are no objects/furniture 
in the room, and if the floor is not too dark. It runs only on newer iPhones 
and the iPad Pro, but it's just a prototype at the moment.

If you want to simulate different rooms (e.g. different sizes), you just need 
to import into the Toolkit the BRIR of the room you want to emulate (only 6 
positions are needed: 0°, 90°, 180° and 270° azimuth at 0° elevation, plus 
±90° elevation). You can either measure these using a dummy head, or simulate 
them using image-source, ray-tracing or other techniques. With the 3DTI 
Toolkit we have also released a "resources" package with a few BRIRs (and a 
few HRTFs).
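
If you go the simulation route, the first-order image-source idea is simple 
enough to sketch: each wall of a shoebox room mirrors the source, and every 
image contributes a delayed, attenuated copy at the listener. The room size, 
positions and speed of sound below are arbitrary example values:

```python
# Hedged sketch of first-order image sources for simulating a (B)RIR in a
# shoebox room. All numeric values below are arbitrary examples.
import math

def first_order_images(src, room):
    """The 6 first-order image sources of src=(x,y,z) in a shoebox room=(Lx,Ly,Lz)."""
    images = []
    for axis in range(3):
        lo = list(src); lo[axis] = -src[axis]                  # mirror in the wall at 0
        hi = list(src); hi[axis] = 2 * room[axis] - src[axis]  # mirror in the far wall
        images += [tuple(lo), tuple(hi)]
    return images

def delays_seconds(src, listener, room, c=343.0):
    """Propagation delay of the direct path plus each first-order reflection."""
    paths = [src] + first_order_images(src, room)
    return [math.dist(p, listener) / c for p in paths]

d = delays_seconds(src=(1.0, 2.0, 1.5), listener=(3.0, 2.0, 1.5), room=(5.0, 4.0, 3.0))
```

A proper simulator adds higher-order images, per-wall absorption and air 
attenuation, but the geometry is the same; dedicated room-acoustics software 
will do all of this for you.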

You can download the 3DTI Toolkit test app from the GitHub repo:

https://github.com/3DTune-In/3dti_AudioToolkit/releases

You'll find the resources package in the "Common Resources" section.

I hope it helps!
Lorenzo


--
Dr Lorenzo Picinali
Senior Lecturer in Audio Experience Design
Director of Undergraduate Studies
Dyson School of Design Engineering
Imperial College London
Dyson Building
Imperial College Road
South Kensington, SW7 2DB, London
T: 0044 (0)20 7594 8158
E: l.picin...@imperial.ac.uk

http://www.imperial.ac.uk/people/l.picinali

www.imperial.ac.uk/design-engineering-school
________________________________
From: Sursound <sursound-boun...@music.vt.edu> on behalf of Bo-Erik Sandholm 
<bosses...@gmail.com>
Sent: 29 September 2019 17:50
To: sursound <sursound@music.vt.edu>
Subject: Re: [Sursound] Ambisonic Audio - Interactive Installation

https://proximi.io/accurate-indoor-positioning-bluetooth-beacons/

Principles and a solution for BLE localization.

On Sun, 29 Sep 2019 at 16:37, Marc Lavallée <m...@hacklava.net> wrote:

> Hi Michelle,
>
> A master computer could remotely control portable computers (probably
> phones) to render the sounds using either a customized embedded version
> of the SSR software (http://spatialaudio.net/ssr/), or to play
> personalized binaural streams rendered in real time from the master
> computer using either SSR, Panoramix (from IRCAM) or some other software
> solution (as described in previous answers).
>
> Then there's the question of tracking... For interactive installations,
> the easiest I used was a Kinect (and now there's alternative products).
> The unknown part is how to link one portable computer to a detected
> person; maybe Bluetooth beacons and triangulation could be used to
> detect the computers and report their approximate positions.
>
> Marc
>
>
> On 2019-09-29 at 08:45, Daniel Rudrich wrote:
> > Hi there,
> >
> > It's indeed very CPU-intensive, especially as each source has to be
> encoded with its reflections.
> >
> > I wrote a VST plug-in called RoomEncoder, which renders a source in a
> shoebox room and adds reflections, which are also filtered depending on the
> wall absorption and image-source order. Source and listener can be freely
> placed within the room, and the source can also have a directivity which
> can be frequency-dependent. The output is higher-order Ambisonics, so for
> playback all sources for one listener have to be summed, rotated
> (head-tracking) and binaurally decoded.
> >
> > Several instances of the plug-in can be linked, so if you change the
> room properties in one of them, all of them change. The plug-in renders up
> to 236 reflections; however, a handful (or two) of them are enough to
> give a convincing room impression, especially when combined with an FDN
> (feedback delay network). The good thing is that you'll need only one FDN
> for all listeners and sources, so at least that part is not so CPU-demanding.
> The FdnReverb plug-in also has a fade-in feature, which helps it not get in
> the way of the early reflections of the RoomEncoder.
> >
> > We used this setup in an interactive audio game, tracked with an optical
> tracking system, logic implemented in PD and rendered with Reaper.
> >
> > Both plug-ins can be found in the IEM Plug-in Suite:
> https://plugins.iem.at. As they are open-source, you could compile them
> yourself to get the most out of the CPU architecture you are using, e.g.
> AVX512 SIMD extensions.
> >
> > Best
> > Daniel
> >
> >> On 29.09.2019 at 14:06, Dave Hunt <davehuntau...@btinternet.com> wrote:
> >>
> >> Hi Michelle,
> >>
> >> I believe that what you want to do is possible, but not easy.
> >>
> >> It is possible to move the listener in a totally synthetic Ambisonic
> sound field. You have to build in "distance modelling", as well as the
> differing direction of each source, as the listener moves. Adding room
> simulation or reverberation brings an extra layer of complexity, as the
> nature of the reflected, delayed and diffused sound from each source is
> different at every listening position.
> >>
> >> This soon becomes rather processor-intensive. I have made Max patches
> that take this approach, and they do "work", but there are problems.
> Although basic Ambisonic source encoding is mathematically relatively
> simple, multiple sources with multiple reflections, each of which has to
> be encoded, become appreciably more involved. Moving sources require
> constant recalculation, preferably at near audio sampling rate. Even with
> just first-order Ambisonics, this gets pretty demanding to do well.
> >>
> >> For what you are proposing, you would have to deliver a unique audio
> stream to each pair of headphones, binaurally encoded to match the position
> of the headphones in the “room”. Thus you need spatial tracking of each
> pair, as well as the head movement data for each. This could control the
> ambisonic encoding and decoding to binaural of each individual headphone
> signal.
> >>
> >> For one listener this already involves a lot of data and processing,
> and at least two wireless transmission channels. For several listeners the
> technology and resources required becomes uneconomic. Currently you would
> probably require a computer for each listener, or possibly a very powerful
> computer or two. Then a lot of programming, engineering and material
> expense.
> >>
> >> Perhaps you would be better off considering a loudspeaker-based approach?
> >>
> >> For this it may also be better to consider an amplitude/delay-based
> approach (Delta Stereophony, or basic wave field synthesis), rather than
> Ambisonics. This appears to be what TiMax, L-ISA, d&b's Soundscape, Astro
> and other similar systems are based on. Again, not easy to do well. Not
> perfect for everything, but then no algorithm is.
> >>
> >>
> >> Ciao,
> >>
> >> Dave Hunt
> >>
> >>
> >>> On 28 Sep 2019, at 17:00, sursound-requ...@music.vt.edu wrote:
> >>>
> >>> From: Michelle Irving <michelle.irv...@soleilsound.com>
> >>> Subject: [Sursound] Ambisonic Audio - Interactive Installation
> >>> Date: 27 September 2019 at 18:22:26 BST
> >>> To: sursound@music.vt.edu
> >>>
> >>>
> >>> Hi,
> >>>
> >>> I'm working with an artist who wants to explore Ambisonic Audio
> >>> and use the Audeze Mobius headphones in an audio installation.
> >>> The soundscape will consist of recordings of various individual vocals
> >>> spatialized
> >>> throughout the "room". There is a video projection overhead. Hard sync
> is
> >>> not required.
> >>>
> >>> Questions:
> >>> 1. Is it possible to exploit the head-tracking of the Mobius headphones
> >>> to give each person an individualized experience of the audio
> >>> composition? I.e. Person A is in the far left front corner hearing a
> >>> particular voice in close proximity, while Person B is in the far back
> >>> right corner, barely hearing what Person A is hearing?
> >>>
> >>> 2. If the answer to 1 is YES - would you recommend using Max/MSP or
> >>> Arduino for configuring the individual playbacks (mappings between
> >>> headphones and some sort of player)?
> >>>
> >>> 3. I've looked at the Waves NX toolkit and I don't see a feature to
> >>> determine virtual room size. Am I missing something, or is there other
> >>> tech that could allow me to map the head-tracker to a specific room
> >>> size?
> >>>
> >>> 4. Open to better ideas on how to achieve an interactive Ambisonic
> >>> audio soundscape that works with multiple headsets.
> >>>
> >>> thanks!
> >>> Michelle
> >>>
> >>> --
> >>>
> >>> Michelle Irving
> >>>
> >>> Post-Audio Supervisor
> >>>
> >>> 416-500-1631
> >>>
> >>> 507 King St. East
> >>>
> >>> Toronto, Ontario
> >>>
> >>> www.soleilsound.com
> >> _______________________________________________
> >> Sursound mailing list
> >> Sursound@music.vt.edu
> >> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe
> here, edit account or options, view archives and so on.
