Dave Malham <dave.mal...@york.ac.uk> wrote:

> Another idea: I would like to use my Kinect for head tracking to check
> > if a listener's head is outside the "ambisonic sweet spot", or to
> > adjust the decoding parameters according to the head position.
> >
> >
> This would be eminently do-able, I think, so long as
> 
> a) the latency was low enough,
> b) there was only one listener, and
> c) the movement was small enough not to require a major resetting of
> the decode.

The latency would be too high for real-time adjustment without audible
artifacts; the idea is rather to inform the listener of the optimal
listening position, using 3D visual feedback. It could also be used to
set the initial decoding parameters.
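
A minimal sketch of the check (Python; read_head_position(), the sweet-spot
centre and the 0.30 m radius are placeholders I made up, not calls from any
real Kinect library):

    import math

    SWEET_SPOT = (0.0, 0.0, 1.2)   # assumed centre of the array, metres (x, y, z)
    SWEET_SPOT_RADIUS = 0.30       # assumed usable radius, metres - needs experimentation

    def check_listener(head_xyz):
        """Return (inside, offset) so the 3D display can warn the listener."""
        offset = math.dist(head_xyz, SWEET_SPOT)
        return offset <= SWEET_SPOT_RADIUS, offset

    # head_xyz would come from whatever Kinect skeleton tracker is used,
    # e.g. the head joint of the first tracked user:
    # inside, offset = check_listener(read_head_position())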

> Just need to adjust the delays and levels of the speakers to
> compensate. If you moved far enough for the subtended angles of the
> speakers to change significantly it might be different. Don't ask me
> what a significant change is, though - that would need
> experimentation.

The adjustments would be in the form of a dynamically generated
config file for ambdec.
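
Roughly what the generation step could look like (a sketch only: the speaker
coordinates, the 343 m/s speed of sound and the output format are my own
assumptions - the real file would of course have to follow the ambdec preset
syntax, which I haven't reproduced here):

    import math

    SPEED_OF_SOUND = 343.0   # m/s, assumed

    # assumed speaker positions in metres, relative to the nominal centre
    SPEAKERS = {
        "LF": (-1.5,  1.5, 0.0),
        "RF": ( 1.5,  1.5, 0.0),
        "LB": (-1.5, -1.5, 0.0),
        "RB": ( 1.5, -1.5, 0.0),
    }

    def compensation(head_xyz):
        """Per-speaker delay (ms) and relative gain for a listener at head_xyz.

        Delays line all arrivals up with the most distant speaker;
        gains undo the 1/r level differences."""
        dists = {name: math.dist(pos, head_xyz) for name, pos in SPEAKERS.items()}
        d_max = max(dists.values())
        comp = {}
        for name, d in dists.items():
            delay_ms = (d_max - d) / SPEED_OF_SOUND * 1000.0
            gain = d / d_max          # attenuate the nearer speakers
            comp[name] = (delay_ms, gain)
        return comp

    def write_config(path, head_xyz):
        # Dump the values; mapping them onto ambdec's actual preset fields
        # is left to the ambdec documentation.
        with open(path, "w") as f:
            for name, (delay_ms, gain) in compensation(head_xyz).items():
                f.write(f"{name}  delay_ms={delay_ms:.2f}  gain={gain:.3f}\n")

Whether ambdec can reload such a preset on the fly or would have to be
restarted is something I still need to check.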

Marc

>     Dave
> 
> PS: yes, I know changing the delays on the fly would cause Doppler, but
> so does the listener moving - and in the opposite direction.
> 
> >
> >
> > Dave Malham <dave.mal...@york.ac.uk> wrote:
> >
> > > Been there, done that - albeit with other sensing technologies,
> > > like the earlier Polhemus tracker. Then, of course, there was
> > > Jacques Poullin's Potentiomètre d'Espace, which was used to project
> > > Schaeffer's musique concrète into space back in the early '50s -
> > > and even I am too young to have actually heard that! What I have
> > > found is that the movement of sounds (particularly in towards the
> > > centre) based on sonic perception alone - i.e. without any visual
> > > feedback - is very difficult to control properly, because muscle
> > > memory is not good enough without a lot of rehearsal.
> > >
> > >    Dave
> > >
> > > On 5 May 2013 02:57, Iain Mott <m...@reverberant.com> wrote:
> > >
> > > > On Sat, 2013-05-04 at 17:46 -0400, Matthew Palmer wrote:
> > > > > http://vimeo.com/65229978#at=5
> > > > >
> > > > > imagine using the Oculus & a Kinect to be able to assign 3D
> > > > > directional information to sounds to make music, virtual
> > > > > speakers corresponding to real ones, hand is a brush

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
