It was an interesting presentation. The demonstration was not wholly 
convincing, I have to say: a bongo drum sound that you could manually 
steer around the horizontal sound field, and I think I have heard better 
surround from normal Ambisonics. One claim for this system is that it is 
more convincing away from the “sweet spot”. Again, on the demo I wasn’t 
particularly convinced, with notable collapse and hand-off from one 
speaker to the next, especially from 90 degrees to 180 degrees. In 
fairness, it wasn’t a particularly easy demo environment, with a lot of 
people in the room. In a more purist test it might well do better.

As I said, interesting and well worth attending, despite the limitations. 

And, my goodness, doesn’t King’s College have *dreadful* internal signage? 
Trying to get out turned into an episode of a maze game. 

jon




On 31/12/2014 17:52, "Aaron Heller" <hel...@ai.sri.com> wrote:

>On Wed, Dec 31, 2014 at 3:20 AM, Dave Malham <dave.mal...@york.ac.uk> wrote:
>>
>> I wonder how closely this is related to the paper he was one of the
>> authors of at the 2010 Ambisonics Symposium? Anyone have it handy?
>
>Here's the URL:
>  http://ambisonics10.ircam.fr/drupal/files/proceedings/poster/P6_41.pdf
>
>There is also an IEEE paper from 2013:
>
> http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6508825
>
>This paper presents a systematic framework for the analysis and design of
>circular multichannel surround sound systems. Objective analysis based on
>the concept of active intensity fields shows that for stable rendition of
>monochromatic plane waves it is beneficial to render each such wave by no
>more than two channels. Based on that finding, we propose a methodology 
>for
>the design of circular microphone arrays, in the same configuration as the
>corresponding loudspeaker system, which aims to capture inter-channel time
>and intensity differences that ensure accurate rendition of the auditory
>perspective. The methodology is applicable to regular and irregular
>microphone/speaker layouts, and a wide range of microphone array radii,
>including the special case of coincident arrays which corresponds to
>intensity-based systems. Several design examples, involving first and
>higher-order microphones are presented. Results of formal listening tests
>suggest that the proposed design methodology achieves a performance
>comparable to prior art in the center of the loudspeaker array and a more
>graceful degradation away from the center.
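
For anyone who wants to play with the key point in that abstract (each 
plane wave rendered by no more than two channels), here is a rough Python 
sketch of tangent-law pairwise panning onto a circular loudspeaker array. 
To be clear, this is my own illustration of that principle, not the 
authors' actual design procedure; the function name, the details and the 
8-speaker example layout are made-up assumptions.

# Rough sketch only: pairwise (two-channel) tangent-law panning of a
# plane-wave direction onto a circular loudspeaker array.  Illustrates
# the "no more than two channels per plane wave" idea from the abstract;
# it is not the paper's method.
import numpy as np

def pan_to_pair(source_az_deg, speaker_az_deg):
    """Return one gain per loudspeaker; only the two speakers bracketing
    the source direction receive non-zero gain."""
    spk = np.asarray(speaker_az_deg, dtype=float) % 360.0
    order = np.argsort(spk)
    spk_sorted = spk[order]
    n = len(spk_sorted)
    gains = np.zeros(n)

    src = float(source_az_deg) % 360.0
    for i in range(n):
        lo = spk_sorted[i]
        hi = spk_sorted[(i + 1) % n]
        span = (hi - lo) % 360.0            # arc from lo to hi, anticlockwise
        off = (src - lo) % 360.0            # source position within that arc
        if span > 0.0 and off <= span:
            half = np.radians(span / 2.0)   # half-angle of the speaker pair
            phi = np.radians(off) - half    # signed angle from the pair centre
            g_lo = np.tan(half) - np.tan(phi)   # tangent panning law
            g_hi = np.tan(half) + np.tan(phi)
            pair = np.array([g_lo, g_hi])
            pair /= np.linalg.norm(pair)    # constant-power normalisation
            gains[i] = pair[0]
            gains[(i + 1) % n] = pair[1]
            break

    out = np.zeros(n)
    out[order] = gains                      # back to the caller's ordering
    return out

# Example: 8 equally spaced speakers, source panned to 100 degrees;
# only the speakers at 90 and 135 degrees receive signal.
print(pan_to_pair(100.0, np.arange(0, 360, 45)))

With eight equally spaced speakers and the source at 100 degrees, only 
the 90- and 135-degree speakers are driven, which is the two-channel 
behaviour the abstract argues gives a stable image.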
>
>
>> > On 31 December 2014 at 00:08, John Leonard <j...@johnleonard.co.uk> wrote:
>> >
>> > > This looks interesting:
>> > >
>> > > Upcoming Lectures
>> > >
>> > > London: Tuesday 13th January
>> > >
>> > > Perceptual Sound Field Reconstruction and Coherent Synthesis
>> > >
>> > > Zoran Cvetkovic, Professor of Signal Processing at King's College London
>> > >
>> > > Imagine a group of fans cheering their team at the Olympics from a
>> > > local pub, who want to feel transposed to the arena by experiencing a
>> > > faithful and convincing auditory perspective of the scene they see on
>> > > the screen. They hear the punch of the player kicking the ball and are
>> > > immersed in the atmosphere as if they are watching from the sideline.
>> > > Alternatively, imagine a small group of classical music aficionados
>> > > following a broadcast from the Royal Opera at home, who want to have
>> > > the experience of listening to it from best seats at the opera house.
>> > > Imagine having finally a surround sound system with room simulators
>> > > that actually sound like the spaces they are supposed to synthesise,
>> > > or watching a 3D nature film in a home theatre where the sound closely
>> > > follows the movements one sees on the screen. Imagine also a video
>> > > game capable of providing a convincing dynamic auditory perspective
>> > > that tracks a moving game player and responds to his actions, with
>> > > virtual objects moving and acoustic environments changing. Finally,
>> > > place all this in the context of visual technology that is moving
>> > > firmly in the direction of "3D" capture and rendering, where enhanced
>> > > spatial accuracy and detail are key features. In this talk we will
>> > > present a technology that enables all these spatial sound applications
>> > > using low-count multichannel systems.
>> > >
>> > > This month's lecture is being held at King's College London, Nash
>> > > Lecture Theatre, K2.31, Strand, London, WC2R 2LS. 6:30pm for 7:00pm
>> > > start.
>> > >
>> > > I'll be there if I can.
>> > >
>> > > John
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.
