[Sursound] Ambisonia?

2012-06-03 Thread Daniel Courville
What's the official status of Ambisonia? I added a torrent yesterday and it
seems to be working, but I don't recall an official announcement besides
the one that Dave made in York stating that it would be online soon.

Thanks,

Daniel




Re: [Sursound] Sursound Digest, Vol 47, Issue 2

2012-06-03 Thread Augustine Leudar
That would be an interesting experiment! Though I think the fact that
decorrelating the sound of the cello across the different speakers gave the
composer the spatial effects he was looking for (it still sounded like a
cello) shows that the main factor was a property of the way the different
frequency bands were dispersed, rather than a cognitive effect such as
familiarity.
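
For concreteness, here is a minimal sketch of that kind of manipulation
(Python/SciPy rather than a patch, and the band edges and filter order are
arbitrary choices of mine): split the source into frequency bands, one per
speaker, with every band starting at sample zero so the onset cues stay
aligned.

import numpy as np
from scipy.signal import butter, sosfilt

def band_split(x, fs, edges=(200.0, 800.0, 3200.0)):
    # Split a mono signal into adjacent frequency bands, one per speaker.
    # All bands start at sample 0, so onsets remain time-aligned.
    bands = []
    lo = None
    for hi in list(edges) + [None]:
        if lo is None:
            sos = butter(4, hi, btype="lowpass", fs=fs, output="sos")
        elif hi is None:
            sos = butter(4, lo, btype="highpass", fs=fs, output="sos")
        else:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfilt(sos, x))
        lo = hi
    return np.stack(bands)  # shape: (speakers, samples)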

> Re Augustine's post: Thanks for suggesting Gary Kendall's paper. While it
> doesn't provide a *complete* explanation (who can?), it is a good read. I
> proposed a somewhat similar study while a grad student, but the stimuli
> would have included speech, dynamical sounds (such as breaking glass or a
> bouncing ball), and unfamiliar sounds. The constituent components of the
> unfamiliar sounds would be spatially separated but have identical start
> times. We could then ask whether it's familiarity (as with a cello),
> arrival times, or other variables that unify the separate sounds into a
> common source.


[Sursound] B-format to A-format conversion for an ambisonics fx

2012-06-03 Thread SERO SERO
Hello list,

This is my first post here so please be patient if I am doing something
wrong with the replies.

I am working on a patch in Max 6 for Ambisonics reproduction for a jazz
band, and I am looking into creating a series of FX that can be encoded in
3rd order.
order.

My understanding is that for a delay-type FX (or any other FX that affects
the phase of the signal) I need to convert the B-format signal to A-format,
apply the FX I want, and then convert back to B-format.
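
(For reference, in the first-order case that round trip is just a fixed 4x4
matrix - a sketch below, assuming the usual FLU/FRD/BLD/BRU tetrahedral
capsule layout and one common scaling convention; higher orders follow the
same pattern with more directions. Real converters also apply
frequency-dependent compensation filters for the capsule spacing, omitted
here.)

import numpy as np

# First-order only: the classic relationship between A-format (the four
# capsule feeds FLU, FRD, BLD, BRU) and B-format (W, X, Y, Z). Scaling
# conventions vary between implementations; this is one common choice.
A_TO_B = 0.5 * np.array([[ 1,  1,  1,  1],   # W
                         [ 1,  1, -1, -1],   # X
                         [ 1, -1,  1, -1],   # Y
                         [ 1, -1, -1,  1]])  # Z

B_TO_A = np.linalg.inv(A_TO_B)

def b_to_a(b):  # b: (4, n) array holding W, X, Y, Z
    return B_TO_A @ b

def a_to_b(a):  # a: (4, n) array holding the four capsule signals
    return A_TO_B @ a

# Round trip: a = b_to_a(b); a_fx = my_effect(a); b = a_to_b(a_fx)
# ("my_effect" is a placeholder for whatever per-channel FX you apply.)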

Can anyone point me to a resource I can use to understand both conversions?
Also, can anyone who uses Max share a patch I could use as a reference?

I use the ICST tools in Max 6 for all the ambisonics processing.

I am already using the trick of applying delay FX to a mono source before
encoding and then using rotation of the soundfield to create ping-pong-like
FX, but I would like to do something more creative.

Thank you very much

Regards,
Sero


Re: [Sursound] B-format to A-format conversion for an ambisonics fx

2012-06-03 Thread Dave Malham
Hi Sero,
Although there are times when processing A-format may be useful
in compositional terms (see Jo Anderson's work), it isn't necessary to
change to A-format for applying general effects like delay, EQ or
gain, even if they involve phase shifts - so long as there are no
differences between channels in the B-format. So, if all channels have
the same delay, it doesn't matter what that delay is. It is entirely
possible to provide a B-format feedback path via delays which incorporate
rotations to increase the complexity of the final soundfield. Dylan
Menzies incorporated this in his LAmb software
(http://www.tech.dmu.ac.uk/~dylan/z/dylan/project/holog/index.html).
The same applies to pretty well any effect - if you apply a boost of,
say, +2.65 dB at 5 kHz to the W channel, you must apply an identical
boost of +2.65 dB at 5 kHz to all the other channels.
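
In sketch form (Python/NumPy rather than Max; first-order, and the function
names are mine - the ICST tools are third order, where the rotation matrix
is bigger but the principle is identical):

import numpy as np

def rotate_z(b, theta):
    # Rotate a first-order B-format block (rows W, X, Y, Z) about the
    # vertical axis. W and Z are unchanged by a horizontal rotation.
    w, x, y, z = b
    return np.stack([w,
                     x * np.cos(theta) - y * np.sin(theta),
                     x * np.sin(theta) + y * np.cos(theta),
                     z])

def rotating_delay(b, delay, theta, gain=0.5, taps=8):
    # The *same* integer sample delay is applied to every channel, so each
    # echo is a coherent soundfield - merely rotated a bit further each time.
    out = b.copy()
    n = b.shape[1]
    for k in range(1, taps + 1):
        d = k * delay
        if d >= n:
            break
        echo = np.zeros_like(b)
        echo[:, d:] = rotate_z(b, k * theta)[:, :n - d]
        out += (gain ** k) * echo
    return out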

Dave

On 3 June 2012 19:53, SERO SERO wrote:
> [snip: original message quoted above]



-- 

These are my own views and may or may not be shared by my employer

Dave Malham
Music Research Centre
Department of Music
The University of York
Heslington
York YO10 5DD
UK
Phone 01904 322448
Fax     01904 322450
'Ambisonics - Component Imaging for Audio'


Re: [Sursound] B-format to A-format conversion for an ambisonics fx

2012-06-03 Thread Augustine Leudar
Hi Sero,
I use ICST quite a lot as well - I do as you say and just apply the
effects to the signals before sending them into ambipanning~.
I'm not sure why converting from B-format to A-format will
necessarily allow you more creative possibilities - I'm not really
sure why you would do that. A-format, as I understand it, is just the
raw recordings that come out of a microphone.
You can see a patch I've made here for spatialising audio in 3D with a
Wii controller (an electroacoustic granular magic wand):

http://www.youtube.com/watch?v=7cmodvSM5jE

I'd be happy to share knowledge, patches, etc.
Cheers,
Gus




On 03/06/2012, SERO SERO wrote:
> [snip: original message quoted above]


Re: [Sursound] The Sound of Vision (Mirage-sonics?)

2012-06-03 Thread Robert Greene


Could I point out that in fact one does not
know what auditory reality is like for other
people whether or not they are hearing impaired?
One supposes it is similar. And structurally
it is similar--people tend to hear sound in the
same locations under given circumstances.
But literal sensation is entirely unknowable--
do you see the same color when you look at
green grass that I do? This is essentially unknowable.
One supposes so for convenience. But there is
no way to know because pure sensation cannot be
communicated.
This is something that will not change. More
and more evidence can be adduced to the effect
that the brain processes are similar. But there
cannot be proof that the experience is the same -
this is unverifiable from the scientific viewpoint
and always will be (in anything like our present
scientific world, anyway). Thought and sensation
transfer in the literal sense is not around.
Of course, there is always that movie ("Strange Days").
Maybe someday. But right now, no one can know
what anyone else experiences except in some structural sense.
Robert

On Sat, 2 Jun 2012, Eric Carmichel wrote:


Greetings All,
I continue to learn a lot from this site (and someday hope to have something
to give back). For now, however, I will briefly comment on posts by
Umashankar Mantravadi, Augustine Leudar, and Etienne.

Etienne wrote the following [abridged]: **The argument essentially says that
for something to appear real it has to fit people's *pre-conception* of what is
real, rather than fit what actually is real. In other words, throw out
veridicality (coincidence with reality); instead, try to satisfy people's
belief of reality. This is another argument for questioning the extent to which
physical modeling has the capacity to create illusions of reality in sound...**

Etienne made me consider further something of great importance re my CI
research. Briefly, we really don't know what auditory perception is like for
hearing-impaired listeners (remembering that there's a lot more to
sensorineural hearing loss than threshold elevation). For example, does the
Haas effect work for them? Why is noise-source segregation so difficult? Does
breaking apart an auditory scene create greater dysfunction, or can they put
the pieces back together to give the illusion of a unified sound source (as
with the cello example)? How does multi-band compression sound for them, etc.?
We would most certainly like to know how altering a physical stimulus improves
their belief of reality (thus improving their ability to communicate or enjoy
music). But how do we measure the perception of cochlear implant and hearing
aid users other than by providing *physically accurate, real-world* stimuli?
Side note: Thanks for the reference to H. Wallach (1940).

Re Augustine's post: Thanks for suggesting Gary Kendall's paper. While it
doesn't provide a *complete* explanation (who can?), it is a good read. I
proposed a somewhat similar study while a grad student, but the stimuli would
have included speech, dynamical sounds (such as breaking glass or a bouncing
ball), and unfamiliar sounds. The constituent components of the unfamiliar
sounds would be spatially separated but have identical start times. We could
then ask whether it's familiarity (as with a cello), arrival times, or other
variables that unify the separate sounds into a common source.

Umashankar Mantravadi wrote the following: *As a location sound mixer, I 
exploited the visual reinforcement of sound in many situations. If you are 
recording half a dozen people speaking, and the camera focuses on one - 
provided the sound is in synch - the person in picture will sound louder, 
nearer the mic, than the others. It is a surprisingly strong effect, and one 
side benefit is you can check for synch very quickly using it.*

Many thanks for sharing this experience. I am currently creating AV stimuli
(using a PowerPoint presentation as the metronome/teleprompter). While there is
nothing new or novel about incorporating video, I am unaware of any
investigations using cochlear implant patients in a surround of uncorrelated
background noise combined with a video of the talker(s). One could also study
the effects of simulated cochlear implant hearing (using normal-hearing
subjects) with visual cues in a *natural* environment.

It has been known for some time that lipreading is useful for comprehending speech 
presented in a background of noise. For example, Sumby & Pollack (1954) showed 
that visual cues could aid speech comprehension to the same degree as a 15 dB 
improvement in SNR. Sounds with acoustic features that are easily masked by white 
noise (for example, the voiceless consonants /k/ and /p/) are easy to discriminate 
visually.

There is a plethora of literature surrounding the benefits of lipreading. It is
entirely possible that visual cues can affect more than just speech
comprehension. A study showing the reduction of stress when a listener is aided
by lipreading could be interesting: It is possible that visual cues, regardless
of speech comprehension advantages, could reduce listener stress in a difficult
listening environment. Capturing subtleties, such as talker voice level as a
function of background noise level, could make video and audio stimuli more
realistic. Although we might not be sensitive to these subtleties when
sufficient information is available to us (us = normal-hearing listeners),
hearing-impaired individuals might make use of visual cues in ways that have
not been explored. Systematic reduction or elimination of available information
can be accomplished when the stimulus contains all of the *essential*
information in the environment.

I am exploring Ambisonics (along with video) as a method of capturing the
essential information. At worst, I have a great-sounding system for listening
to others' musical recordings. Thanks to everyone for the recordings of crowd
sounds, music, software, and for sharing your wisdom.
Best regards,
Eric

Re: [Sursound] The Sound of Vision

2012-06-03 Thread Eric Carmichel
> [snip: Eric's earlier post, quoted in full above]


Re: [Sursound] Ecological flies (in the face of reality)

2012-06-03 Thread etienne deleflie
>
> However, I have found that a mixed mediated+ -non-mediated environment can
> contain significant local confusions. Years ago at York, I used an SF mic
> to record Dylan Menzies idly playing the piano, in a room equipped with a
> periphonic playback system. When I played it back, he then took to
> accompanying his own earlier playing. When he did so, the distinction
> between mediated and non- mediated instantly blurred. When he stopped the
> accompaniment, the recorded nature of the mediated environment became
> obvious.
>

That's very interesting. It's almost identical to an observation described
100 years ago!

Thomas Edison (inventor of the phonograph) used "tone tests" to convince
prospective buyers that his phonograph was 'indistinguishable' from the
real thing. He would get a real performer to sing in unison with the
phonograph ... and one of the rules that the singer had to follow was that
they were never allowed to sing without the accompaniment of the
phonograph, because if they did, the difference would be immediately
obvious. (I read this in Thompson 1995, p. 152, and Milner 2009, p. 6 - let
me know if you want detailed references.)

I'd postulate that the difference has something to do with auditory stream
segregation ... by singing together the 'realness' of the real becomes
projected onto both sounds. Did Dylan Menzies also try to copy the
reproduced sound he heard? In Edison's tone tests, the performer had to try
to imitate the recording of themselves (the illusion of reality was created
by changing reality to meet the reproduction!)

Incidentally ... there is support for the suggestion that the perception of
"reality" is a function of pre-conceptions rather than coincidence with
reality. In the New York Journal (1890), a reporter commented, upon
hearing Edison's phonograph, that he heard recordings ‘*rendered with so
startling and realistic effect that it seems almost impossible that the
human voice can issue from wax and iron*’.

There are other similar reports. You would think that with 100 years of
technology we would have managed to shoot way past that calibre of
impression ... but have we? Why not?

Etienne


Re: [Sursound] The Sound of Vision

2012-06-03 Thread Dave Malham
Hi Eric,


On 4 June 2012 05:57, Eric Carmichel  wrote:
> Hi Robert,
> Thanks for the note. I remember a communication we had sometime back.
> I agree that we cannot know what reality is like for another person. 
> Fortunately, inferential statistics, or just a plain ol' consensus, helps
> here. For example, most normal-hearing listeners get the sense (or illusion)
> of phantom images when listening to a decent stereo system. There's
> agreement on this despite not knowing what each individual's perception is
> like. But the same *illusion* may not apply to the hearing-impaired listener.

It does depend on just what you mean by an "illusion". Within limits,
some forms of stereo are not illusions - by that I mean that what is
presented to the ears (or the CIs) is close enough to physically
correct not to count (at least for me) as a true illusion. For a
listener in the sweet spot, looking forwards, Blumlein stereo produces
wavefronts that have essentially the correct delays for ITDs to work
- in the range where they are appropriate, i.e. low to mid frequencies.
Properly implemented (i.e. with matched HRTFs), headtracked binaural is
even better, except for the lower bass, where body resonances play a part.
At the exact centre spot, Ambisonics recreates things fully - so is this an
illusion? In practice, possibly yes, as it doesn't do as well away
from that spot. WFS can theoretically recreate the field fully, even over
an area, but practical limitations mean that it doesn't. For me, the only
form of stereo that truly counts as illusory (albeit often highly
enjoyable) is the pure spaced pair.
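
(For scale: the ITDs in question are sub-millisecond. A rough estimate from
the classic Woodworth spherical-head formula - a sketch, with the head
radius assumed to be 8.75 cm:)

import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    # Spherical-head approximation of the interaural time difference for a
    # distant source (0 deg = straight ahead, 90 deg = fully to one side).
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

print(round(woodworth_itd(90.0) * 1e6))  # -> ~656 microseconds at full side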

> But this isn't to say that hearing-impaired individuals, or even those with a
> profound unilateral hearing loss, can't localize/spatialize sound. So, in
> this somewhat elementary example, we could surmise that stereo recordings
> would be a poor way of bringing *physical reality* to the laboratory (unless
> it's stereo imaging techniques we wish to evaluate).

Absolutely on the last!

>
> Whether a person is studying auditory processing disorders, hearing loss,
> implantable prostheses, vision, etc., it would be ideal to have a
> controlled, real-world environment. If the physical variable is both
> *real-world* (for external validity) and repeatable/portable across
> laboratories (the latter being easy to do with recordings and modern
> electronics), then the perceptual consequence of changes made to a single
> variable (e.g. a change in a CI processor's envelope detector) can be
> determined with a certain degree of confidence. Naturally there will be
> outliers, a range, and all the stuff you know about much better than I do,
> but I believe a *physically real* periphonic system will yield much more
> meaningful results than a two-speaker system in a tiny audiometric test
> booth. In the early days of CI testing this may not have been the case:
> simply getting a decent speech understanding score was an accomplishment!
> But as processors and hearing aids advance, I believe the test protocols
> will have to advance too. Just my thoughts here.

Okay, so how about this? An anechoic environment, with a standardised
HOA speaker array to handle the early reflections and reverberation
(which we can probably assume are more tolerant of deviations from
physical reality), with the direct sounds actually coming from
individual speakers - so exactly physically correct?
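
In sketch form (Python; a first-order encode for the diffuse part, and all
the function names are mine - just to make the routing concrete):

import numpy as np

def encode_foa(sig, azim, elev):
    # Encode a mono signal into first-order B-format (W, X, Y, Z),
    # with the common 1/sqrt(2) weighting on W. Angles in radians.
    return np.stack([sig * np.sqrt(0.5),
                     sig * np.cos(azim) * np.cos(elev),
                     sig * np.sin(azim) * np.cos(elev),
                     sig * np.sin(elev)])

def hybrid_render(direct, speaker_index, reflections, n_speakers):
    # The direct sound goes discretely to one physical speaker (an exactly
    # correct wavefront); only the early reflections and reverb go through
    # the ambisonic layer, where small spatial errors are assumed to be
    # better tolerated. reflections: list of (signal, azim, elev) tuples.
    discrete = np.zeros((n_speakers, direct.shape[0]))
    discrete[speaker_index] = direct
    b = sum(encode_foa(s, az, el) for s, az, el in reflections)
    return discrete, b  # decode b over the same array and sum with discrete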

Dave

-- 

These are my own views and may or may not be shared by my employer

Dave Malham
Music Research Centre
Department of Music
The University of York
Heslington
York YO10 5DD
UK
Phone 01904 322448
Fax     01904 322450
'Ambisonics - Component Imaging for Audio'