Hi Sampo

Yes, your philosophical meanderings touch on exactly my concerns about the 
image. I am not convinced that a processed B-Format file, once rendered, would 
decode well in all output formats - Binaural, Stereo, 5.1, etc. - and I am not 
a terribly technical DSP person, so running experiments to accurately check 
the phase is beyond me. As you mention, W is a separate issue, and I have 
thought, as Joseph argues, of taking a snapshot of the background noise at a 
quiet point (although all the recordings are quiet) for each capsule and then 
applying those profiles in A-Format (tracks separated) before the B-Format 
conversion. In this case I am doing that only for the SPS200, for which I 
trust that Soundfield provided me with the right decoding in their software, 
although I do often feel the image is a few degrees off to the right, but 
that's another story.
Now that others have suggested it works for them, I guess I will apply RX to 
the A-Format tracks and see what happens. It does seem strange that there is 
no commercially available system (that I can afford, of course - i.e. not a 
System 6000 solution) that does this automatically and guarantees the phase.
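
For what it is worth, here is a rough, untested numpy sketch of the workflow I 
have in mind: per-capsule noise profiles taken from a quiet stretch, 
subtracted in A-Format, and only then the conversion to B-Format. The simple 
spectral gate is just a stand-in for RX, the capsule order (LFU, RFD, LBD, 
RBU) is an assumption, and the plain 0.5 matrix ignores the calibration 
filters that Soundfield's own software would apply:

import numpy as np
from scipy.signal import stft, istft

def noise_profile(capsule, fs, nperseg=2048):
    # average magnitude spectrum of a quiet stretch of one capsule
    _, _, X = stft(capsule, fs, nperseg=nperseg)
    return np.abs(X).mean(axis=1)

def spectral_gate(capsule, profile, fs, nperseg=2048, amount=1.5):
    # subtract that capsule's own noise spectrum, clamp at zero
    _, _, X = stft(capsule, fs, nperseg=nperseg)
    mag = np.maximum(np.abs(X) - amount * profile[:, None], 0.0)
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs, nperseg=nperseg)
    return y[:len(capsule)]

def a_to_b(lfu, rfd, lbd, rbu):
    # textbook first-order A->B matrix, no capsule correction filters
    w = 0.5 * (lfu + rfd + lbd + rbu)
    x = 0.5 * (lfu + rfd - lbd - rbu)
    y = 0.5 * (lfu - rfd + lbd - rbu)
    z = 0.5 * (lfu - rfd - lbd + rbu)
    return np.stack([w, x, y, z])

# capsules: four 1-D arrays in LFU, RFD, LBD, RBU order
# quiet: (start, stop) sample indices of a quiet stretch
# cleaned = [spectral_gate(c, noise_profile(c[quiet[0]:quiet[1]], fs), fs)
#            for c in capsules]
# b_format = a_to_b(*cleaned)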

Cheers, Garth

On Aug 6, 2014, at 3:12 PM, Sampo Syreeni <de...@iki.fi> wrote:

> On 2014-08-06, Joseph Anderson wrote:
> 
>> I take the noise profile from each individual A-format channel...
> 
> At the risk of sounding trite, what is noise? I'd argue that it isn't one 
> thing, and that it's pretty difficult to define with mathematical precision. 
> If you're talking about environmental background, then approaches like gating 
> A-format, or some other suitable directional representation of the sound, are 
> a good idea.
> 
> If you're talking about tape noise instead, that isn't directional at all, at 
> least until you get into directional masking calculations over what you can 
> throw away without getting caught. In that case you'd want to operationalise 
> what you consider noise, then find out an optimal way of extending that idea 
> to B-format, and do the kind of joint processing Eero suggests.
> 
> The easiest way is probably to go with just W in the sidechain and equal 
> gating for all the channels in the main one (see the sketch after this 
> paragraph). The next step would be to do the same per frequency, and so on. 
> However, in the ambisonic world, you'll then
> bump into a third source: the mic. Since the Soundfield works on differencing 
> principles, W has a totally different noise profile from XYZ, and typically 
> it only gets worse from there as the order goes up. (Or it doesn't; that 
> depends wholly on the mic geometry.)
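> 
> Roughly, as an untested numpy sketch (a hard gate with an arbitrary threshold
> and time constants, applied full-band; real use would at least want per-band
> treatment):
> 
> import numpy as np
> 
> def w_keyed_gate(b, fs, thresh_db=-60.0, attack=0.005, release=0.100):
>     # b is a (4, N) array of W, X, Y, Z samples
>     w = np.abs(b[0])
>     a_att = np.exp(-1.0 / (attack * fs))
>     a_rel = np.exp(-1.0 / (release * fs))
>     env = np.zeros_like(w)
>     e = 0.0
>     for n, v in enumerate(w):            # one-pole envelope follower on |W|
>         c = a_att if v > e else a_rel
>         e = c * e + (1.0 - c) * v
>         env[n] = e
>     gain = (20.0 * np.log10(env + 1e-12) > thresh_db).astype(float)
>     return b * gain[None, :]             # identical gain on every channel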
> 
> The point is, I don't think there is a monolithic thing called "noise" which 
> can be just blindly "reduced". There never was even in monophonic recordings, 
> and the degrees of freedom in your signal chain just multiply as you go 
> through stereo to ambisonic. So, you need to be careful about which kinds of 
> unwanted hiss, distortion or bogus sources you're talking about; you'll have 
> to develop computationally tractable models of both your utility signal and 
> the noise, and only then can you really start to combine all of the machinery 
> into something which actually works and sounds good.
> 
> E.g. when you expand/limit A-format, implicitly your noise model is a hiss 
> which is directional to first order and your model of the utility signal is 
> something like a strong, wideband directional signal near it, which makes 
> directional signal-to-noise masking statistics relevant. Break those conditions 
> and bad things will most likely happen.
> 
> So, try your approach on a two-sine test signal, separated in frequency by 
> more than a critical band's worth. Pan one of the sines due front, and revolve 
> the other one around at about 1 Hz and, say, -6 dB. Then add pink noise at 
> about -10 dB to each of the B-format channels independently. I'm rather sure that 
> while your approach will work beautifully for the front signal alone when 
> adjusted right, it'll lead to nasty, anisotropic noise pumping with the 
> dynamic signal in place.
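> 
> Something like the following untested numpy sketch (it assumes a FuMa-style
> first-order encode with W = s/sqrt(2), and the IIR is only a rough 1/f
> colouring, not calibrated pink noise):
> 
> import numpy as np
> from scipy.signal import lfilter
> 
> fs, dur = 48000, 10.0
> t = np.arange(int(fs * dur)) / fs
> 
> def encode(s, az):                       # horizontal-only first-order encode
>     return np.stack([s / np.sqrt(2.0), s * np.cos(az),
>                      s * np.sin(az), np.zeros_like(s)])
> 
> def pink(n, rng):                        # rough 1/f colouring of white noise
>     b = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
>     a = [1.0, -2.494956002, 2.017265875, -0.522189400]
>     return lfilter(b, a, rng.standard_normal(n))
> 
> front = np.sin(2 * np.pi * 500.0 * t)                    # due front
> mover = 10 ** (-6 / 20) * np.sin(2 * np.pi * 2000.0 * t) # -6 dB, well separated in frequency
> az = 2 * np.pi * 1.0 * t                                 # one revolution per second
> 
> B = encode(front, np.zeros_like(t)) + encode(mover, az)
> 
> rng = np.random.default_rng(0)
> for ch in range(4):                       # independent noise per channel
>     n = pink(len(t), rng)
>     B[ch] += 10 ** (-10 / 20) * n / np.std(n)            # about -10 dB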
> 
> (Oh, and by the way, which A-format? As long as you're dealing with a perfect 
> mic and linear, time-invariant filtering operations, you don't have to think 
> about that, because you can go willy-nilly between A and B. But once you start 
> applying this kind of processing, every possible orientation of the mic gives 
> rise to a separate A-format. Which one should it be? The above example 
> presumes one of the capsules is facing towards the reference. It gets much 
> worse if you place the source directly between three adjacent capsules, in 
> angle space.)
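> 
> Concretely, as an untested sketch: rotate the B-format field, then project it
> onto nominal LFU/RFD/LBD/RBU directions; every yaw angle gives a different
> A-format of the same field (real capsules add their own polar patterns and
> spacing on top of this):
> 
> import numpy as np
> 
> def rotate_z(b, yaw):                    # rotate the whole field about z
>     w, x, y, z = b
>     c, s = np.cos(yaw), np.sin(yaw)
>     return np.stack([w, c * x - s * y, s * x + c * y, z])
> 
> def b_to_a(b):                           # project onto LFU, RFD, LBD, RBU
>     w, x, y, z = b
>     return 0.5 * np.stack([w + x + y + z, w + x - y - z,
>                            w - x + y - z, w - x - y + z])
> 
> B = np.vstack([np.full(8, 1 / np.sqrt(2.0)), np.ones(8),
>                np.zeros(8), np.zeros(8)])          # a steady source due front
> a_one = b_to_a(B)                                  # one A-format of it
> a_two = b_to_a(rotate_z(B, np.radians(60.0)))      # same field, mic yawed 60 deg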
> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.
