I am evaluating the NT-SF1 against the 350 Soundfield by listening to the
same musical performance recorded simultaneously by both microphones. They
were mounted on the same pole, separated by a foot, with the 350 at the
top.

The difficulty is getting a pickup pattern that generates a similar
soundfield from both microphones. Harpex-X offers far more output patterns,
so the goal is to get it to emulate the NT-SF1. The task is further
complicated by the NT-SF1's inability to output a Blumlein pair, my
favorite listening pattern for this church environment. Both microphones
produce realistic, flat reproductions. Early stereo results suggest more
ambiance with Harpex and a brighter sound with the NT-SF1. The recorded
music is early music, a favorite in New England, featuring soloists, a
harpsichord, a flutist, a cellist, and violins.

Alvin Foster
____________________________________________________________________________
-----Original Message-----
From: Surround <sursound-boun...@music.vt.edu> On Behalf Of David Pickett
Sent: Tuesday, December 18, 2018 3:35 AM
To: Surround Sound discussion group <sursound@music.vt.edu>
Subject: Re: [Surround] Soundfield by Rode plugin

At 12:15 17-12-18, Politis Archontis wrote:

>Another very sensible approach was presented by Christof Faller and
>Illusonics at the same conference, in which a simpler adaptive filter is
>used to align the microphone signals to the phase of one of the capsules,
>making them again in essence coincident.

I am interested to know how this approach gets around the problem that,
with any pair of capsules, the phase difference is a function of the angle
of incidence, and how it distinguishes between identical frequency
components that are actually generated by different acoustic sources.
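[To make the first point concrete, here is a minimal sketch of the plane-wave
relation delta-phi = 2*pi*f*d*cos(theta)/c for a two-capsule pair. The 2 cm
spacing is an assumed figure for illustration, not a specification of either
microphone. It shows that the phase reading varies strongly with angle, and
that symmetric angles give identical readings, so phase alone cannot pin
down a direction:]

```python
import math

def phase_difference(freq_hz, angle_deg, spacing_m=0.02, c=343.0):
    """Inter-capsule phase difference (radians) for a plane wave of
    freq_hz arriving at angle_deg (0 deg = along the capsule axis).
    spacing_m is a hypothetical capsule spacing, not a real spec."""
    path = spacing_m * math.cos(math.radians(angle_deg))
    return 2 * math.pi * freq_hz * path / c

# At 1 kHz the reading changes with angle of incidence...
print(phase_difference(1000.0, 0.0))    # maximum, on-axis
print(phase_difference(1000.0, 60.0))   # half the on-axis value
print(phase_difference(1000.0, 90.0))   # zero, broadside

# ...and angles mirrored about the axis are indistinguishable:
print(phase_difference(1000.0, 60.0) == phase_difference(1000.0, 300.0))
```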

It seems to me that some assumptions must be involved, e.g.:

-- that the microphone array is stationary during the time of the capture
window and subsequent computation.

-- that the filtered frequency components used for the computation contain
only signals from a unique direction.

The first of these assumptions has been mentioned here by reference to the
generation of spurious signals when the mic is in motion. The implication
would be that the time taken to acquire and compute the data is too long to
satisfy the condition.

The second is demonstrably untrue when we are talking about tonal music in
a reverberant acoustic. How does the system distinguish between a 1 kHz
partial from a source or reflecting surface on one side of the array and
one from a different source or reflecting surface arriving from another
direction? (A moment's thought shows that a C major chord, distributed
among various performers, would have a large number of Cs, Es, Gs, etc.
coming from all around.)
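[The C-major-chord objection can be demonstrated numerically. The sketch
below, with an assumed 2 cm capsule spacing, simulates two performers
contributing the same 1 kHz partial from 30 and 150 degrees. The sum of two
equal-frequency sinusoids is itself a single sinusoid, so the inter-capsule
phase measured in the 1 kHz FFT bin matches neither true direction; any
single-direction-per-bin assumption localises the partial somewhere in
between:]

```python
import numpy as np

fs = 48000   # sample rate (Hz)
f = 1000.0   # the shared partial (Hz)
d = 0.02     # assumed capsule spacing (m)
c = 343.0    # speed of sound (m/s)
n = 480      # 10 ms window; 1 kHz falls exactly on FFT bin 10
t = np.arange(n) / fs

def capsule_pair(angle_deg, amp):
    """Signals at two capsules for a plane wave from angle_deg
    (0 deg = along the capsule axis)."""
    delay = d * np.cos(np.radians(angle_deg)) / c
    return amp * np.sin(2*np.pi*f*t), amp * np.sin(2*np.pi*f*(t - delay))

def bin_phase_diff(a, b):
    """Inter-capsule phase measured in the 1 kHz FFT bin."""
    k = int(round(f * n / fs))
    return np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])

# Two performers, same 1 kHz partial, different directions:
a1, b1 = capsule_pair(30.0, 1.0)
a2, b2 = capsule_pair(150.0, 0.8)

solo1 = bin_phase_diff(a1, b1)            # what 30 deg alone would read
solo2 = bin_phase_diff(a2, b2)            # what 150 deg alone would read
mixed = bin_phase_diff(a1 + a2, b1 + b2)  # what the array actually reads

# The mixed reading sits between the two and matches neither source.
print(solo1, solo2, mixed)
```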

David

_______________________________________________
Surround mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit
account or options, view archives and so on.
