Hi Richard,

In the interests of avoiding too much repetition, I’ve edited the previous conversations.
By “current products” I meant hardware audio mixers and DSP processors with multiple inputs and outputs that are readily available, rather than what can be built using things like the Sharc DSP chips or cards. That is certainly beyond my capabilities, but seems to be what you and others are doing. Having looked at your website, I’ve been trying to understand your approach to this, and how it may agree with or differ from mine and others'.

A large number of inputs needs to be mixed to a large number of outputs in a controlled way. A 32-channel in/out matrix mixer with independent amplitude, delay (and possibly other processing) at the crosspoint for each input to each output is indeed desirable. The inputs are mixed to the outputs in a way that depends on the spatial algorithm (Ambisonics, Dolby Atmos, VBAP, DBAP, WFS, etc.). This requires computer control, as the number of instructions is large. If an input sound source needs to move spatially, the control changes need to be smoothed with a time-driven ramp or curve, to avoid the source jumping between positions.

All this could be built into a digital mixer, but incorporating a good user interface is far from trivial, and it is unlikely to happen, as general demand for it is low and it would be expensive. So we end up with a separate DSP spatial audio engine (SAE for short) that sits between a large mixer or DAW sending many channels, and a large number of amplifiers and speakers. The connections are best made over a digital audio network (AVB, Dante, MADI or other). The mixer and DAW are used relatively normally, though mixes are made to several “stems”, which are then “spatialised”, rather than to a stereo or 5.1 output. This avoids adding extra processing load to the mixer or DAW.

This SAE could be software in another computer, or even the same one, though processing load and latency would be problematic; less so when using a DAW (even with video) than with real-time events.

I presume that each of your speakers receives the 32 output channels from the matrix and that the channel it uses can be remotely selected. I also presume that the DSP in the speaker is used to modify the response of the speaker (EQ, dynamic processing, delay, etc.), and that this too can be remotely controlled.

The challenge then moves to linking the spatial audio engine to the DAW or to controls on the mixer. In the case of a DAW, this can be done using plug-ins that send messages (OSC or something similar) to the SAE, or by using the timeline to recall (with smoothing) memories of states in the SAE. In a mixer it would involve reallocating controls for this purpose, or adding extra controls (e.g. a joystick or several). There seems to be a general consensus on this approach.

I’ve looked again at the available MOTU products, which are excellent audio interfaces, but they alone do not seem to provide what is needed for an SAE.

Unfortunately Covid has put a huge damper on progress in this direction, as large-scale public events are untenable and the world economy is being severely damaged. These are indeed “interesting times”. I wish you good luck with your products, though at this stage of my life I am unlikely ever to be able to use them.

Ciao,

Dave Hunt
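P.S. In case it makes the above more concrete, here is a very rough sketch (Python with numpy, fixed block size assumed, all names made up) of the kind of crosspoint matrix I have in mind: every input-to-output crosspoint has its own gain and delay, and gain changes requested by the control layer are ramped across a processing block rather than applied instantly, so a moving source doesn't jump. It is only an illustration of the idea, not how any particular product does it.

# A minimal sketch, assuming numpy and a fixed processing block size.
# Every input-to-output crosspoint has its own gain and delay; gain changes
# are ramped linearly across one block so a moving source does not jump.

import numpy as np

class CrosspointMatrix:
    def __init__(self, n_in, n_out, sample_rate, max_delay_s=0.1):
        self.sr = sample_rate
        self.max_delay = int(max_delay_s * sample_rate)
        self.gains = np.zeros((n_in, n_out))               # gains currently in effect
        self.target_gains = np.zeros((n_in, n_out))        # gains we are ramping towards
        self.delays = np.zeros((n_in, n_out), dtype=int)   # per-crosspoint delay in samples
        self.lines = [np.zeros(self.max_delay) for _ in range(n_in)]  # one delay line per input

    def set_crosspoint(self, i, o, gain, delay_s):
        # Called from the control layer; the new gain takes effect over the next block.
        self.target_gains[i, o] = gain
        self.delays[i, o] = min(int(delay_s * self.sr), self.max_delay - 1)

    def process(self, block):
        # block: (n_in, n_samples) array in -> (n_out, n_samples) array out
        n_in, n = block.shape
        n_out = self.gains.shape[1]
        out = np.zeros((n_out, n))
        ramp = np.linspace(0.0, 1.0, n)                    # 0 -> 1 across this block
        for i in range(n_in):
            # keep enough history in this input's delay line for the largest delay
            self.lines[i] = np.concatenate([self.lines[i], block[i]])[-(self.max_delay + n):]
            for o in range(n_out):
                g = self.gains[i, o] + (self.target_gains[i, o] - self.gains[i, o]) * ramp
                d = int(self.delays[i, o])
                start = len(self.lines[i]) - n - d
                out[o] += g * self.lines[i][start:start + n]
        self.gains[:] = self.target_gains                  # ramps finished for this block
        return out

Note that the delays here are switched abruptly; a real engine would need finer delay increments and interpolation to avoid clicks, which is exactly the smoothing problem mentioned below.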
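The gains at those crosspoints would come from whatever spatial algorithm is in use. As one example, a rough distance-based (DBAP-style) calculation, following the usual formulation as I understand it: a rolloff of R dB per doubling of distance, plus a small "spatial blur" so the maths doesn't blow up when a source lands exactly on a speaker. The function name and defaults are just for illustration.

import numpy as np

def dbap_gains(source_xy, speaker_xy, rolloff_db=6.0, blur=0.2):
    # source_xy: (2,) source position; speaker_xy: (n_speakers, 2) speaker positions.
    a = rolloff_db / (20.0 * np.log10(2.0))     # distance exponent; roughly 1.0 for 6 dB rolloff
    d = np.sqrt(np.sum((speaker_xy - source_xy) ** 2, axis=1) + blur ** 2)
    v = 1.0 / d ** a
    return v / np.sqrt(np.sum(v ** 2))          # normalise so total power stays constant

# e.g. when source 0 moves, refresh its row of the matrix:
# gains = dbap_gains(np.array([x, y]), speaker_positions)
# for o, g in enumerate(gains):
#     matrix.set_crosspoint(0, o, g, delay_s=current_delay[o])

Each time a source moves, a row of gains like this would be pushed into the matrix, and the ramping in the previous sketch takes care of the transition.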
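And on the control side, the DAW plug-in (or a bridge script reading its automation) only has to send small messages to the SAE. Something like the following, using the python-osc package; the "/sae/source/..." address space and the IP/port are entirely made up, as a real SAE would define its own namespace.

from pythonosc.udp_client import SimpleUDPClient

sae = SimpleUDPClient("192.168.1.50", 9000)     # SAE's address and OSC port (assumed)

def send_source_position(source, x, y, z, ramp_ms=50):
    # The SAE does the sample-accurate smoothing; ramp_ms just says how fast to glide.
    sae.send_message(f"/sae/source/{source}/pos",
                     [float(x), float(y), float(z), int(ramp_ms)])

# e.g. from an automation callback in the plug-in or bridge:
send_source_position(3, -2.0, 4.0, 1.2)

The important point is that the smoothing lives in the SAE, so the control messages can be relatively sparse.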
> From: Richard Foss <rich...@immersivedsp.com>
> Subject: Re: [Sursound] DBADP
> Date: 15 November 2020 at 20:03:04 GMT
> To: Surround Sound discussion group <sursound@music.vt.edu>
>
>> Current products do not allow progress to true Delta Stereophony (DBADP)
>
> Well, conceptually it should be possible if, beyond aux mixes, you have a
> further layer of mixes that can comprise aux bus sends (with controllable
> delays/filtering/volumes) as well as input channels. A possible problem is
> not having sufficiently small delay increments, and not having smoothing
> within the device. Anyway, it's worth doing some experimentation! Implementing
> DBAP or VBAP is fine.
>
>> DSP chips are now capable of providing it
>
> Yes, there is a Sharc DSP in the miniDSP speakers we use, and a controllable
> 32x2 matrix with delays/attenuation at the cross points.
>
> As you say, running Spat and a DAW is processor intensive. This was one of
> the reasons we have turned to using the processors in current devices to do
> the post-render mixing/delays. Having this capability in a speaker is great,
> because your processing capability grows with each speaker. Having it in an
> audio interface/mixing desk means that all the inputs - analog/USB/ADAT/… -
> can have spatialisation applied to them.