Simon Connor wrote:
...
> *First, is the video individual or projected? Is this HMD/Google Cardboard
> type situation? If the first, what are the audio/video sync latency
> tolerances for the piece?*

The video will be a single large-scale projection rather than individual
VR / Google Cardboard headsets. The audio will be in sync with the image.
I've not had a chance to test yet, so I'm not definite on tolerable latency,
but at a guess up to a couple of milliseconds if possible.

> *Second, is the sound scene a "shared" experience, in that do the actions
> of one person affect the sound heard by another? Is the scene dynamically
> generated/rendered in real-time, or is this basic off-line rendered audio
> HOA playback? Single point of view in the sound scene, or does each
> listener have their own view, and is it fixed or moving with the user's
> position?*

The sound scene is played back from offline-rendered audio: a fixed/linear
3OA audio stream, which I would then like to have decoded binaurally to each
user's headphones. For now this will be fixed rather than constantly moving.

This audio stream would be the same for each user - a binaural mix, but
modified depending on their own unique head movements. So the data flow
would be: audio and video played in sync, audio sent (potentially wirelessly)
to approximately six pairs of headphones with head trackers, each user's head
tracker sending its data back to that user's individual binaural decoder, and
each user then receiving their own modified stream from it.
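The per-listener step described above amounts to counter-rotating the fixed
3OA stream by each head tracker's yaw, then collapsing the result to two
ears. A minimal sketch in Python/NumPy, assuming ACN channel ordering; the
2x16 decode matrix is a placeholder for a real HRTF-based binaural renderer,
and the signs in the rotation depend on the spherical-harmonic convention
in use:

import numpy as np

ORDER = 3
N_CH = (ORDER + 1) ** 2  # 16 channels for third order


def yaw_rotation_matrix(yaw_rad: float, order: int = ORDER) -> np.ndarray:
    """Rotation about the vertical (z) axis for real SH in ACN ordering.

    Rotation about z only mixes components with the same degree l and
    |m|, so the matrix is sparse: cos/sin pairs on the +/-m channels.
    """
    n = (order + 1) ** 2
    rot = np.eye(n)
    for l in range(order + 1):
        for m in range(1, l + 1):
            i_neg = l * (l + 1) - m  # ACN index of the -m component
            i_pos = l * (l + 1) + m  # ACN index of the +m component
            c, s = np.cos(m * yaw_rad), np.sin(m * yaw_rad)
            rot[i_pos, i_pos] = c
            rot[i_pos, i_neg] = s
            rot[i_neg, i_pos] = -s
            rot[i_neg, i_neg] = c
    return rot


def render_listener(ambi_block: np.ndarray, yaw_rad: float,
                    decode: np.ndarray) -> np.ndarray:
    """Counter-rotate a (16, n_samples) 3OA block by the listener's
    yaw and decode it to a (2, n_samples) binaural block."""
    rotated = yaw_rotation_matrix(-yaw_rad) @ ambi_block
    return decode @ rotated


# Placeholder decode: a real renderer would instead convolve with HRIRs
# (e.g. via virtual loudspeakers or a MagLS design) rather than apply a
# static frequency-independent 2x16 matrix.
decode_matrix = np.random.default_rng(0).normal(size=(2, N_CH)) * 0.1

block = np.zeros((N_CH, 512))    # one 512-sample 3OA frame
head_yaw = np.deg2rad(30.0)      # from this listener's head tracker
stereo = render_listener(block, head_yaw, decode_matrix)

Each headphone user would run this same step with their own yaw value, which
is why the stream everyone receives is identical but the rendered result is
unique per listener.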
Maybe I am being silly, but would a much simpler solution not be to use
speakers? The video is canned and projected. The soundfield is fixed
relative to the video. Why all these ear muffs?

Regards,
Martin
--
Martin J Leese
E-mail: martin.leese stanfordalumni.org
Web: http://members.tripod.com/martin_leese/