Dear Albert,

A nice proposal, using pure tones, but I don't think that will work. It isn't that it is CPU intensive; it's more that you need to ask the right question...
While not exactly the solution you propose, we have worked on a method where the listener can select from a shortlist of HRTFs which have been shown to be quite representative of a large population from the LISTEN database: B. Katz and G. Parseihian, "Perceptually based head-related transfer function database optimization," J. Acoust. Soc. Am., vol. 131, no. 2, pp. EL99-EL105, 2012, doi:10.1121/1.3672641.

We are working on some other methods, conceptually similar to what you propose, but they are not ready for public release yet... some, for example, based on making measurements of the dimensions of the ear. Other teams, for example on the French BiLi project (bili-project.org), are looking at other methods too. It is a hot topic right now.

Welcome to the world of binaural!

-Brian

--
Brian FG Katz, Ph.D., HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex, France
Phone. +33 (0)1 69 85 80 67 - Fax. +33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/

----------------------------------------------------------------------

Message: 1
Date: Tue, 29 Dec 2015 02:22:09 -0500
From: Albert Leusink <albertleus...@gmail.com>
To: sursound@music.vt.edu
Subject: [Sursound] HRTF optimization by using tones/noise?
Message-ID: <f2b2d18c-2d65-4cb6-ab4b-06a21d940...@gmail.com>
Content-Type: text/plain; charset=utf-8

Good evening,

It's been very informative reading this list and learning from all of you experts. I'm an experienced audio engineer who suddenly discovered Ambisonics due to the whole VR 360 explosion. (Although I did make some recordings with a Calrec MK4 in the mid-nineties; we would just mix them down to stereo, not knowing what to do with these 'B-format' outputs, thinking that they were used by the 'B'BC only... shameful, I now realize... we were young... :-)

As I'm very new to this, many questions remain unanswered even after reading this list and other resources thoroughly, and hopefully some of you can take the time to answer them. I'll try to put them in separate threads so we can tackle the issues one by one, unless you prefer otherwise; let me know.

Question 1:

I understand that a big variable for localization in Ambisonics-to-binaural decoding is picking the right HRTF. Now, is there a method whereby we could use test tones or pink/white noise to approximate the subject's HRTF and then use the closest measured HRTF from, e.g., the IRCAM or CIPIC database?

For example, say we use 100 Hz, 1 kHz, and 10 kHz, and the listener has to press a button on their device when they hear each tone exactly in the middle, exactly at -180°, or elsewhere. Or use regular and phase-reversed tones, and the subject has to calibrate when they are loudest or softest?

Is this a ridiculous idea, or does it have some standing? Would it be very CPU intensive, or just a matter of supplying a spreadsheet with the IRCAM/CIPIC measurements and comparing the subject's answers to that?

Surely it's far from perfect, but what other solutions do we currently have to give binaural listeners the best possible outcome, apart from getting themselves measured or going through a whole list of HRTFs?

Thanks!
Albert
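For what it's worth, the "spreadsheet" step Albert describes could be sketched roughly as below. This is only a minimal illustration of nearest-match selection, assuming each database HRTF has already been reduced offline to a few per-tone localization predictions; the subject IDs, feature values, and function names are placeholders, not real IRCAM/CIPIC data or any existing tool.

```python
# Hypothetical sketch: match a listener's test-tone responses to the
# "closest" HRTF in a pre-reduced database. All numbers are invented.

import numpy as np

# Each candidate HRTF reduced to a small feature vector, e.g. the azimuth
# (degrees) at which a listener with that HRTF would report hearing each
# test tone (100 Hz, 1 kHz, 10 kHz) rendered at 0 degrees (front).
candidate_hrtfs = {
    "subject_1002": np.array([0.0, -5.0, 12.0]),
    "subject_1015": np.array([2.0, 8.0, -20.0]),
    "subject_1037": np.array([-1.0, -2.0, 3.0]),
}

def pick_closest_hrtf(listener_responses, candidates):
    """Return the database entry whose predicted responses are nearest
    (simple Euclidean distance) to the listener's reported azimuths."""
    listener = np.asarray(listener_responses, dtype=float)
    best_id, best_dist = None, np.inf
    for subject_id, predicted in candidates.items():
        dist = np.linalg.norm(listener - predicted)
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return best_id, best_dist

# Example: the listener reported the three tones slightly left, centred,
# and a little right of centre.
chosen, error = pick_closest_hrtf([-2.0, 0.0, 5.0], candidate_hrtfs)
print(f"Closest match: {chosen} (distance {error:.1f})")
```

Whether three pure tones carry enough individual spectral information to make such a match meaningful is exactly the point Brian raises above; the sketch only shows the bookkeeping, not the perceptual validity.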