Virtual auditory scenes created by time reversal mirror technique
The present paper describes the generation of virtual auditory scenes containing multiple virtual sources at different locations by means of the time reversal mirror (TRM) method. This technique, developed by Mathias Fink, can be used to focus an acoustic signal at a particular point in space: time-reversing the transfer function between a TRM array and an acoustic source produces a spatio-temporal acoustic focus at the source’s original location. This time-reversed focus therefore behaves as a “virtual” source in the outbound direction with respect to the TRM. Provided that an acoustic impulse has previously been recorded by the TRM device, a virtual audio source can be generated at the impulse’s location by convolving the TRM impulse response with an audio signal. Since the system is linear, impulse responses belonging to different locations can be superposed, each convolved with its own audio signal, in order to shape the sound field of the auditory scene. Numerical simulations implemented to explore this method placed arbitrary audio signals at selected positions within an auditory scene. The results were evaluated by comparing data from spatially localized sources against the virtual sources generated by the TRM technique.
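The linearity argument above can be sketched numerically: each virtual source contributes the convolution of its audio signal with the time-reversed impulse response recorded at its location, and the scene is the sum of these contributions. The following Python/NumPy fragment is an illustrative sketch, not the paper’s implementation; the function name and data layout are assumptions.

```python
import numpy as np

def trm_scene(signals, impulse_responses):
    """Superpose virtual sources for one TRM element (illustrative sketch).

    Each audio signal is convolved with the time-reversed impulse
    response recorded from its target location; by linearity, summing
    the convolutions yields the combined driving signal for the scene.
    """
    out_len = max(len(s) + len(h) - 1
                  for s, h in zip(signals, impulse_responses))
    scene = np.zeros(out_len)
    for s, h in zip(signals, impulse_responses):
        # Time-reverse the recorded transfer function before convolution,
        # so the emitted field refocuses at the original source location.
        y = np.convolve(s, h[::-1])
        scene[:len(y)] += y
    return scene
```

Because the operation is linear, synthesizing all sources at once is equivalent to summing the scenes generated for each source individually, which is the property the method relies on when adding impulse responses from different locations.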