Rendering Environmental Noise Planning Models in Virtual Reality
In building and infrastructure projects, sound design and requirement specification are often complicated by the difficulty of understanding how the planned built environment will sound. Information about sound is almost exclusively provided as sound pressure levels and sound reduction indices, and it is difficult to grasp how an environment will be perceived from such data alone. VR models with sound offer an experience that is much easier to comprehend. In this study, VR models were developed based on first-order Ambisonics recordings. Such recordings provide spatial information and can be rendered in real time based on the listener’s orientation. However, the recordings must be made at discrete points, and a model for cross-fading and mixing was therefore developed. Recordings of road and railway sounds were made in a two-dimensional grid and were mixed and cross-faded according to the position of the listener. The sound levels were adjusted to match levels calculated with noise planning models. The spatial density of the recording grid, the cross-fading function, and the mix of recordings were varied, and the realism of the resulting models was assessed in listening tests. The results give guidance on how Ambisonics recordings can be mixed to achieve realistic sound in VR.
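As an illustration of how position-based cross-fading over a grid of recordings could be realized, the sketch below computes bilinear cross-fade gains for the four recordings surrounding a listener position, mixes the first-order Ambisonics buffers, and applies a calibration gain to match a target level. The function names, the bilinear weighting, and the coordinate convention are assumptions for illustration only; the abstract does not specify the cross-fading function actually used (it was one of the varied parameters).

```python
import numpy as np

def bilinear_weights(pos, spacing):
    """Cross-fade gains for the four grid recordings around a listener.

    pos: (x, y) listener position in metres (hypothetical coordinate frame
    with recording points at integer multiples of `spacing`).
    Returns the (i, j) index of the lower-left recording point and the
    four gains (lower-left, lower-right, upper-left, upper-right),
    which sum to 1 so the overall level stays roughly constant.
    """
    x, y = pos
    i, j = int(np.floor(x / spacing)), int(np.floor(y / spacing))
    fx, fy = x / spacing - i, y / spacing - j
    gains = np.array([(1 - fx) * (1 - fy),   # lower-left
                      fx * (1 - fy),         # lower-right
                      (1 - fx) * fy,         # upper-left
                      fx * fy])              # upper-right
    return (i, j), gains

def mix_recordings(recordings, gains):
    """Mix four first-order Ambisonics buffers (each shaped 4 x N samples)."""
    return sum(g * r for g, r in zip(gains, recordings))

def calibration_gain(level_target_db, level_measured_db):
    """Linear gain that shifts a recording to the level predicted by a
    noise planning model (dB difference converted to amplitude ratio)."""
    return 10.0 ** ((level_target_db - level_measured_db) / 20.0)
```

A denser grid shrinks the region each bilinear blend has to cover, which is one way to read the abstract's trade-off between grid density and perceived realism.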