Theoretical Underpinnings of Statistical Processing of Complex Sounds
To make sense of our auditory surroundings, the brain tracks sound sources as they evolve in time, collecting information over the recent past to construct a model of the external world. Two processes are at work here: 1) inferring the temporal window of relevant context, and 2) building efficient representations of the information within that window. Although both processes are pervasive in everyday listening, the computational mechanisms behind them remain poorly understood. We explore a perceptual model, rooted in Bayesian theories of perception, that incorporates both processes: it infers the relevant context window from the acoustic input and sequentially builds statistical representations of sounds. Drawing on experimental results from the laboratory, we use the model as a springboard to interpret how statistical tracking of the acoustic environment facilitates the perception of complex sounds.
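To give an intuition for the two processes described above, the sketch below tracks the running mean and variance of a one-dimensional acoustic feature stream, growing its effective context window while the input is statistically consistent and collapsing it when a sample is surprising. This is an illustrative toy only: the surprise threshold, the reset rule, and the feature stream itself are assumptions for demonstration, not the model presented in this work.

```python
import random

def track_statistics(samples, surprise_thresh=3.0):
    """Sequentially track the mean/variance of a feature stream.

    The effective context window n grows with consistent input and
    collapses when a sample deviates by more than surprise_thresh
    standard deviations, loosely mimicking joint inference of the
    relevant temporal context and its summary statistics.
    (Toy sketch; not the authors' actual Bayesian model.)
    """
    mean, var, n = 0.0, 1.0, 0.0  # var starts at a unit "prior"
    history = []
    for x in samples:
        if n > 2 and abs(x - mean) > surprise_thresh * var ** 0.5:
            # Surprising input: collapse the context window and
            # restart the estimate at the new observation.
            n, mean = 1.0, x
        else:
            n += 1.0
            delta = x - mean
            mean += delta / n                       # incremental mean
            var += (delta * (x - mean) - var) / n   # incremental variance
        history.append((mean, n))
    return history

# Stationary segment followed by an abrupt change in the source statistics.
random.seed(0)
stream = ([random.gauss(0.0, 1.0) for _ in range(200)]
          + [random.gauss(5.0, 1.0) for _ in range(200)])
hist = track_statistics(stream)
```

After the change point, the tracker's mean estimate migrates toward the new source statistics far faster than a fixed long-window average would, because the surprise-triggered reset shortens the context window exactly when the old context stops being relevant.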