Temporal Context Invariance Reveals Neural Processing Timescales in Human Auditory Cortex
Natural sounds like speech and music are structured at many timescales, but it remains unclear how these diverse timescales are represented in cortex. Do processing timescales increase along the putative cortical hierarchy? Are there distinct processing timescales for speech or music? Is there hemispheric or anatomical specialization for processing particular timescales? Answering these questions has been challenging because there is no general method for estimating integration periods: the time window within which stimulus features alter the neural response. Here, we introduce a simple experimental paradigm (the “temporal context invariance” paradigm) for inferring the integration period of any time-varying response. We present sequences of natural sound segments in which the same segment occurs in two different contexts, and we measure how long the segments must be for the response to become invariant to the surrounding context. Applying this paradigm to human electrocorticography data from epilepsy patients, we map integration periods throughout human auditory cortex. The resulting map reveals a clear gradient in which integration periods grow substantially with distance from primary auditory cortex, supporting hierarchical models. We also show that speech selectivity first emerges at timescales of approximately 300-500 ms, suggesting selectivity for syllabic or word-level structure.
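The logic of the paradigm can be illustrated with a minimal simulation. The sketch below is not the study's analysis pipeline: it assumes a hypothetical model neuron that integrates its input over a fixed boxcar window, embeds the same stimulus segment in two random contexts, and measures how the cross-context correlation of the response grows with segment duration. All parameter values (sampling rate, window length, segment durations) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 1000       # samples per second (illustrative)
WINDOW = 100    # true integration period of the model neuron: 100 ms (assumed)

def response(stim, window=WINDOW):
    """Hypothetical model neuron: boxcar integration over `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(stim, kernel, mode="same")

# A long shared segment; shorter segments are prefixes of it.
shared_segment = rng.standard_normal(FS)

def context_correlation(seg_len):
    """Correlate responses to the same segment embedded in two random contexts.

    If the segment is much longer than the integration period, most of the
    response to the segment is unaffected by the preceding context, so the
    correlation approaches 1; for short segments, context dominates.
    """
    seg = shared_segment[:seg_len]
    ctx_a = rng.standard_normal(seg_len)
    ctx_b = rng.standard_normal(seg_len)
    r_a = response(np.concatenate([ctx_a, seg]))[seg_len:]
    r_b = response(np.concatenate([ctx_b, seg]))[seg_len:]
    return np.corrcoef(r_a, r_b)[0, 1]

for ms in (25, 50, 200, 800):
    print(f"{ms:4d} ms segment: cross-context r = {context_correlation(ms):.2f}")
```

In this toy model, correlation becomes high once the segment duration comfortably exceeds the 100-ms integration window, which is the quantity the paradigm recovers from real neural responses.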