Audio-visual stimuli for the evaluation of speech-enhancing algorithms
* Presenting author
The benefit from speech-enhancing algorithms in hearing devices may depend not only on the acoustic environment, but also on the audio-visual perception of speech, e.g., when lip reading, and on other visual cues. In particular, the functioning of speech-enhancing algorithms depends on the motion behaviour of the user, which in turn depends on visual cues.

In this presentation we introduce various audio-visual stimuli used for the evaluation of speech-enhancing algorithms in hearing devices. The stimuli include video recordings of the Oldenburg sentence test (OLSA), real-time animation of lip movement for animated characters, and complex audio-visual environments. We discuss the effects of this material on speech perception and on gaze and head motion behaviour, and outline applications of these stimuli.

Results show that video recordings of the OLSA material improve speech reception thresholds. This benefit cannot be found with lip movement animated using a simple vocal tract model. However, the animations were found to be sufficient to elicit natural motion behaviour.

This presentation is related to contributions at this conference by Llorach et al. on details of audio-visual speech test material, and by Hendrikse et al. on natural motion behaviour and its influence on hearing aid algorithm performance.