Automatic Speech and Audio Processing (ASAP)
Human listeners excel at recognizing speech, even in adverse acoustic conditions such as high noise levels or strong reverberation. Automatic speech recognition, by contrast, breaks down in many conditions that humans handle with ease. Our research explores the link between the human auditory system and automatic speech processing. Some of the questions we are looking into are:
- Which characteristics of the human auditory system are missing from current speech recognizers, and how can we implement them?
- Can speech processing algorithms be used to build better models of human speech intelligibility, so that we can predict what the average listener will understand in a specific acoustic scene?
- Can our models help us understand the cortical processes underlying speech comprehension in the human brain?
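To illustrate the kind of intelligibility prediction mentioned above, here is a minimal sketch in the spirit of classical Articulation-Index-style metrics: it maps per-frequency-band signal-to-noise ratios to audibility factors and combines them into a single score between 0 and 1. This is a simplified illustration, not one of our models; the function name, the [-15, 15] dB clipping range, and the default equal band weights are assumptions chosen for clarity.

```python
import numpy as np

def band_snr_intelligibility(speech_power, noise_power, band_weights=None):
    """Simplified Articulation-Index-style intelligibility proxy (illustrative only).

    speech_power, noise_power: per-band power estimates on a linear scale.
    Returns a score in [0, 1]; higher means more intelligible.
    """
    speech_power = np.asarray(speech_power, dtype=float)
    noise_power = np.asarray(noise_power, dtype=float)
    # Per-band SNR in dB, clipped to the range in which speech cues
    # are assumed to contribute (an assumption of this sketch).
    snr_db = 10.0 * np.log10(speech_power / noise_power)
    snr_db = np.clip(snr_db, -15.0, 15.0)
    # Map each band's clipped SNR linearly onto a [0, 1] audibility factor.
    audibility = (snr_db + 15.0) / 30.0
    if band_weights is None:
        # Assumption: weight all bands equally if no importance weights are given.
        band_weights = np.full_like(audibility, 1.0 / audibility.size)
    return float(np.sum(band_weights * audibility))
```

For example, speech and noise of equal power in every band (0 dB SNR) yields a score of 0.5, while speech far above the noise floor saturates at 1.0. Full intelligibility models additionally account for reverberation, band importance functions, and listener-specific hearing thresholds.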
Answers to these questions could help to improve assistive devices for people with disabilities, to accelerate the development of hearing aid algorithms, and to better understand our active auditory system in general. Follow the links on the left side to learn more about specific projects.