The aim of the collaborative research center "The Active Auditory System" of the University of Oldenburg, the University of Magdeburg, and the Leibniz Institute for Neurobiology in Magdeburg is to understand the mechanisms underlying auditory stimulus selection under everyday real-life conditions. Under such conditions we analyze acoustic scenes with ease and can listen selectively to the sounds from one source in a mixture of sounds originating from various sources (e.g., the cocktail-party effect). How the auditory system achieves this separate perception of sounds from one source (i.e., auditory object formation) in complex acoustic scenes is not yet well understood.
Auditory object formation and scene analysis are driven by two types of processes:
- bottom-up processes, in which the received physical signal drives the analysis of stimulus characteristics;
- top-down processes, in which the auditory system forms hypotheses based on previous input that guide the subsequent stimulus evaluation.
The integrative approach of the contributing research groups investigates stimulus processing both at the various levels of the auditory pathway and in the parallel pathways of the auditory system that are specialized for different stimulus features. The mechanisms of auditory object formation and scene analysis are studied by combining psychophysical and neurophysiological methods, which allows the neuronal representation of auditory objects to be related to their perception. Models of processing in the auditory pathway allow the validity and robustness of the proposed mechanisms to be assessed. The results of these approaches to the study of auditory scene analysis will be used to develop signal processing algorithms that are optimized for segregating sound sources in complex acoustic scenes and that promote the separate analysis of sounds from specific sources. This research will help to build better speech-based interfaces for human-computer interaction, and to develop better processing schemes for hearing aids and cochlear implants that will give patients an improved ability to communicate and participate in social life.
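As a minimal illustration of what such time-frequency source segregation can look like, the sketch below applies an ideal binary mask, a standard baseline in computational auditory scene analysis (not the center's own algorithm), to a synthetic mixture of two tones; all signal parameters here are invented for the example:

```python
import numpy as np
from scipy.signal import stft, istft

# Synthetic scene: two simultaneous "sources" at well-separated frequencies.
fs = 8000                                  # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
s1 = np.sin(2 * np.pi * 440 * t)           # source 1: 440 Hz tone
s2 = np.sin(2 * np.pi * 1800 * t)          # source 2: 1800 Hz tone
mix = s1 + s2                              # mixture received at the "ear"

# Short-time Fourier transforms of the sources and the mixture.
_, _, S1 = stft(s1, fs, nperseg=256)
_, _, S2 = stft(s2, fs, nperseg=256)
_, _, M = stft(mix, fs, nperseg=256)

# Ideal binary mask: keep each time-frequency cell where source 1
# dominates the mixture, discard the rest.
mask = np.abs(S1) > np.abs(S2)
_, est1 = istft(M * mask, fs, nperseg=256)

# The masked reconstruction should resemble source 1, not source 2.
n = min(len(est1), len(s1))
corr1 = np.corrcoef(est1[:n], s1[:n])[0, 1]
corr2 = np.corrcoef(est1[:n], s2[:n])[0, 1]
print(f"correlation with s1: {corr1:.2f}, with s2: {corr2:.2f}")
```

The mask is "ideal" because it is computed from the clean sources, which are unknown in practice; real segregation algorithms must estimate such a mask from the mixture alone, which is exactly where the bottom-up and top-down cues studied here come in.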