Which physical characteristics distinguish a speech signal from a bird's song or from engine noise?
Can we separate concurrent streams of acoustic signals into their constituent objects?
How does the brain process information perceived by the sensory system?
Can we teach computers to mimic such processes and learn from observed data?
- Classification and machine learning: Recognition of speech and non-speech sounds
- Statistical signal processing: Blind source separation, noise reduction
- Biological signal modelling: Analysis of neuronal processes during perception and recognition of acoustic stimuli
- Statistical learning methods
- Analysis of large data sets of high-dimensional natural signals
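As a toy illustration of the classification theme above, the sketch below separates a tone-like frame from a noise-like one using the zero-crossing rate, a classical feature for speech/non-speech discrimination. All names, signals, and the threshold are hypothetical choices for this example, not the group's actual method.

```python
import math
import random

def zero_crossing_rate(signal):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(signal) - 1)

def classify(frame, threshold=0.1):
    """Label a frame by comparing its zero-crossing rate to a threshold.

    Periodic, voiced-speech-like frames cross zero rarely; broadband
    noise crosses roughly every other sample. The threshold 0.1 is an
    illustrative value, not a tuned one.
    """
    return "tone-like" if zero_crossing_rate(frame) < threshold else "noise-like"

random.seed(0)
n = 8000
# A low-frequency sinusoid stands in for a voiced speech frame ...
tone = [math.sin(2 * math.pi * 120 * t / n) for t in range(n)]
# ... and uniform white noise stands in for engine-like noise.
noise = [random.uniform(-1, 1) for _ in range(n)]

print(classify(tone))   # tone-like
print(classify(noise))  # noise-like
```

Real systems replace this single hand-set threshold with many spectral features and a learned classifier, but the pipeline shape (signal → feature → decision) is the same.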