Medizinische Physik, Universität Oldenburg
Cocktail Parties and Hearing Aids: Neurosensory analysis of the auditory system and its applications
One of the most common complaints of hearing-impaired people (about 14% of our aging population!) is their "distorted" perception of sound and their inability to follow a conversation in a lively environment - the so-called "cocktail-party effect". Since even the most recent hearing aids provide only limited benefit for these patients, a broad, multi-disciplinary research effort is required to advance our knowledge about the hearing process, its impairments, and possible ways to overcome these problems. The basic concept of our research group is therefore to measure the properties of the normal and impaired auditory system with both psychophysical and physiological methods, and to apply this knowledge to areas with important practical problems, such as:
- "optimum" diagnostics of hearing disorders using advanced computer-controlled methods,
- noise reduction in hearing aids using "intelligent" signal processing strategies that adapt to the respective acoustical situation,
- automatic speech recognition in noisy environments (comparable to the performance of the human ear) in order to allow the computer to "listen with human-like ears"!
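The adaptive noise-reduction strategies mentioned above are a broad research topic in themselves, but the classic single-channel building block they relate to, spectral subtraction, can be sketched in a few lines. This is only a toy illustration under simplifying assumptions (stationary noise, a separate noise-only reference recording, 50% overlap-add with a Hann window); the function name and parameters are made up for this sketch and are not the group's actual algorithm.

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame_len=256, floor=0.01):
    """Toy single-channel noise reduction: subtract an average noise
    magnitude spectrum (estimated from a noise-only reference) from
    each frame of the noisy signal, then resynthesize by overlap-add."""
    window = np.hanning(frame_len)
    hop = frame_len // 2
    # Average magnitude spectrum of the noise-only reference signal
    noise_frames = [noise_ref[i:i + frame_len] * window
                    for i in range(0, len(noise_ref) - frame_len, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)
    out = np.zeros_like(noisy)
    for i in range(0, len(noisy) - frame_len, hop):
        frame = noisy[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise estimate, keeping a small spectral floor
        # to avoid negative magnitudes ("musical noise" is ignored here)
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        out[i:i + frame_len] += np.fft.irfft(clean_mag * np.exp(1j * phase))
    return out
```

Real hearing-aid algorithms must additionally track non-stationary noise, adapt to the acoustic situation, and avoid audible processing artifacts, which is precisely where the research questions lie.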
Although the scope of this work includes aspects of medicine (ENT medicine, audiology), psychology (psychophysics, perception research), neurobiology (EEG recording and analysis), communication engineering, and computer science, the main approach to tackling both fundamental and applied issues in the group's research originates from physics: based on the analysis of empirical data, a model of the underlying processes is constructed (such as the "Oldenburg model" of the effective signal processing in the auditory system), which in turn is tested against real data. This interplay between experiment and model resembles the alliance between experimental and theoretical physics, from which most of the methods employed originate.