1st prize for H4a research team in science competition
PRESS RELEASE
Researchers from the Cluster of Excellence Hearing4all win an international competition with intelligent software for hearing aids to better understand speech in noisy environments
A team of scientists from the University of Oldenburg has won first prize in the “Clarity Enhancement Challenge”, a competition organised by British universities that challenges entrants to use machine learning to improve the performance of hearing aids. In tests with hearing-impaired listeners, the solution developed by Oldenburg researchers from the DFG-funded Cluster of Excellence Hearing4all achieved the best results. “Winning this top-class international competition is proof of the high quality of hearing research in Oldenburg,” says Professor Simon Doclo, head of the team and Director of the Department of Medical Physics and Acoustics at the University of Oldenburg. “This is very motivating for our further work, and we hope to be able to contribute to better care for people with hearing impairments.”
Everyone knows the situation: when talking in a restaurant, at a party or in a busy railway station, it can be difficult to understand each other because background noise masks the conversation. This situation is particularly challenging for people with hearing impairments. People with normal hearing can perceive sound precisely in space and localise a sound source exactly. This enables the brain to focus its attention on the sound source and “filter out” disturbing noises. Although the performance of hearing aids has steadily increased in recent years, filtering still does not work as well as it does for people with normal hearing. Improving speech intelligibility in noisy environments is therefore still one of the most important challenges in the development of hearing aids and hearing implants.
One aim of hearing research is to programme the processors in hearing aids so that they can distinguish relevant sound sources from irrelevant ones and amplify or suppress the corresponding sound signals. Machine learning algorithms, which are trained on large amounts of data and learn to recognise patterns and regularities in the recorded signals, are increasingly being used for this purpose. The key to the Oldenburg scientists’ success was the use of a binaural system that processes the signals from both ears and can thus locate sound sources in space. Interfering signals and reverberation are reduced in a two-stage filter system that combines traditional multi-channel signal processing with deep neural networks. In a simulated test situation with digital audio samples, this solution won over the 45 test listeners with hearing impairments and prevailed against numerous competition entries from all over the world.
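The two-stage idea described above can be illustrated with a minimal, hypothetical sketch: a basic delay-and-sum beamformer combines the two ear signals (standing in for the multi-channel signal processing stage), and a simple Wiener-like spectral gain stands in for the deep neural network that would estimate the suppression mask in the actual system. All function names, parameters, and the toy signals are illustrative assumptions, not the team's published method.

```python
import numpy as np

def delay_and_sum(left, right, delay_samples):
    """Stage 1 (sketch): align the right-ear signal to the left-ear
    signal and average them -- a basic two-channel beamformer that
    reinforces the target source and averages out independent noise."""
    right_aligned = np.roll(right, -delay_samples)
    return 0.5 * (left + right_aligned)

def spectral_gain(signal):
    """Stage 2 (sketch): per-frequency-bin suppression. A crude
    median-based noise estimate feeds a Wiener-like gain; in the real
    system a trained deep neural network would estimate this mask."""
    spectrum = np.fft.rfft(signal)
    power = np.abs(spectrum) ** 2
    noise_est = np.median(power)          # crude noise-power estimate
    gain = power / (power + noise_est)    # gain in (0, 1) per bin
    return np.fft.irfft(gain * spectrum, n=len(signal))

# Toy usage: a sinusoidal "speech" source that arrives 3 samples later
# at the right ear, buried in independent noise at each ear.
rng = np.random.default_rng(0)
t = np.arange(1024)
source = np.sin(2 * np.pi * 0.05 * t)
left = source + 0.3 * rng.standard_normal(t.size)
right = np.roll(source, 3) + 0.3 * rng.standard_normal(t.size)

enhanced = spectral_gain(delay_and_sum(left, right, delay_samples=3))
```

In this toy setup the beamforming stage alone roughly halves the noise power, which mirrors why localising a source binaurally helps before any learned filtering is applied.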