A world-first hearing aid, trained on 12 million real-life sound scenes to support how the brain naturally works, has been launched in New Zealand.
Thanks to groundbreaking artificial intelligence (AI), the Oticon More aid, rather than focusing only on speech, allows the wearer to hear more and helps the brain interpret and focus on what it needs to hear.
The device was developed after research revealed people with hearing loss need access to all sounds for their brains to make sense of their environment.
This week the hearing aid received a prestigious Consumer Electronics Association Award in the competitive Health & Wellness and Wearable Technologies categories. The international awards programme annually selects the best of the best in consumer electronics.
Oticon More uses one of the most advanced technologies available, a Deep Neural Network platform trained on 12 million everyday-life sound scenes, collected in real-world environments using a special 360-degree spherical microphone.
As a result, the hearing aid has learned to recognise the full range of sound types, their details and how they should ideally sound.
Oticon New Zealand National Sales Manager, Corey Ackerman said, “Most people think we hear with our ears, but our brains are our main tool for hearing”.
“This new hearing aid uses the Deep Neural Network to help the brain hear sound in the most natural and effective way.
“Traditional hearing aids block out surrounding sound, but Oticon More scans and analyses a sound scene 500 times per second, allowing the brain to process key sounds, such as someone else speaking or a bird chirping, even in a noisy, crowded environment.”
It is the world’s first hearing aid designed to support the brain in its most natural and effective way of working: taking in a full sound scene.
“We have long advocated the need to support the brain through hearing technology and as a consequence, we have pioneered many technologies that are already helping our users to lead fuller lives,” said Ackerman.
“We understand that it is essential to give the brain as much sound information as possible in order to hear properly; our recent research revealed this is the best way for the brain to handle sound.
“When you limit what you can hear to just a single person speaking, which most hearing aids do, your brain is forced to work harder in an unnatural way, and you can be cut off from other conversations around you.
“By helping the brain to process sound in the most natural way, we will better help reduce the health and life problems associated with untreated hearing loss.
“Hearing loss often forces people to avoid situations with too much noise, but Oticon’s progress in the use of AI is a quantum leap in creating natural, clear, complete and balanced sounds. We hope this advancing sonic technology will deliver greater freedom for many.”
The device, which can be linked to compatible smartphones, also allows users to directly stream music and phone calls into their ear and even connects to the TV and computers with the use of additional accessories.
It can also link in with a household’s smart devices, including Alexa, doorbells, lighting and even a kettle.
Compared with Oticon’s previous-generation hearing aids, it offers a clearer and more distinct contrast between sounds, something that conventional technology hasn’t previously been able to deliver.