Hearing aid with AI and “telepathy”

Hearing aids are already marvels of technology. But scientists promise that future devices will border on the "fantastic".

"Vibrating air does not become sound until it encounters an ear," observed the physicist and educator Georg Christoph Lichtenberg in the eighteenth century. Then as now, the ultrafine hair cells in the inner ear become markedly less responsive as we age. And good hearing matters, because it involves far more than merely recognizing noises. Besides helping us play an active part in life and detect hazards, studies suggest that a well-tuned hearing system can also measurably improve cognitive performance, presumably because it frees up more "working memory".

Hardware and software can now compensate quite well for diminishing hearing capacity. The most important milestone here was probably the development of so-called binaural hearing (ear-to-ear synchronization), which won the German Future Prize in 2012. A spatial impression arises only through the interaction of both ears. Since each ear usually has its own hearing aid with its own individual settings, the two devices have to communicate, using mathematical methods that automatically correlate the two hearing impressions (steam hammer from the right, speech from the left). This creates more "space" and makes speech easier to understand.
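The simplest cue that two synchronized ears provide is the interaural time difference: the tiny delay between a sound reaching one ear and the other. The sketch below is only a hypothetical illustration of that idea (it is not any manufacturer's algorithm); it assumes NumPy and estimates the delay by cross-correlating the left and right microphone signals.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (ITD): the lag, in seconds,
    by which the right-ear signal trails the left-ear signal. A positive
    value means the sound reached the left ear first, i.e. the source
    lies to the listener's left."""
    corr = np.correlate(right, left, mode="full")   # correlation at all lags
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs

# Toy example: the same click arrives 0.5 ms later at the right ear
fs = 16_000
click = np.zeros(1024)
click[100] = 1.0
left = click
right = np.roll(click, 8)                           # 8 samples = 0.5 ms
print(f"ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")   # ~ +0.50 ms
```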

Speech intelligibility is further improved by directional microphones, which pick up speech mainly from the front and filter out ambient noise. For hearing aid wearers, the desired direction is usually wherever they are looking. The latest solutions can also turn their focus towards people who are speaking, for example fellow passengers in a car.
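The classic way to make two closely spaced microphones favour sound from the front is a first-order "delay and subtract" arrangement. The following sketch is a deliberately simplified illustration under assumed values (microphone spacing, speed of sound, whole-sample delays), not the design of any specific product.

```python
import numpy as np

def directional_mic(front, rear, fs, spacing_m=0.012, c=343.0):
    """First-order differential ("delay and subtract") directional
    microphone: delay the rear-microphone signal by the acoustic travel
    time between the two mics, then subtract it from the front signal.
    Sound arriving from behind then cancels out (a null towards the back),
    while sound from the front passes through."""
    delay = max(1, int(round(spacing_m / c * fs)))  # travel time in samples
    delayed_rear = np.concatenate([np.zeros(delay), rear[:-delay]])
    # Real devices use fractional-sample delays and frequency equalization;
    # rounding to whole samples keeps this sketch simple.
    return front - delayed_rear
```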

Intelligent hearing aid

One development that already sounds almost fantastic comes from Columbia University. It combines artificial intelligence with a kind of mind-reading, enabling people with hearing difficulties to follow conversations among several people even in a noisy environment. The device receives not only the audio of the various voices but also neural signals recorded from its wearer's brain. If the wearer focuses their attention on one particular person, only that person's voice is amplified.

The approach combines attention decoding with speech processing. As in many recent breakthroughs, deep neural networks (deep learning) are used to separate the speakers and decode the brain signals. Each voice in the input signal can then be compared individually with the user's neural signals; if one of them correlates with the measured brain pattern, that voice is amplified.
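A rough sketch of that selection step might look like the following. It assumes the speaker separation and the EEG-based envelope decoding have already been done by other components (those are the deep-learning parts); the function names, frame size, and gain values are illustrative and are not the Columbia team's actual implementation.

```python
import numpy as np

def amplify_attended_voice(separated_voices, eeg_envelope, fs, frame_s=0.01):
    """Given voices already pulled apart by a separation network, and a
    speech envelope reconstructed from the listener's brain signals by
    some EEG decoder (both assumed to exist elsewhere), amplify the voice
    whose own envelope correlates best with the neural reconstruction."""
    hop = int(frame_s * fs)

    def envelope(x):
        # Crude amplitude envelope: mean absolute value per frame,
        # at the same frame rate as the decoded EEG envelope.
        n = len(x) // hop * hop
        return np.abs(x[:n]).reshape(-1, hop).mean(axis=1)

    scores = []
    for voice in separated_voices:
        env = envelope(voice)
        m = min(len(env), len(eeg_envelope))
        # Pearson correlation: how closely does this voice match what
        # the brain appears to be tracking?
        scores.append(np.corrcoef(env[:m], eeg_envelope[:m])[0, 1])

    attended = int(np.argmax(scores))
    gains = np.full(len(separated_voices), 0.2)     # attenuate the rest
    gains[attended] = 1.0                           # boost the attended voice
    mix = sum(g * v for g, v in zip(gains, separated_voices))
    return mix, attended
```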

Five years ago, measuring brain signals still required invasive methods. Non-invasive attention detection was not achieved until 2014, using an "electrode cap". A further five years went by before a hearing device the size of a coffee bean could both read thoughts and separate and amplify individual voices. At least, that is the claim of the scientists at Columbia University. With this breakthrough, hearing aids should finally be able to shed their image as a prosthesis. After all, glasses have long been socially accepted as a fashion accessory.

Knowledge Base

Online demo: http://naplab.ee.columbia.edu/nnaad.html


Hearing aid (Image: ReSound).

ReSound with a hearing aid app specifically for the Apple Watch. (Image: ReSound).