Real-Life Hearing and Machine Learning: A Review



Authors: James W. Martin, Jr., Au.D.; Wendy Switalski, Au.D., MBA; and Jens Brehm Nielsen, Ph.D.

In the opening scene of the first film in the Lord of the Rings series, an angelic voice speaks these words: “The world is changed. I feel it in the water. I feel it in the earth. I smell it in the air” (Jackson, 2001). I had just read another book on machine learning, and I was pondering how this message speaks to what is happening in hearing aid manufacturing today. Technology has influenced almost every aspect of our lives, from how we buy paper towels to allowing paralyzed individuals to walk again. One example of how technology has changed the automotive industry is the way car manufacturers like BMW and Mercedes are pairing humans with machines to accomplish tasks that in the past would have been impossible for either to perform in isolation (Daugherty & Wilson, 2018). This man-machine collaboration has enabled flexible human-machine teams to work in an almost symbiotic relationship, and these same advances are now becoming part of the hearing amplification landscape for providers worldwide.

To take a step back, the 1980s and 1990s ushered in the modern digital hearing aid, and since then the world of hearing aids has changed significantly (Townend & Nielsen, 2018a). Today, audiologists have access to remote controls, remote microphones, hearing aids that speak in multiple languages, hearing aids that connect to a smartphone or tablet, and hearing aids that play music to help patients manage their tinnitus.

These advancements are pushing the envelope of what is possible with hearing aids, and they have created a new era in which research is moving outside the lab and clinic to focus on real-life hearing. We realized that, unfortunately, having great technology in the lab or clinic isn’t enough. Even when audiologists select the right compression ratio, gain, sound pressure level, and signal-to-noise ratio, the fitting doesn’t always meet patients’ expectations when they are listening in the real world. We needed to look at what was happening outside the clinic. So, we have begun a journey to understand the preferences and intentions of our patients by incorporating auditory ecology into how we look at sound.

So, what is auditory ecology? Auditory ecology is the relationship between the perceptual demands of people and their acoustic environments (Gatehouse, 2016). The listening intention, that is, what our patients actually want to hear, is incredibly important and often goes unrecognized in the lab. How do we cross the threshold of incorporating this new technology into our clinics and practices?

We start by understanding that hearing technology is based on assumptions. These assumptions are incorporated into the hardware and software within the hearing aid. The assumptions dictate the hearing aid settings in order to optimize the patient’s hearing in challenging listening environments. Using this assumption-based approach, we can move our patients even closer to satisfaction. Even at this high level of performance, there will still be listening situations that continue to pose challenges for our patients. To help us predict and proactively address those environments, we often attempt to recreate these challenging listening conditions in our offices by setting up speakers that play an array of sampled/recorded environments. While this process allows us to put the patient in a simulated acoustic scene, we still do not understand the intentions of the patient in these environments.

What if we could give patients the ability to improve their listening experience, based on their preferences and intentions in the environments where they are struggling? By doing this, we could allow for real-time customization. In addition, by sharing the information electronically through the cloud, we could learn from each situation and build better algorithms that could potentially help all patients in similar challenging listening environments across the planet.

So, what concerns do we need to address clinically to move into this new evolution of technology? First, we need to acknowledge and understand that people still struggle to hear, even with the best technology, because their intentions in different environments may not align with general assumptions. Second, we need to recognize the challenge of reporting perception and acknowledge that if a patient cannot articulate the challenges they experience in their environments, it will be difficult for the audiologist to know what fine-tuning adjustments are needed. Finally, even when patients can articulate what they are experiencing, it can still be difficult for clinicians to know exactly what to adjust, given all of the advanced software controls. These scenarios pose opportunities for the new technology within the framework of real-time machine learning found in some modern hearing aids.

As humans, we are constantly influenced by our environment. Exposure to unique environments creates experiences that change the way we learn to react to the world. Learning can be viewed as a change influenced by previous experience, and it is the key to how we adapt to different environments (Mueller & Massaron, 2018). This type of learning is now being accomplished by machines. Let’s look at a brief history of how we arrived at where we are now in artificial intelligence and machine learning.

In 1950, Alan Turing published a paper called “Computing Machinery and Intelligence” and proposed a test criterion for artificial intelligence (AI) now known as the Turing Test (Mueller & Massaron, 2016). The goal of the test was to see whether a computer could communicate so well that a human would fail to realize they were communicating with a computer rather than another human.



In 1956, John McCarthy coined the phrase “artificial intelligence” at a Dartmouth academic conference, and a new field of computer science was born (Mueller & Massaron, 2018). Artificial intelligence research flourished, but progress was limited until around 1980. Then, between 1980 and 2000, integrated circuits transformed artificial intelligence and machine learning from science fiction into science fact.

These technologies can be broken up into three categories:
  1. Artificial Intelligence
  2. Machine Learning
  3. Deep Learning


The goal of artificial intelligence is to make computers that mimic the way the human brain works. IBM has been exploring the phrase “cognitive thinking” instead of artificial intelligence, because it sounds less threatening to consumers (Theobald, 2017). Machine learning is a tool that can surpass human intelligence in its speed and in its ability to suggest precise matches for listening situations in just a couple of seconds; no human can weigh the myriad listening options in an environment in such a short time. In essence, AI identifies the problems, while machine learning and deep learning (also known as neural networks) work together to find solutions to obstacles and challenges.

It is important to realize that merely saying a technology has artificial intelligence does not tell you anything specific about that technology or how it resolves challenges. As humans, we define intelligence in many ways. There are nine different types of intelligence (Tri, 2018):
  1. Naturalistic Intelligence. Individuals with a green thumb who can grow anything.
  2. Musical Intelligence. Individuals who can recognize tone, rhythm, timbre, and pitch and are usually aware of sounds that others simply miss. They can detect, reproduce, generate, and compose music with ease.
  3. Logical-Mathematical Intelligence. Individuals who can carry out complex mathematical calculations and operations with ease.
  4. Existential Intelligence. Individuals who are deep thinkers and contemplate the “why’s” and “how’s” of life.
  5. Interpersonal Intelligence. Individuals who can understand and communicate with others well. They can sense the moods and temperament of others.
  6. Body-Kinesthetic Intelligence. Individuals who possess an almost perfect sense of timing and whose body-mind coordination is excellent.
  7. Linguistic Intelligence. Individuals who can convey complex meanings and concepts and express them using language that is easy for others to understand.
  8. Intra-Personal Intelligence. Individuals who know and are very aware of themselves, their thoughts, and their emotions, and who also help others understand themselves better.
  9. Spatial Intelligence. Individuals who can essentially see things in three dimensions. These are usually artists, painters, and sculptors.
Machine learning is all about systems that can solve problems that have not yet been solved satisfactorily. These systems can recognize complex patterns and make intelligent decisions based on data. Advancements in machine learning have brought us cars that can gauge the distance to other vehicles and adjust their speed accordingly. These machine learning-assisted cars can stop themselves to avoid crashes, identify animals in the road and avoid them, and in some cases even drive and navigate themselves with little interaction from the human driver.

Machine learning is already impacting our lives in ways we probably don’t realize. For example, your smartphone can be unlocked using facial recognition because it has learned to recognize your face. Netflix uses machine learning to track the movies that you watch and then recommends movies that fit similar criteria.

This same machine learning technology has influenced the strategies used in today’s hearing aid technology. For example, last year Widex introduced the world’s first hearing aid to use machine learning to empower patients to make real-time adjustments based on their preferences and intentions in different environments. To achieve this innovation, Widex takes advantage of the Evoke chipset, which uses distributed computing to increase processing power by almost 30 percent while still providing battery efficiency. This chipset enables faster assessment of acoustic environments, so that improved algorithms and strategies can be implemented.

Using this distributed computing approach, along with 2.4 GHz Bluetooth connectivity, the combined power of the hearing aid and a smartphone can be leveraged to incorporate machine learning in real time. Information can then be shared in the cloud so that the algorithms can be continuously improved and sent back to the hearing aid via firmware updates.

Combining machine learning with a simple user interface gives the patient the power to focus on improving the comfort and quality of the sound without having to manipulate numerous adjustment parameters. Using a simple A/B comparison through a smartphone application, the system can automatically learn and improve the patient’s listening comfort and sound quality experience, based on the patient’s preferences and intentions.

Machine learning systems like this are now possible because smartphones have the processing power of laptop computers, which expands the range of algorithms and models that machine learning scientists can develop and deploy.

To better understand the power and application of machine learning in hearing aids, let’s turn our attention to how humans learn. There are four human learning modalities:
  1. Visual Learners. Individuals who learn best through demonstration. Sixty percent of people are visual learners. They take in information better if they see it.
  2. Auditory Learners. Individuals who learn best through listening and by modeling what is heard.
  3. Tactile Learners. Individuals who learn best when they take notes during lectures or when reading something new or difficult. They learn readily with hands-on activities, including writing notes during a lecture.
  4. Kinesthetic Learners. Individuals who learn best when they are involved, when they are doing rather than just watching or listening (although they also learn when those modalities are combined).
Now let’s look at the three mechanisms for machine learning.
  1. Supervised learning is guided by human observation and feedback, like an instructor teaching a student to play the piano or another musical instrument. The instructor makes sure that the student understands what the instrument is and how to use it. Over time, the student can expand what they have learned and continue to advance to more difficult musical challenges. Supervised learning is, in essence, getting the system to learn how to learn: over time, the system takes learned information and uses it to continue learning.
  2. Unsupervised learning relies on clustering data and modifying algorithms based on the system’s initial findings, without any kind of external feedback from humans. This is like a student learning to play the piano on their own. Eventually they learn, but it may take longer than if they were supervised by an instructor.
  3. Reinforcement learning is established over time through trial and error. Video game developers use reinforcement learning as they build their games. The game scenario is reinforced, and the player remembers it to advance further the next time. For example, if a player learning the game is (virtually) walking in a maze, they may have three different directions they can go. If they choose to go left and fall off a cliff, the next time they reach that point in the maze they will have learned by trial and error to choose a direction other than left (a minimal sketch of this trial-and-error idea follows this list).
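To make the trial-and-error idea concrete, here is a minimal Python sketch of a reinforcement-style learner at a hypothetical three-way junction like the maze above. The action names and reward values are invented for illustration; this is not any hearing aid's algorithm.

```python
import random

# Toy reinforcement-learning sketch: an agent at a three-way junction
# (left, straight, right) learns by trial and error which direction to avoid.
# The rewards are hypothetical: going left "falls off a cliff" (-10),
# straight reaches the exit (+10), right leads to a dead end (0).

ACTIONS = ["left", "straight", "right"]
REWARD = {"left": -10.0, "straight": 10.0, "right": 0.0}

values = {a: 0.0 for a in ACTIONS}   # estimated value of each action
counts = {a: 0 for a in ACTIONS}
epsilon = 0.2                        # exploration rate

for episode in range(200):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = REWARD[action]
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(values)
```

After a couple hundred simulated trials, the estimated value of "left" is clearly negative, so the learner stops choosing it except when exploring.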
Machine learning can comprise one or all of the above approaches. The incredible advantage of this technology is that it allows clinicians to collaborate with machines in unique ways that are now integrated into hearing aid technology. Audiologists will not be replaced by technology. However, providers who don’t understand, use, and incorporate technology into their practices will be outperformed by those who do.
Listening Intention
In real-life, a patient’s listening intention varies by the minute, depending on the auditory scenario at the time. Sometimes the patient focuses on overall sound quality. In other cases, they focus on listening comfort. They may focus on lowering the conversation level around them if they are sitting in a coffee shop. If they are listening to music, they may want to accentuate a particular element within the music. The ability to focus on achieving this auditory-related task or to experience an event with a certain level of fidelity is called the listening intention.

Using machine learning allows the hearing aid to meet this listening intention, because the process is driven by the patient and their preference in the moment. The hearing aid learns to adapt quickly using its dynamic and incredibly advanced features. Hearing aid machine learning collects user input through a form of paired A/B comparison of settings. For example, if there are three frequency parameters (low, mid, and high), and each parameter has 13 level adjustments, we get a total of 2,197 combinations of settings. If we then try to sample that space in its entirety with paired comparisons, we get over 2 million possible comparisons, an impossible number of samplings for a patient to experience.
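For readers who want to check the arithmetic, a short Python snippet reproduces the numbers quoted above (13 levels across three bands, compared exhaustively in pairs):

```python
from math import comb

levels_per_band = 13            # 13 level adjustments per frequency band
bands = 3                       # low, mid, and high

settings = levels_per_band ** bands    # 13**3 = 2,197 setting combinations
pairs = comb(settings, 2)              # exhaustive A/B pairs = 2,412,306

print(f"{settings} settings, {pairs} possible paired comparisons")
```

That works out to roughly 2.4 million comparisons, which is where the "over 2 million" figure comes from.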



However, because machine learning is integrated into this process, SoundSense Learn can reach the same outcome in about 20 steps, in a short amount of time (Townend & Nielsen, 2018b). The Widex Evoke has only just begun to harness this power, using the equalizer settings for immediate listening gratification without altering the work and programming that the dedicated audiologist has put into the fitting. So, while the permanent programming of the hearing aid is not altered, the patient still has the power, in real time, to easily refine their acoustic needs and intentions.
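The SoundSense Learn algorithm itself is not described here, but the general idea of converging on preferred settings through a short series of paired comparisons can be illustrated with a simple, hypothetical search loop. In the sketch below, the listener is simulated by a hidden "target" preference, and every name, range, and step size is an assumption chosen for illustration only.

```python
import random

# Illustrative A/B preference loop (not the SoundSense Learn algorithm):
# home in on preferred equalizer settings in about 20 paired comparisons.
LEVELS = list(range(-6, 7))      # 13 hypothetical gain steps per band
BANDS = 3                        # low, mid, high

target = [random.choice(LEVELS) for _ in range(BANDS)]   # hidden preference

def listener_prefers(a, b):
    """Simulated A/B answer: which candidate sounds closer to the preference?"""
    dist = lambda s: sum((x - t) ** 2 for x, t in zip(s, target))
    return a if dist(a) <= dist(b) else b

best = [random.choice(LEVELS) for _ in range(BANDS)]
for step in range(20):
    # Propose a nearby alternative and keep whichever the listener prefers.
    candidate = [min(6, max(-6, g + random.choice([-2, -1, 1, 2]))) for g in best]
    best = listener_prefers(best, candidate)

print("hidden preference:", target, "learned settings:", best)
```

Even this naive loop tends to move markedly closer to the hidden preference within 20 comparisons; a proper machine learning approach can converge far more reliably and efficiently.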

An additional learning application that Evoke offers is the SoundSense Adapt feature. This feature learns from the adjustments the patient makes in different listening environments via what we call the preference control. The preference control adjusts multiple settings to give the patient more audibility or comfort, instead of simply making sounds louder like a traditional volume control. As new users adapt to their hearing technology, they can make multiple adjustments to teach SoundSense Adapt their preferences within a specific sound environment. SoundSense Adapt ensures that, when a patient increases or decreases the sensitivity of the devices, the change is kept within a range that still provides audibility and comfort.
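As a purely hypothetical illustration of that idea (not Widex's implementation), the sketch below stores a learned offset per sound environment, blends in each new patient adjustment, and clamps the result to a bounded range so audibility and comfort are preserved. All names, rates, and limits are invented.

```python
from collections import defaultdict

# Hypothetical environment-aware preference control: learn an offset per
# listening environment from the patient's adjustments, clamped to a safe range.
MIN_OFFSET_DB, MAX_OFFSET_DB = -6.0, 6.0   # assumed safe adjustment range
LEARNING_RATE = 0.3                        # how quickly preferences are absorbed

preferences = defaultdict(float)           # learned offset per sound environment

def record_adjustment(environment: str, adjustment_db: float) -> float:
    """Blend the patient's latest adjustment into the stored preference."""
    updated = preferences[environment] + LEARNING_RATE * adjustment_db
    preferences[environment] = max(MIN_OFFSET_DB, min(MAX_OFFSET_DB, updated))
    return preferences[environment]

# Example: repeated small cuts in a noisy cafe gradually shift the stored preference.
for _ in range(5):
    record_adjustment("cafe", -2.0)
print(preferences["cafe"])   # settles toward a comfortable, clamped offset
```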

To test our machine learning applications, Widex conducted a research project using 19 patients with mild-to-moderate hearing loss. The patients were put through a double-blind test using nine different sound samples. Each patient was assigned an auditory intention task: to optimize sound quality, increase listening comfort, or improve speech intelligibility. The recorded soundscapes included a hearing aid with no classifier, a hearing aid with an active classifier, and a hearing aid with SoundSense Learn.

We discovered that SoundSense Adapt increased listener comfort and that listeners preferred the hearing aid parameters achieved by SoundSense Learn over the hearing aid with the active classifier alone by as much as 84%. When evaluating sound quality using music samples, 89% of the participants preferred the settings based on SoundSense Learn.

The Widex study showed that SoundSense Learn helped patients increase sound quality and listening comfort in dynamic environments, based on individual auditory intentions. Ultimately, and most importantly, SoundSense Learn helped patients in environments that were previously a challenge for them.
Three Aspects of Machine Learning


Modern hearing aid processing power is vast compared to a decade ago. Integrating machine learning into hearing aids changes the way patients interact with their hearing aids and the real world, providing them with opportunities for immediate, personalized improvement based on their intentions and preferences in the moment, as well as long-term possibilities to make their hearing devices even smarter in the future.
James W. Martin, Jr, Au.D. is the Director of Audiological Communication for Widex USA.

Wendy Switalski, Au.D., MBA is the Director of Professional Development for Widex USA.

Jens Brehm Nielsen, Ph.D. is Architect of Data Science & Machine Learning for Widex A/S, Lynge, Denmark.
References
Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI. Boston, MA: Harvard Business Review Press.

Gatehouse, S. (2016, June 13). Auditory ecology and its contribution to quality of life, with emphasis on the individual. Retrieved from Hearingirc.com

Jackson, P. (Director). (2001). The Lord of the Rings: The Fellowship of the Ring [Motion picture].

Mueller, J. P., & Massaron, L. (2016). Machine Learning For Dummies. Hoboken, NJ: John Wiley & Sons.

Mueller, J. P., & Massaron, L. (2018). Artificial Intelligence For Dummies. Hoboken, NJ: John Wiley & Sons.

Theobald, O. (2017). Machine Learning for Absolute Beginners. Scatterplot Press.

Townend, O., & Nielsen, J. B. (2018a). Real-life applications of machine learning in hearing aids. Hearing Review.

Townend, O., & Nielsen, J. B. (2018b). SoundSense Learn: Listening intention and machine learning. Hearing Review.

Tri. (2018). The nine different types of intelligence. Retrieved from Examinedexistence.com