Using Real-life Data to Improve Real-life Hearing



Authors: Laura Winther Balling, PhD, and Oliver Townend, BSc

Recent years have seen so much talk about the power of data that it has almost become a cliché. A range of products within the hearing aid industry are discussed with reference to their use of data and artificial intelligence, without this necessarily having a connection to the end-users’ real hearing lives. By contrast, what we will discuss here is how data from the real-life fitting and use of hearing aids contribute to understanding the hearing lives of real users, the work of the audiologist, and the development of better, more intelligent hearing aids.

In this article, we show and discuss how the secure and responsible use of data from hearing aid end-users’ real lives plays into the end-users’ hearing outcomes. We will also show how data have led, and will lead, to the development of modern hearing aid features, with a focus on real-life applications. We will discuss the history of learning from data from hearing aid fitting software and some trends in the data being generated through end-users’ use of hearing aid apps. Along the way, we will address the balance between end-user privacy and improvement of the hearing solutions and finally peek at the future of data in the hearing aid industry.

The History of Learning from Data
For many years, individual audiograms and hearing aid settings have been recorded, initially on paper, and kept at the clinical level. Sometimes, end-users would keep a written hearing diary of their daily use, including problems they faced with hearing, which they could review with their audiologist to assist in rehabilitation with amplification. The quality of these data and how they were used varied, but this is an early example of how real-life data can assist hearing rehabilitation. When hearing aids themselves gained datalogging capabilities, it was a leap into the future compared to what had come before.

Datalogging is a recording function that resides inside the hearing aid, capturing multiple statistics about hearing aid use in the real world. The datalogging function was initially as simple as collecting statistics on ‘hours of use’ and ‘time spent in each program’. The mid-2000s brought advancements in the datalogging function, for the first time recording both long-term and short-term data. The long-term datalogging showed, in addition to statistics on time of use, the percentage of time the wearer had spent in each listening environment.
Figure 1. Pie chart showing percentage of time end-user spent in each environment, as classified by the hearing aid (from Widex Compass GPS fitting software).
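As a simple illustration of the idea behind long-term datalogging, the log can be thought of as accumulated durations per sound class, summarized as the percentages shown in a pie chart like Figure 1. The sound-class names and numbers below are invented for illustration and do not reflect the actual fitting-software data format:

```python
from collections import Counter

def summarize_log(environment_seconds: Counter) -> dict:
    """Convert accumulated per-environment durations into the
    percentage breakdown shown in a datalogging pie chart."""
    total = sum(environment_seconds.values())
    return {env: round(100 * secs / total, 1)
            for env, secs in environment_seconds.items()}

# Hypothetical long-term log: seconds spent in each sound class
log = Counter({"Quiet": 18_000, "Speech": 9_000, "Speech in noise": 4_500,
               "Noise": 3_000, "Music": 1_500})
print(summarize_log(log))
# → {'Quiet': 50.0, 'Speech': 25.0, 'Speech in noise': 12.5,
#    'Noise': 8.3, 'Music': 4.2}
```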


Hearing aids such as the Widex Inteo, for example, could also make a short data recording of the external environment. This feature was able to capture acoustic information, possibly when the end-user was having difficulties, to help the audiologist understand what was happening and, in turn, counsel or make hearing aid adjustments (Kuk & Bulow, 2007). Now, objective real-life data could be used in the clinic to assist and improve fitting and rehabilitation. Today, datalogging continues to facilitate analysis of listening environments and to link usage patterns to sound classes the end-user spends time in, to enrich clinical decisions and rehabilitation.

As is common with apps on your PC or phone, anonymous usage data are gathered and shared with the software developer to assist in bug fixes and improvements. These data contain no personal information from either end-users or audiologists. They assist in making design improvements and fixes to the software to continually provide incrementally better products. Hearing aid manufacturers use data throughout the whole product development process, from defining a feature to designing it and improving it after it has been released to market. When a feature is considered for an upgrade, data are gathered to assess how the feature is used and to identify where changes could be made to create a better user experience. One example involves changes to personal program-saving in the Widex EVOKE app: data analysis identified that end-users were finding it difficult to save a personal program, so the design was changed to make it more user friendly.
Future Uses of Compass GPS Data and Data Consent
Besides anonymous data about how GPS is used, other data could be useful for developing future hearing aid technologies. However, these data points may not always be completely anonymous, and therefore consent must be given to share them. The usefulness of data should always be weighed against people’s right to privacy. The General Data Protection Regulation (GDPR) is the most important change in data privacy regulation in 20 years (European Commission, 2019). While it is an EU regulation, its reach is felt worldwide, and it must be followed by any organization operating within the EU or with EU citizens. GDPR makes the rules very clear on consent to gather and use data, the right to access data, and the right for data to be forgotten. GDPR applies to any data that can be linked to an identifiable individual. Even though a hearing aid manufacturer does not know an individual’s name or date of birth, the data collected could sometimes still be identifiable. For example, Widex regards an audiogram as a fingerprint for the ear, which could therefore be identifiable. To protect the individual’s privacy while enabling them to share data with Widex, a secure and encrypted data exchange was set up, and Widex introduced an additional data consent stage in GPS.

Many people enjoy the feeling of giving something to help others: financial donations to a charity, volunteering time, or donating blood. Similarly, hearing aid manufacturers are often contacted by end-users who would like to share their experiences to help others, through feedback on their products and by participating in research. Consenting to share fitting data from GPS is another way that end-users can give data to help others. To ensure that data are always secure and protected, most hearing aid manufacturers maintain high standards in the security and encryption of data, and only allow access to a select few employees, with specific tasks related to these data. Data are pseudonymized and withdrawal of consent is possible at any time. One example of how collective data have been used to give back improvements to users of Widex’s products is from the machine learning feature, called SoundSense Learn.
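Pseudonymization of the kind mentioned above can be illustrated with a generic sketch. This is not Widex's actual implementation, and all names are our own; it simply shows the general technique of replacing a direct identifier with a keyed hash, so that records remain linkable for analysis but cannot be traced back to a person without the separately stored key:

```python
import hashlib
import hmac

def pseudonymize(record_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same ID always maps to the same pseudonym, so records can be
    linked across sessions for analysis, but the mapping cannot be
    reversed without the key, which is stored separately."""
    return hmac.new(secret_key, record_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

# Illustrative only: a real key would live under separate access control.
key = b"example-secret-key"
print(pseudonymize("fitting-session-0042", key))  # stable 16-char pseudonym
```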
Data From Real-Time Machine Learning
Most hearing aid manufacturers use machine learning during development. However, it is difficult to provide one broad-based example of the use of machine learning, because each manufacturer implements it in a different way. Therefore, we will focus on one implementation strategy. SoundSense Learn (SSL) is a feature in the EVOKE app that uses real-time machine learning to allow end-users to adjust their hearing aids. The anonymous usage data from SSL may be enriched if the end-user consents to linking the EVOKE app data about their personal programs to their hearing aid fitting session in Widex Compass GPS. Real-Life Insights (RLI), discussed in more detail later, shares consented data from the end-user’s EVOKE app with their audiologist back in the clinic.

SoundSense Learn enables end-users to adjust their hearing aid sound in situations where they are not entirely satisfied with the automatic settings in the hearing aid. Such situations arise because, although modern hearing aids adjust to the acoustic environment in an intelligent way, they cannot always predict the end-user’s specific intention in a specific situation, making personal adjustments relevant. SoundSense Learn uses a machine-learning algorithm that asks end-users to listen to a series of pairwise A-B comparisons of different gain settings, adjusted via three bands, to uncover the desired settings in a given listening situation. When the settings are found, they can be used in the moment, and may be saved as personal programs for future use in the same or similar environments.
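To make the pairwise-comparison idea concrete, here is a greatly simplified sketch in Python. All names are ours and nothing here reflects Widex's implementation: the real SoundSense Learn uses Gaussian processes and active learning (Nielsen et al., 2014) precisely so that it does not need to compare every candidate setting, whereas this naive version does.

```python
def ab_search(prefer, candidates):
    """Naive pairwise search: the incumbent setting faces every other
    candidate once, and whichever of the pair the listener prefers
    survives. This brute-force scan costs one comparison per candidate,
    which is exactly the cost an active-learning approach avoids by
    choosing only informative comparisons."""
    incumbent = candidates[0]
    for challenger in candidates[1:]:
        incumbent = prefer(incumbent, challenger)
    return incumbent

# Hypothetical listener whose (unknown to the algorithm) ideal gain
# adjustment in the three bands is (+3, 0, -2) dB; they always pick
# whichever of A and B sounds closer to that ideal.
ideal = (3, 0, -2)
def listener(a, b):
    dist = lambda s: sum((x - y) ** 2 for x, y in zip(s, ideal))
    return a if dist(a) <= dist(b) else b

# Candidate settings: a coarse grid of adjustments in three bands (dB).
grid = [(lo, mid, hi) for lo in range(-6, 7, 3)
                      for mid in range(-6, 7, 3)
                      for hi in range(-6, 7, 3)]
print(ab_search(listener, grid))  # → (3, 0, -3), the grid point nearest the ideal
```

Even this toy version shows why the comparisons matter: the listener never states their ideal setting directly, yet the sequence of A-B choices steers the search to the closest available setting.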

Here, data come into play in several ways: SoundSense Learn was developed based on data (Nielsen et al., 2014), and SSL operates using data in the form of responses from the end-user. Finally, SoundSense Learn generates data itself, including the final settings, usage, situations, and intentions associated with the individual SoundSense Learn program. These data are of interest to researchers tasked with improving the products and to audiologists who want to improve patient satisfaction. It is this final aspect that we will discuss further. Widex EVOKE with SoundSense Learn was introduced in the spring of 2018. In the fall of 2018, company researchers were able to explore the gain settings and usage of SoundSense Learn programs that end-users created in the EVOKE app (Balling & Townend, 2018). The lack of patterns or clusters of settings (Fig. 2) indicates that end-users need a sophisticated tool, like SoundSense Learn, to reach these highly individual settings. When asked, most end-users responded that SoundSense Learn helped them in specific situations and that they would recommend it to others (Balling, Townend, & Switalski, 2019).
Figure 2. SSL adjustments in the three frequency bands in a sample of 1,860 personal programs (Balling et al., 2019). Each dot represents a unique program; darker colors indicate overlapping programs.


These data drove improvements to SoundSense Learn, as they also contained all the A-B comparison settings used along the way. The data enabled developers to fine-tune the machine-learning algorithms to identify the ideal settings for the individual end-user faster and more efficiently. Deep analysis of the comparisons chosen by the algorithm over thousands of sessions was very fruitful, and this work increased the efficiency of the algorithm significantly. Figure 3 illustrates the number of comparisons needed to identify the ideal settings in a given situation. The comparisons or iterations of the algorithm (x-axis) are plotted against the progress to 1.0 (y-axis), which indicates that the algorithm has reached full convergence, i.e. the result is as close to the intention of the end-user as possible. SoundSense Learn version 1.1, in red, needed a median of 17 comparisons to converge. In green, we show the improved version (1.2) needing a median of just 12 comparisons for the same result. Also of note is the initial speed of convergence: within five comparisons, SoundSense Learn version 1.2 reaches around 0.75 convergence. In practical terms, this means most end-users experienced improvements in just a few comparisons.
Figure 3. Progression of SSL towards the optimal setting as a function of the number of iterations/comparisons made. It shows both the median performance (i.e. the typical user) and the areas covering 50% and 95% of users. SSL v1.1 is shown in red; SSL v1.2 is shown in green (improved).
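A convergence score like the one on Figure 3's y-axis can be sketched as follows. We do not know the exact definition Widex uses; this is one plausible normalization, mapping a session's per-iteration distances from the final setting onto a 0-1 scale:

```python
def convergence_curve(best_dists):
    """One plausible 'progress towards convergence' metric: 0.0 is the
    starting point, 1.0 means the final, converged setting has been
    reached. Input: per-iteration distance of the current best setting
    from the session's final setting (hypothetical units)."""
    start, final = best_dists[0], best_dists[-1]
    span = start - final
    return [1.0 if span == 0 else (start - d) / span for d in best_dists]

# Hypothetical session: distances shrink as comparisons accumulate.
print(convergence_curve([10.0, 6.0, 3.0, 1.0, 1.0]))
```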


SoundSense Learn version 1.3 (released February 2019) adds questions on situation (‘Where are you?’) and intention (‘What is your hearing goal?’), which the end-user answers before starting the A-B comparisons. Both questions are answered by choosing from pre-defined options. The different situations and intentions and their distributions, shown in Figure 4, are based on a sample of 13,813 SoundSense Learn programs created by 5,448 end-users. We see in ‘Situations’ that most of the programs – almost 50% – are created at home, with other situations more evenly distributed. The dominance of the home setting is probably partly driven by this being the dominant situation for this user group (Jensen et al., in press), many of whom are likely to be retired. It is possibly also due to it being easier to create SoundSense Learn programs at home than in other, more dynamic, settings. Looking at ‘Intentions’, we see that conversation, TV, noise reduction, and music constitute the majority of the intentions indicated by end-users. Interestingly, conversation is the most frequently indicated intention, reflecting the importance of the ability to communicate in everyday life. This occurs despite the fact that performing the A-B comparisons in a conversation setting is likely more difficult than, for instance, when listening to media.

Figure 4. The distribution of programs with respect to situation (left) and of intentions as a proportion of the total number of intentions (right). n = 13,813 unique programs.


An additional aspect that is interesting to explore, in order to understand SoundSense Learn end-users’ auditory realities, is the combination of situations and intentions. Figure 5 shows four groups of situations (home, work, restaurants and noisy venues, and transport) and the top five intentions in these situations. There is variation in the prominence of the different intentions across the different situations, which is further evidence of the wide variety of situations in which personalization of sound is relevant. This is in line with the variation of gain settings (Fig. 2), indicating substantial variation in the sound profiles that different end-users prefer in different situations. Figure 5 also shows that the intentions chosen in the different situations are generally in line with what we would expect: for instance, TV is a major intention in home settings but not elsewhere, focus is more frequent at work, and conversation and noise reduction are common in restaurants and other noisy settings.
Figure 5. The percentage of different intentions subdivided for four major groups of situations
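The kind of cross-tabulation behind Figure 5 is straightforward to sketch. The records below are invented for illustration, standing in for the real program data:

```python
from collections import Counter

# Hypothetical program records: (situation, intention) per saved
# program, standing in for the 13,813 real programs in the sample.
programs = [("Home", "TV"), ("Home", "Conversation"), ("Work", "Focus"),
            ("Restaurant", "Noise reduction"), ("Home", "Music"),
            ("Restaurant", "Conversation"), ("Home", "TV")]

def crosstab(records):
    """Count intentions within each situation, as in a
    situation-by-intention breakdown."""
    table = {}
    for situation, intention in records:
        table.setdefault(situation, Counter())[intention] += 1
    return table

# Report each situation's intentions as percentages of that situation.
for situation, intents in crosstab(programs).items():
    total = sum(intents.values())
    shares = {i: f"{100 * n / total:.0f}%" for i, n in intents.items()}
    print(situation, shares)
```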


Although the distribution of situations and intentions in the SoundSense Learn data is generally in line with what we know about the auditory reality of hearing aid end-users (Jensen et al., in press), there are at least two characteristics of the SoundSense Learn data that must be taken into account in our interpretation. First, SoundSense Learn programs are generally constructed and used in situations that are, in some way, not entirely satisfactory to the end-user. This means that they potentially represent only a subset of all the situations in which hearing aids are used, although the subset they do represent is likely to consist of situations that are difficult for hearing aid end-users and therefore of central interest to hearing aid development. Second, SoundSense Learn may not be suitable for all end-users or for all situations; for example, the phone intention is likely to be underrepresented in these data, compared to real life, given the difficulty of conducting A-B comparisons while also keeping a telephone conversation going.

While these data do help us understand the auditory realities of end-users, the primary purpose in collecting them is concrete development, rather than more abstract academic understanding. The knowledge we gain about end-users’ preferences for the different situations and intentions serves as input in the continued development of SoundSense Learn.
Real-Life Insights
Another central line of development is getting the audiologist into the loop of information about the personal programs that their patients create. Until now, audiologists have not had direct access to information about personal programs created by end-users within their care. In GPS version 3.4, information on personal programs will, with end-user consent, be included in the hearing-aid log that the audiologist can inspect. Real-Life Insights (RLI) realizes the ambition that this information can form a basis for understanding an end-user’s real-life hearing and act as input for counselling. Additionally, trends in settings across personal programs may be used for more general adjustments of the hearing-aid settings. Overall, RLI aims to enrich and strengthen the relationship between end-user (patient) and audiologist, with data-driven insights delivered in a user-friendly and informative display (Fig. 6).
Figure 6. GPS v.3.4 (screen shot subject to change). Real-Life Insights display includes personal programs created, names and icons of programs, date of creation of program, number of times used and corresponding program settings.


Like the other data exchanges discussed in this article, RLI is not possible without explicit consent being given to access those data. As one example of safeguards, Widex always ensures high levels of encryption and secure movement of data. As shown in Figure 7, specific consent is needed in all data exchanges: each exchange has a corresponding consent for any possible data connection between Widex, audiologist, and end-user.
Figure 7. Connection map of consent and data: each connection needs a corresponding consent, between Widex, HCP, and end-user (EU). PP = personal program. FSW = fitting software.


Conclusion
The use of real-life data in audiology has come a long way since hand-written end-user diaries and has a long future ahead. Hearing aid manufacturers are beginning to use real-life data to benefit end-users and audiologists. For this future to be possible, trust and respect for the individual's data are essential.

Laura Winther Balling holds a PhD in psycholinguistics and has done extensive research on spoken and written language comprehension. She now works as an Evidence and Research Specialist at Widex.

Oliver Townend holds a BSc (Hons) in Audiology from the University of Bristol. Previous clinical roles at Charing Cross Hospital, London, UK, led him to work for hearing aid manufacturers in European and Asian-Pacific markets. He currently works as a Senior Audiological Specialist for Widex.
References
Kuk F, Bulow M. 2007. Short-term Datalogging: Another Approach for Fine-tuning Hearing Aids. Hearing Review, 14(1): 46-53.

European Commission. 2019. EU data protection rules. ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en. Accessed 5 Aug. 2019.

Nielsen JBB, Nielsen J, Larsen J. 2014. Perception-based personalization of hearing aids using Gaussian processes and active learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(1):162-173.

Balling LW, Townend O. 2018. Real-life benefits of Widex EVOKE: An early look at end-user survey results. WidexPress. http://webfiles.widex.com/WebFiles/WidexPress41.pdf. Published October 2018.

Balling LW, Townend O, Switalski W. 2019. Real-Life Hearing Aid Benefit with Widex EVOKE. Hearing Review 26(3): 30-36.

Jensen NS, Balling LW, Nielsen JBB. In press. Effects of personalizing hearing-aid parameter settings using a real-time machine-learning approach. Proceedings of the 23rd International Congress on Acoustics (ICA 2019), Aachen, Germany.