AI and Machine Learning - Intelligent Today Smarter Tomorrow (Recording)
Video Transcription
Welcome, everyone, to the webinar on AI and Machine Learning: Intelligent Today, Smarter Tomorrow, sponsored by Widex. We are thrilled you could be here today to learn more about artificial intelligence and machine learning, as well as real-world applications in hearing aid technology, including how Widex USA is using this technology. Your moderators for today are me, Fran Vinson, Director of Membership and Marketing, and Elizabeth Smith, Membership Coordinator. Our expert presenter today is audiologist James W. Martin, Jr., Au.D., Director of Audiological Communication at Widex USA. James has worked on the clinical and manufacturing sides of audiology for more than 30 years. He earned his Master of Audiology from the University of Tennessee in Knoxville and his doctorate from the Arizona School of Health Sciences. James is passionate about encouraging clinicians, as well as patients, to become the best version of themselves. He was also the winner of the American Speech and Hearing Association's Editor's Award for Published Research in Auditory Neurophysiology. We're so excited to have James as our presenter today, but before we get started, just a few housekeeping items.

Please note that we are recording today's presentation so that we may offer it on demand through the IHS website in the future. This webinar is available for one continuing education credit through the International Hearing Society. We've uploaded the CE quiz to the handout section of the webinar dashboard, and you may download it at any time. You can also find the quiz and more information about receiving continuing education credit at our website, IHSinfo.org. Click on the webinar banner on the homepage or choose Webinars from the navigation menu. You will find the CE quiz along with information on how to submit your quiz to IHS for credit. If you'd like a copy of the slideshow from today's presentation, you can download it from the handout section of the webinar dashboard, or you can access it from the webinar page on the IHS website. Feel free to download the slides now. Tomorrow, you will receive an email with a link to a survey on this webinar. It's brief, and your feedback will help us create valuable content for you moving forward. Today's presentation is sponsored by Widex USA and represents their view on industry topics, trends, and changes. The content of this webinar has been developed especially for you by Widex and may not necessarily reflect IHS policy and stance on hearing healthcare issues.

Today we will be covering the following topics: what artificial intelligence and machine learning are, the growth of machine learning, learned versus learning, how humans learn, how machines learn and the application of machine learning, and the first real-time machine learning application in a hearing aid. At the end, we'll move on to a Q&A session. You can send us a question for James at any time by entering your question in the question box on your webinar dashboard, usually located to the right or top of your webinar screen. We'll take as many questions as we can in the time we have available. Now I'm going to turn it over to James, who will guide you through today's presentation. James?

Good afternoon. I'm glad everybody can be here today. We're excited to be a part of this wonderful event, and I'm excited to be here talking to you about machine learning and the intelligent-today, smarter-tomorrow solution that we have moving forward.
Before we get started, I want to take a minute and read something that I think is very powerful in connecting us to where we are today. Machine learning, as we have seen over the years with technology, has evolved. Today there's a symbiotic collaboration between clinicians and machines, and it's now part of our industry's future. A hearing healthcare professional will not be replaced by technology. However, hearing healthcare professionals who don't understand, use, and incorporate technology into their practice will be outperformed by those who do. Technology has really influenced almost every aspect of our lives, from how we buy paper towels to allowing individuals who are paralyzed to walk again. Artificial intelligence, or AI, seems to be appearing everywhere. As a matter of fact, BMW and Mercedes are actually pairing humans with machines to accomplish tasks in the automotive industry that would have been impossible before. This collaboration that is now taking place enables these human-machine teams to work in a symbiotic relationship and has actually transformed the way we move forward. This collaboration is now part of the hearing amplification landscape. Now, AI has many applications, and it can feel at times like a marketing buzzword attached to products, names, and descriptions. It can become difficult for us as hearing healthcare professionals, and in turn our customers, to understand what an AI or artificial intelligence tag means and what it can do. So we're in an evolution of not only the technology itself, but how we approach technology. So today I want to walk us through some exciting new innovations in the hearing aid industry and what they mean for how we will approach patients and even how patients will approach us with the future challenges that they have.

So before we get started, I want to take a minute and have each one of you listening do a little task for me. If you have a piece of paper and a pen handy, I would like you to write down the answer to this question I'm going to ask you. I would like you to take a minute and write down in detail what you would put in your favorite ice cream sundae. What would be your preferences when building your ice cream sundae? Would you use three scoops of ice cream? Would you use one scoop? Would you use vanilla? Or would you use all chocolate? Would you use sprinkles or bananas or blueberries or strawberries? Would you put marshmallows on it, or fudge, hot fudge, or caramel? What would you use to build your most decadent ice cream sundae?

Now, as you think about that, and think about all of us on this call, or even the people among our family and friends, everybody has a preference for what they like. Having that preference means they're going to choose something that maybe we wouldn't choose. Maybe someone says their favorite ice cream sundae would be just a scoop of ice cream and a cherry on top. Some would put a banana. Some would have different flavors of ice cream. Some would put strawberries and cherries. I have a friend who actually puts a raw egg on his sundae. That's his preference. That's the way he likes to eat his sundaes, for the protein. Now, I would probably never dare to eat that sundae, because it is not my preference, but it is his. I may try it, but we all have our own preferences and intentions in the things that we like. What we're finding from an audiological standpoint is that people have different preferences for what they like to listen to as well.
If you like to listen to jazz and so do I, your preference for how loud you like to listen to that jazz music may be very different from mine. I may listen to it at 110 dB. You may listen to it at minus 20 dB. It's the same piece of music, but the preference is very different. In technology, particularly hearing technology, a lot of what we have built in the lab has been based on assumptions, based on what we think and on how the hearing aids have been set to be triggered on or off or changed in certain environments. Those are all based on things we've learned in the lab. It may not be what our patients like. So it's our intention, as we move forward, to really learn what our patients' preferences and intentions are when it comes to how they like to hear. Now, the challenge with this is that it means working through this outside of the clinic, outside of our office, outside of a testing suite, outside of the research lab, to figure out those solutions. So today I'd like to offer that I think we're moving forward in a way that we can begin to understand our patients' preferences and intentions and empower them with their hearing like never before.

Now, we have spent a lot of time at Widex looking at the nature of the hearing process. We call that the elements of hearing. You've all probably heard and seen something similar to this, in that understanding hearing loss, testing, and evaluation is critical for us to begin to move our patients forward in this new hearing journey. But now that hearing aids are so smart, we're looking at not just the testing aspect but the acoustic scene, the acoustic journey that our patients are taking. We even understand that how we couple that hearing aid to their ear can impact that acoustic scene or journey: whether we fit them with an open fit, a closed dome, or a more occluding fit will change the way they perceive that acoustic environment. And with that, we have needed to learn, and have learned, more about the auditory processes that happen when we're hearing. In particular, understanding auditory cognition, what that means for a patient, and how they capture sound best. What kind of features and settings do we need to give those patients who may have more auditory challenges? Do we need to give them more fast-acting compression or slow-acting compression? These are all elements of hearing that we understand clinically because we have worked through them. We've seen patients. We've worked with patients to evaluate this. But we need to really give our patients the best opportunity to capture sound and to make a difference for people. We really want to make sure we give them the opportunity to capture as much sound as possible. And they want and need to hear more. So we want to understand and have better insights, beyond just hearing loss, into what our patients need and what's relevant for them in their hearing life.

So we need to step back and maybe move in a different direction than we have before. Because we need to bridge the gap we previously had, one that sometimes caused a chasm between our patients' real-world experience, intentions, and preferences and what we as hearing healthcare professionals use as our expertise. How, who, or what an individual wants to hear is described as their listening intention. And the listening intention of hearing aid users is incredibly important and often unrealized in the lab when we're developing this technology.
So our challenge is to uncover and meet those listening intentions through a hearing aid in the real world. I'm sure most of you working with patients have heard this: "If everybody spoke like you, I wouldn't need a hearing aid," or "I hear great in your office, but when I step outside, I begin to have challenges." Most of us could make our patients very happy if they lived in our office and that was the only acoustic environment they had. We could set that up very, very nicely. You know, we now have speakers in our offices that allow us to play sounds and cafe noises and restaurants and all kinds of background music to simulate the real world. But what we need is a better assessment of what's happening in the real world, so that we can build a smarter system that allows our hearing aids to function and give patients the best opportunity to hear in the real world, as well as help us as clinicians move our patients forward in their hearing journey.

And to do that, we had to truly look at the different dynamics in the environment. We call this the auditory ecology. What is happening? What is the relationship between the noise in a restaurant, the background music in a restaurant, and our patient's ability to hear in these environments? So the auditory ecology really is the relationship between an acoustic environment and the perceptual demands of the people in those environments and what they need to capture those sounds. And again, everybody has a preference. So taking the time to understand, measure, and look at those environments has helped us to move forward and develop technology that hopefully will change the way we help our patients and the way they interact with us. Our hope is that this technology will then turn our patients into solution bearers. Instead of us having to clinically try to pull out the situations, what the environment was like, and when they struggled by asking those questions, now, by having this collaboration with technology, we can actually see what the patient likes in that environment and how they like to hear, and then build a smarter system and have the system continue to learn over time.

Now, that's a big task. And in order to do that, we really need to have a smarter system. As technology advances, our devices are getting smarter. One implementation of this that we're using is something called distributed computing. Distributed computing is a way of allowing networked systems to share information with one another to build and identify a better solution. So in this particular case, the accelerated core and the flexible core would be part of the hearing aid; that's the chipset in the hearing aid. Tethering the hearing aid to a smartphone via 2.4 GHz Bluetooth allows these systems to work in tandem. It allows the smartphone to do some of the heavy lifting and the computational processes that need to happen in order to accurately assess the environment and give our patients these opportunities to hear. But it goes even further than that. It allows us to take that information about what people are experiencing all over the world and share it to the cloud, learning in these different conditions what they like to hear and how they like to hear it, and, from a development side, to build smarter algorithms and smarter systems in the cloud that can then be pushed down to the smartphone and, consequently, into the hearing aid. So it's a learning system. It's allowing the technology to build and be smarter over time.
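As a rough illustration of the distributed-computing idea described above, the sketch below models three tiers: a hearing-aid core that extracts simple features, a smartphone that carries the heavier computation, and a cloud service that aggregates anonymized results and can push refined defaults back down. The class names, methods, and the simple gain rule are illustrative assumptions added for this recording page; they are not Widex's actual chipset architecture or APIs.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class SoundFrame:
    """A short slice of sound levels (dB SPL) captured by the hearing aid."""
    levels_db: List[float]

class HearingAidCore:
    """Low-power tier: extracts lightweight features and applies settings."""
    def __init__(self) -> None:
        self.gain_db = 0.0

    def extract_features(self, frame: SoundFrame) -> Dict[str, float]:
        return {"avg_level_db": mean(frame.levels_db)}

    def apply_settings(self, settings: Dict[str, float]) -> None:
        self.gain_db = settings["gain_db"]

class Smartphone:
    """Middle tier: does the heavier computation the hearing aid cannot."""
    def compute_settings(self, features: Dict[str, float], preference: str) -> Dict[str, float]:
        # Hypothetical rule: loud scenes get less gain; a "comfort" preference trims it further.
        gain = 10.0 if features["avg_level_db"] > 70 else 20.0
        if preference == "comfort":
            gain -= 5.0
        return {"gain_db": gain}

class CloudService:
    """Top tier: aggregates anonymized reports to refine future defaults."""
    def __init__(self) -> None:
        self.reports: List[Dict[str, float]] = []

    def upload(self, features: Dict[str, float], settings: Dict[str, float]) -> None:
        self.reports.append({**features, **settings})

    def improved_default(self) -> Dict[str, float]:
        # A smarter starting point learned from everyone's uploads.
        return {"gain_db": mean(r["gain_db"] for r in self.reports)}

# One round trip through the three tiers.
aid, phone, cloud = HearingAidCore(), Smartphone(), CloudService()
features = aid.extract_features(SoundFrame(levels_db=[72.0, 75.0, 71.0]))
settings = phone.compute_settings(features, preference="comfort")
aid.apply_settings(settings)      # refined settings pushed down to the hearing aid
cloud.upload(features, settings)  # anonymized result shared up for network-wide learning
print(aid.gain_db, cloud.improved_default())
```

The point is only the division of labor the presenter describes: light processing on the aid, heavy lifting on the phone, and population-level learning in the cloud that can flow back down to the device.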
These are some of the technological advances that are needed to do this in real time. We need a fast chipset. We need a sampling frequency broad enough to give us all the sounds we're listening to. We need sufficient bit depth, which reflects how accurately an acoustic signal is replicated in a digital format. These are some of the fundamentals of what would be needed to implement this system. And it's important, because in order to really produce what we want and integrate this machine learning into the landscape of what we do clinically, we need a smart system. We'll come back to that in a little bit and unpack it.

But before we go any further, I want to take a minute and give you a brief history of artificial intelligence and machine learning so that we are all on the same page. In 1950, Alan Turing, a mathematician and computer scientist, published a paper entitled Computing Machinery and Intelligence. In this paper, he proposed, via a test that he developed, that a computer could be considered intelligent if a human judge can't tell whether he or she is interacting with a human or a machine. In the setup, a human judge exchanges typed messages with both a human and a computer in separate rooms. If the computer could trick the judge into thinking he was communicating, typing, or texting with another person, then the computer was thought to have won and to be intelligent. With this idea in mind, John McCarthy and other researchers began to throw around the phrase artificial intelligence in 1956. That same year, at the Dartmouth conference, these computer scientists came together, rallying around the thought: could a computer be intelligent? They settled on the phrase artificial intelligence, and the field of artificial intelligence was born.

At that time, a lot of research began to flourish, looking at and thinking about this idea of computers being intelligent and what they are capable of. So between 1960 and 1980 there was a flurry of research and papers, but because the technology wasn't quite there, there was no way to implement these concepts until around 1980. Between 1980 and 2000, integrated circuits began to flourish and computers became commonplace, and this technology allowed us to move artificial intelligence and machine learning from science fiction to science fact. Since then, artificial intelligence and machine learning have been part of the landscape of what we do on a day-to-day basis, whether you realize it or not. Now, with what we call deep learning, we have systems that can learn and mimic what humans can do. They can actually understand speech; systems can actually see and monitor what's happening around them. Think of cars now with sensors that can tell if something is behind them or moving around them. We now have smart apps that can take one language and translate it to another, and pattern recognition is very common in big companies looking at trends that happen in the marketplace. So these systems can work a lot faster than we can as humans. So looking at that and thinking about artificial intelligence, the goal of artificial intelligence really is to mimic human intelligence. A lot of these systems have been built to mimic the way our brain or our cognitive system is organized, with different areas serving different functions.
So it's mimicking human intelligence. It is working at the limit of human potential, doing what humans can do. So when we went from having a volume wheel on a hearing aid to one that adapts and turns up and down, that would be a way of thinking of artificial intelligence. It's doing what a human could do. Changing a program automatically, that's something a human can do; we just made it so that a system can do it. So it's mimicking what humans are already good at. Now, facial recognition and self-driving cars take that to a different level, but humans can drive cars; now we've got systems that can do that. So artificial intelligence, again, up into the 70s worked at the level of human potential. Machine learning, which is under the umbrella of artificial intelligence, is stretching that definition of artificial intelligence. It's moving beyond human potential and what humans can do. And so, because of that, we are now stretching the definition of artificial intelligence, expanding it to include tasks beyond human ability. So we want to build that into a system that allows this collaborative effort between a hearing aid and a smartphone, in real time, to make adjustments based on a patient's intention and preference with just a simple A-B comparison paradigm. Allowing them to just listen. It's not about mastering parameters and a bunch of buttons and switches. It's about listening. Does A in this condition sound better than B? It's about perception. So that is our goal in implementing machine learning into a device.

It needs data. Now, data is just the information the system gets to assess the environment that it's in. And hearing aids get a lot of data from the environments that they're, you know, soundscaping. But data looks very different, and data is now considered really the economy of our future. It used to be oil; now data is seen as the fuel of our future. And data, as you can see on the screen, can be represented in many, many ways. It can look like dots. It can be numerical. It can be shapes. It can be different vehicles. It could be a matrix. It could be a waveform, like the one at the bottom. Getting the data is critical, and capturing it is critical, to understanding the environment and looking at the trends and patterns that we can see by using these smart systems. As a matter of fact, big data in the cloud is something that big companies are using: Google and Amazon and Apple. They're using these systems to gather data, to get insights that can help them in a lot of ways to predict things. They can predict what your favorite toothpaste is from your buying habits. They can predict how you like your air conditioning when you walk into your house by having a smart thermostat. These are all learning systems that learn how you work and how you adapt. And having that information lets them serve you better and give you a better experience.

Well, it's our goal as clinicians to give our patients the best listening experience that we possibly can. And so when you look at these three types of systems, the first would be automation. Automation is something that we've had for a long time. You know, hearing aids being able to turn themselves up and down based on the environment is automation. Automation gets us about 80% of where we need to be. But for the 20% of patients who are having challenges, it may not meet their expectations, their intention, or their preference in a particular location or environment.
So automation is fantastic, but we need to take it to a different level. And this is where we bring in the artificial intelligence and machine learning attributes. By looking at artificial intelligence and getting the system to do some of those things a patient used to do, turning it up and down, we reduce some of the cognitive load that they have to carry. But we need to understand, by looking at patterns and trends and associations from a machine learning standpoint, what they like and the environments they are in, so the system can continue to learn what they like and how they like to listen. So that gives us this separation between what is learned, which would be an automation component, where we've learned and told the system what the parameters are and when to turn up and when to turn down. That is a learned condition. It will do that all the time. What we want is a learning system, so that it continues to learn after the fitting. It continues to learn what they like, how they like to listen, and the different environments they are in after the fitting. So there is a difference between a learned system and a learning system. More and more today we're moving to a learning system that allows our patients, again, to instill their preference and their intention into a listening environment.

So why now? Why is this becoming relevant for us, especially in the amplification component of what we do? Well, today's smartphones have high computing power, and with machine learning and artificial intelligence you need high computing power in order to run some of the very complicated algorithms that are needed to assess the environment. The more parameters, the more complicated the algorithm can be. So you need a smart system. Today's smartphones have the equivalent processing power of laptops. That gives computer scientists an increased ability to create algorithms for smartphones: better algorithms, smarter algorithms. We have more available data, more enhanced internet, and now that we're using distributed computing and the cloud, our capability to provide solutions for our patients in the real world increases.

And as you look at the screen, a lot of these companies have been using artificial intelligence and machine learning to find solutions. For example, Netflix. We all like to watch Netflix, and when you think about Netflix, if you start watching a movie like Star Wars and you really like it, you're excited about it, then as you're watching the movie, behind the scenes the algorithms and the computations are looking at the rating of the movie. Who are the actors? Who are the actresses? Is it science fiction or drama or comedy? It's looking at the parameters that would describe that movie, and when you finish watching the movie, it has already queued up other movies that fall in that same category or domain. So now it has queued up Return of the Jedi and The Empire Strikes Back for you to watch later. That is a machine learning, artificial intelligence application. Pandora: if you get on there and say you like Garth Brooks as an artist, it's going to pull up Luke Bryan and other artists that fall in that same genre that you may like. Google: if you happen to buy that red shirt for a special event, the next time you get on Google after the event you're going to find it saying you may like this red shirt, or this style of blue shirt, or that style of red shirt. It's learning what you like. Restaurants even do it as well.
So this is something that is happening behind the scenes, whether you realize it or not. It is now, though, helping us as a tool to give our patients a better experience and to understand their preference and intention in different environments. So, facial recognition: you see somebody you know, you recognize them. Systems now use this all the time. When you look at your smartphone, it will look at your face and open up and unlock for you. Being able to talk into your smartphone to get directions or ratings, all those things are part of this whole process of machine learning and artificial intelligence and this electronic cognition that we're trying to give these systems so they can be smarter for us.

So if that's the way we're moving, what does this mean from a learning standpoint? Because humans learn very differently than machines. So I want to take a second and give you a little bit of information about four modalities that really describe how humans learn. Generally speaking, we use one or more of these modalities. You're not strictly just one; you usually have a combination of them. To start out, let's start with visual preference. A person with a visual preference usually learns best through demonstration. They need to see it. About 60% of students are visual learners. They like it. They get more information that way. They feel more comfortable. There are people, though, who are auditory learners. They learn best through verbal instruction and by listening. In a lot of cases, when we are learning new things, we like to have both elements: seeing it, hearing it, and having it demonstrated. We like that because it gives us the ability to talk about and share that information and to learn it maybe in a way that we would not have before. The next modality would be what we call a tactile preference. Tactile learners learn best when they take notes while they're listening to a lecture or reading, multiple ways of engaging and capturing information. They like hands-on activities. You know, they're tactile. And then finally there are the kinesthetics. Kinesthetic learners generally are children. They like to be involved, and they're very active in the learning process. For instance, if you put a smartphone in a baby's hand, the first place they will put it is in their mouth. They're tasting it. They'll bang it on something. They'll hold it. They'll move it around. That is because they're kinesthetic. They take it all in. Thank God we outgrow that as adults. We don't have to put a smartphone in our mouth and taste it to know that we like the data plan we're getting. Or if you buy a pair of running shoes, you don't put them in your mouth and say, they taste good, they're going to help me at mile 15. We outgrow that component. But these are the modalities, and combinations of them, that we as humans use to learn.

Machines don't use these modalities. They use a different set of tools to help them learn. So machines learn through what we call three structures. The first would be what we call supervised learning, the second unsupervised learning, and the last reinforced learning. So let's jump back up to supervised learning. Supervised learning is where you need human interaction. There is human training, with observations and feedback, that helps train a system to learn. So there is an interaction, because the system is being trained.
Think of it as a teacher teaching a student to play the cello or the violin or the piano. They're going to show them where to put their hands and how to talk and those kinds of things. That is supervised learning. It is structured in a specific way. Unsupervised learning relies on clustering data. There is no training per se involved with that. It is a longer process. The system learns by trial and error in some instances, but in most cases it's data: looking at data and at what comes out of that data to make predictions or recommendations. That is unsupervised; there's no teacher-student relationship. And the last one, which is very common in gaming and the development of games that we have now for Xbox and PlayStation, is reinforced learning. Reinforced learning is, again, like trial and error, or a learning step over time. So, like playing the game Pac-Man, you would learn which way not to go based on what happens to you. The next time you play that game, you won't go in that same direction; you may go in a different direction. So it's a learning process, but more like trial and error. Those are the three structures in machine learning. In the technology that I'm going to show you in a second, all three of these components are present: supervised, unsupervised, and reinforced.

So let me ask you briefly to answer this question, just to talk about the complexity of how humans process and how systems can work. If I ask you to solve the equation at number five, most of you would say, oh, the answer is 24. And you'd get there by looking at the combinations right to left, or the combinations vertically. You know, they're doubling as they go. And so you come up with 24. Humans can easily do two or three columns of this kind of mathematical equation solving. Once you get beyond that, like this larger equation or test, it's very difficult. But this is where systems shine. The more areas, the more things there are for them to use, the more complex it gets; but computers can do this all day long, right? So the more iterations, the more environments, the more inputs, the smarter a system can be. That's why you need that collaboration, when you're talking about these systems working together, because of the strong computational component within these systems.

So let me walk you through a scenario of how it would work using SoundSense Learn. This is a machine learning tool that is part of a hearing aid, the Widex Evoke. Say a patient is in an environment, sitting with a couple and enjoying lunch, and the hearing aid is adjusting, for the most part, like they want, but they'd like to hear more. The hearing aid is assessing the environment. The patient can just open up the app, and it will give them two options, an A listening situation and a B, and they just have to choose which one they like. It might ask them where they are. Are they in a certain restaurant, or a certain place, or a certain environment? And what is their intention in that environment? Then it will start moving through these different iterations for them. So it's a simple process, but it's about perception, not mastering parameters. Let me give you a different look here. Within the app, there is an EQ section. It has three different regions: bass, middle, and treble. Within each one of those regions, there are 13 adjustments that can be made.
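To make the A-B comparison idea concrete, here is a minimal sketch of a preference loop over a hypothetical three-band EQ with 13 steps per band (13^3 = 2,197 settings per hearing aid, which is where the "over 2,100 combinations" figure in the next paragraph comes from). The random-proposal strategy and the simulated listener below are illustrative assumptions only; they are not the actual SoundSense Learn algorithm, which is not detailed in this webinar.

```python
import random

BANDS = ("bass", "middle", "treble")
STEPS = range(13)  # 13 adjustment steps per band -> 13**3 = 2,197 settings per aid

def random_setting() -> dict:
    """Propose one EQ setting: a step index for each band."""
    return {band: random.choice(STEPS) for band in BANDS}

def listener_prefers_a(a: dict, b: dict, target: dict) -> bool:
    """Stand-in for the wearer's judgment: the setting closer to a hidden
    'ideal' wins. In reality the wearer simply listens and taps A or B."""
    def distance(setting: dict) -> int:
        return sum(abs(setting[band] - target[band]) for band in BANDS)
    return distance(a) <= distance(b)

def ab_preference_loop(iterations: int = 12, seed: int = 0) -> dict:
    """Run repeated A-B comparisons, keeping the preferred profile each round."""
    random.seed(seed)
    target = random_setting()          # the wearer's (unknown) preferred sound
    best = random_setting()            # current profile A
    for _ in range(iterations):
        challenger = random_setting()  # candidate profile B
        if not listener_prefers_a(best, challenger, target):
            best = challenger          # the wearer liked B better; keep it
    return best

print(ab_preference_loop())
```

In practice the presenter describes a much smarter search that converges within a handful of comparisons; the sketch only shows that the wearer's job is a simple A-or-B judgment while the system explores the parameter space.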
So wearing one hearing aid, and assessing the environments with that one hearing aid, there are over 2,100 combinations that can be used to solve and fine-tune that environment. When you add the other hearing aid in a binaural situation, there are over 2 million combinations and comparisons that could take place. It would take a human sitting there forever to do this. But if you do it with a machine, and the learning can go in and quickly assess what they like and don't like based on their purposes and intentions, this can be solved very quickly, even within moments. So it allows our patients to have personalization in moments, based on what they like, simply by using an app and listening. So in the app, you would see your A and your B profiles, and at the bottom there would be a little slider. Once you choose, say, A over B, you would slide it toward that profile to show whether you like it a little bit or a lot. After that, you would hit Continue, and it will keep refining and finding a solution for you based on your preferences and what you like. It's almost like deciding whether you like red wine or white wine. Once you decide which one you like, within each of those categories there is a plethora of different tastes, different white wines and different red wines, that you can choose from. This is doing that, taking it to the level where you can refine what they like to hear. At any moment, if they get it to where they like it and they're happy, they can step out of it instantly and quickly. So it allows, again, for this personal preference that we all have.

And we did some field trials with different people in different conditions using this real-time environment assessment via the hearing aid and the app. We had a lot of patients share with us what they liked and the things that they enjoyed. These are a few of their comments. I don't have time to read through all of them, but we saw a lot of adjustments that patients were now making in environments they had challenges with, and they're very happy. In particular, the last patient you see here said the environment he was in at the pool not only made it easier for him to communicate, but was much less stressful for him, because he was able to adjust it based on his preference and intention. So again, it allows you to have these three attributes and aspects working together. Widex is the only manufacturer currently using this type of iteration to give our patients the ability to make adjustments based on their preference and their intention. With the real-time user-driven learning and the cloud-based network learning, we are the first in the industry to implement this. We call that SoundSense Learn. We also have a slower, gradual learning component called SoundSense Adapt, which takes into account how they get used to their device and make adjustments and changes; it is a more gradual adjustment for them. So we have both components working within the device.

I want to briefly show you what happens behind the scenes, because up front we want the graphical user interface to be simple and easy for our patient: they see their choices, A or B. But behind the scenes, there are a lot of calculations happening. So I want to quickly show you an iteration. Now, our goal is to give the patient their best opportunity to capture sound. You see the dots that are forming on the screen; those are based on a positive or a negative response. Yellow is what we're wanting.
We want to have the experience we're getting from the yellow color in the graph. As more components and more data come in, it allows the system to refine. And so we can identify what it is they like in that environment and what they don't like, very quickly and very easily. So again, behind the scenes, there are a lot of adjustments and changes going on based on the preference and what the patient is sharing in the app. In the first iteration of the algorithm, to get to 100% satisfaction a patient maybe had to make 16 to 20 adjustments in the app; to get to about 60% of where they wanted to be, they'd make about 10 adjustments. With the new update to the algorithm, though, it takes less time: getting to 60% only takes about five iterations of going through what they like, A versus B, and getting to 100% takes about 12. So the systems are getting smarter as we move forward, compared to what we had before.

So, just walking through this process: in that environment, if they wanted to hear something different, whether it be music, speech, whatever, they would pull open the app and go to SoundSense Learn. It would give them two profiles, A versus B. They select which one they like best, and they can make a designated adjustment: they like it a little bit more, or a lot more. It'll continue to do that over time. It could do that up to 20 times for them as they go. But if they find that they like it after they've made two or three adjustments, they can exit out and save that. It also gives them a progress wheel so they know where they are within the process. And this is a beautiful thing, because all of us now like to have measurements. We want to know how many steps we took, how many bottles of water we've had today. Those measurements help us feel like we're moving forward and making progress. So this visual, data-driven improvement allows our patients to have real-time visual feedback as they're making these changes and modifying the program for themselves. It provides instant feedback, and they can move through those comparisons very simply. A new component that we've now added is that it will ask you, where are you? Are you in a classroom? Are you in the theater? Are you at home? Are you in your car? And once you identify where you are, say you're at work, it'll then ask you what you want. What is your intention in that environment? Do you want to focus? Do you want to reduce the noise? And then, based on your intention, it will continue to make adjustments.

Implementing this, we saw about 1,600 programs created by users during our testing. And what was interesting, we saw 141 new programs developed by people at work. MarkeTrak has told us that people at work are usually satisfied with their hearing aids, but there may be opportunities to continue to improve or make adjustments that they did not have before. So it allows us, again, to continue to improve: to be intelligent today, able to give them the technology and the advancements that they want right now, and to continue to make the device smarter in the future by using SoundSense Learn to learn their behaviors and adapt, with automatic firmware updates as it learns and as new algorithms are built that can be sent directly to the hearing aids. These are all part of the new world that we as clinicians get to be a part of. And so again, I want to end with what I said before. Hearing has evolved.
And again, we are in a symbiotic collaboration between clinicians and machines, and it's now part of our industry's future. Hearing healthcare professionals will not be replaced by technology. However, hearing healthcare professionals who don't understand, use, and incorporate this new technology into their practices will be outperformed by those who do. So it is an exciting time for us. I encourage you to learn more about SoundSense Learn, the real-time machine learning tool that is now available, and what it means; it can help you turn your patients into solution bearers when they come back to see you, showing you the solution that they like and how they like to hear in that environment. So thank you for your time. Let's see if there are any questions for us.

Thank you, James. We do have some time for questions, everyone. So if you have a question for James, please enter it in the question box on your webinar dashboard. James, our first question is from Joe. He wants to know, does using SoundSense Learn impact the battery life of the hearing aid?

Thanks, Joe, for that question. Using SoundSense Learn does not impact the battery life of the system. This particular system actually has kind of a hybrid design, so it turns off features that are not needed in different environments. So it's like sipping current. And because a lot of the adjustments are made in the app, it's not impacting the usefulness and the ability of the hearing aid to provide that solution.

Great, thank you. Mary wants to know, when would you introduce SoundSense Learn to a patient? At what point?

That's a great question, Mary. This would be a tool that I would introduce as a follow-up. I would never start a conversation with SoundSense Learn and jump right into that component. I would get them programmed first, get them set up, get them comfortable with getting it in and out of their ear and all the things that we do on a regular basis. And then later on, as challenges occur, show them how easy it is to use this tool that is already on their phone, a simple A-B comparison, and just walk them through that.

OK, thank you. Stephen says it sounds like SoundSense is an iPhone and Android app, but he wants to know, what about people who might not have a smartphone? What if they have a tablet or a PC that is Bluetooth-enabled?

It will work with an Apple tablet, like, you know, an iPad or that kind of thing. And so it will work with those devices. And for older patients, it's actually nice that they have that bigger viewing surface that they can, you know, make the adjustments on.

OK, thank you. So Kate wants to know, do you feel it is difficult to explain to a patient how to use the app and, you know, how to use the whole process to find the best result for them? Is it difficult to explain to a patient?

It's not difficult. But, you know, I would always encourage, and this is why, as we think through what these opportunities of technology offer us, that we develop a talk track. When do we introduce it? When do we talk about it? You know, it wouldn't be something that I would immediately throw out to a new user, maybe somebody that never uses a smartphone; this may not be the time to talk about that. But for somebody who is in different dynamic environments, who likes the thought of being able to make those adjustments themselves, it would be a great opportunity for them to have it.
For us, it's just a matter of practicing what that talk track would be. And it's simple to walk them through the A-B comparison and the way they can exit out and save it as a program. So it's a very simple program to use. But, again, I would introduce it after they're comfortable with their hearing aid and they've got all their fittings and you've fine-tuned it and you've tweaked it so that they're happy.

Thank you, James. Wayne wants to know if SoundSense Learn works with both BTE and ITE hearing aids.

That's a great question, Wayne. Currently, in its present form, it is available in a BTE; we call it the Fusion 2. So just BTEs right now. And, you know, as technology advances, we'll see what happens on the ITE side. But right now, it's just available in a BTE.

Thank you, James. Susan is asking, where do you see AI and machine learning going in the next five years with regard to hearing aids?

Oh, my gosh, there are so many areas. You know, as we get more data and learn what people want, we're actually seeing implementations of things like fall detection and other features that are coming with machine learning. It's going to allow us, and our patients particularly, to see and have more interaction with the world, and even bridge that gap that we've missed before when trying to communicate. It will allow us to intersect and do more things than we ever thought possible. If I tried to pick out exactly where we would go, I would probably be wrong, but the technology is so advanced that as we get smaller and faster and more capable, you know, the sky's the limit. We're unlimited in what we could produce with these kinds of technologies. And so it's an exciting time. And the thing is, technology is going to grow. As we learn what we learn now, we're going to continue to learn and build off of that moving forward. So it's a good time to really get in and learn what this technology is and how it will impact your practice, so that you are proactive and not reactive in implementing this technology in your practice.

Awesome, James. Thank you. We've got a couple of comments from viewer Robert, so I'm going to sort of paraphrase. He was asking, and sort of stating, that he believes technology will eventually eliminate all interactions between the professional and the patient. So what do you think of that, James? Where do you see the human part of what hearing healthcare professionals do? Where do you see that going in the future?

Well, this technology is never meant or intended to replace what we do. There has to be a human connection here. So, you know, the fitting, the programming, that is all still done by us. This tool is about giving us the opportunity to be visible and part of a solution when we're not with the patient. You know, patients, when they're wearing their hearing aids, have them and use them in moments that are important to them: at weddings, at the birth of children, at different occasions. And when those occasions come and go and they miss them, they've missed an opportunity that they can't get back. So this is never meant to replace what we do. It's meant to accentuate and collaborate with us. Just like, for example, if you think about the GPS in your car; you know, some people have it on their phone, they have Waze or different devices that they use for GPS. When you turn on that device with the location of where you want to go and you hit go or start, that device does not suddenly drive for you.
You still have to drive the car, push the brake, push the gas to get there. It assists you. It tells you, at this next road, make a left. Or it may have the insight to pull up other data and say, oh, there's an accident on that road you normally take; take this one instead so that you bypass it. That is what this is doing. It's a collaborative effort. It's never meant to replace. And I know somebody will say, oh, but there are self-driving cars. Yes, there are self-driving cars. This is meant to be a collaborative effort with what we do, to make things easier.

Well, thank you, James, for an excellent presentation. And I want to thank everyone for joining us today on the IHS webinar, AI and Machine Learning: Intelligent Today, Smarter Tomorrow, sponsored by Widex. If you'd like to get in contact with James, you may email him at jamr at Widex.com. For more information about receiving a continuing education credit for this webinar through IHS, please visit the IHS website at IHSinfo.org. Click on the webinar banner or find more information on the webinar tab in the navigation menu. IHS members receive a substantial discount on CE credits, so if you're not already an IHS member, you will find more information at IHSinfo.org. Please keep an eye out for the feedback survey you'll receive tomorrow via email. We ask that you take just a moment to answer a few questions about the quality of today's presentation. Thank you again for being with us today, and we will see you at the next IHS webinar.
Video Summary
Widex is sponsoring a webinar on AI and machine learning in hearing aid technology. The presenter, James Martin, explains how artificial intelligence and machine learning can improve the hearing aid experience for individuals. He describes the different ways in which humans learn and how machines learn through supervised, unsupervised, and reinforced learning. Martin introduces SoundSense Learn, a machine learning tool in the Widex Evoke hearing aid that allows users to personalize their listening experience. The tool uses real-time user-driven learning and cloud-based network learning to optimize hearing aid settings based on user preferences and intentions. Martin also discusses the future of AI and machine learning in hearing aids and emphasizes that technology will not replace hearing healthcare professionals, but rather enhance their ability to provide better solutions for patients. He encourages clinicians to embrace and incorporate this technology into their practice to stay ahead of the industry.
Keywords
Widex
webinar
AI
machine learning
hearing aid technology
SoundSense Learn
personalization
user-driven learning
cloud-based network learning