Advanced Wireless Processing for Enhanced Binaural Hearing Recording
Video Transcription
Welcome, everyone, to the webinar, Advanced Wireless Processing for Enhanced Binaural Hearing. We're so glad that you could be here today to learn more about wireless technology for hearing healthcare professionals. Your moderators for today are me, Ted Anup, Senior Marketing Specialist. And me, Keri Peterson, Member Services Supervisor. Our expert presenter today is Leanne Powers. Leanne is an Education Specialist who trains customers as well as Sivantos staff on Siemens products, software, and services. In this role, Leanne has given several lectures at AAA and various state conventions on a variety of topics, such as compression in modern hearing instruments, frequency compression, and tinnitus therapy, as well as presentations specific to Siemens products and technologies. Leanne practiced in a variety of hearing healthcare settings prior to joining the Sivantos team. Most recently, she operated two hearing aid offices in the Chicagoland area. We're very excited to have Leanne as our presenter today, but before we get started, we have just a few housekeeping items. Please note that we're recording today's presentation so that we may offer it on demand through the IHS website in the future. This webinar is available for one continuing education credit through the International Hearing Society. We've uploaded the CE quiz to the handout section of the webinar dashboard, and you may download it at any time. You can also find out more about receiving continuing education credit at our website, IHSinfo.org. Click on the webinar banner on the homepage or choose Webinars from the navigation menu. You will find the CE quiz along with information on how to submit your quiz to IHS for credit. If you'd like a copy of the slideshow from today's presentation, you can download it from the handout section of the webinar dashboard, or you can access it from the webinar page on the IHS website. Feel free to download the slides now. 
Tomorrow, you will receive an email with a link to a survey on this webinar. It is brief, and your feedback will help us create valuable content for you moving forward. Today we'll be covering the following: an overview of wireless technology, an introduction to ear-to-ear wireless 3.0 and binaural processing, research benefits, and application of features. At the end, we'll move on to a Q&A session. You can send a question for Leanne at any time by entering it in the question box on your webinar dashboard, usually located to the right or top of your webinar screen. We'll take as many questions as we can in the time we have available. Now I'm going to turn it over to Leanne, and she'll guide you through today's presentation. Take it away, Leanne. Thank you, Ted and Keri. I'm so excited to be able to talk to you today about wireless technology. We want to start by looking at the fact that wireless technology has been available for more than 10 years now, starting in 2004, when Siemens was the first to link the two hearing aids together utilizing E2E wireless technology, so ear-to-ear communication. At this point in time, all manufacturers have some form of ear-to-ear communication, and what we hope is that through this presentation, you will gain a better understanding of the different types of ear-to-ear communication that are available, and therefore be able to have that great conversation with your manufacturer of choice as to what type of ear-to-ear communication they have and how those features will benefit your patients. So looking at the timeline for Siemens, you can see that in 2004, that's the beginning of ear-to-ear communication. In 2008, we had the Bluetooth revolution that we're going to talk about, and then in 2014, we had the binaural exchange of auditory information. So ear-to-ear communication: what that means is that your hearing aids can actually create a personal area network, oftentimes referred to as a PAN. 
We do this at Siemens utilizing near-field magnetic induction. This transmission allows for the ear-to-ear communication and allows you to stream at the same time. It has a lower battery consumption than radio frequency transmission, which is why at Siemens this is the route we chose to use. Other manufacturers may use radio frequency transmission rather than near-field magnetic induction. With near-field magnetic induction, there is essentially no effect on hearing instrument battery consumption. So what does it mean to create that personal area network? Well, your two hearing instruments work together as one hearing system. This couples signal processing and ensures that volume level, listening programs, and the microphone modes are always in sync. It simplifies the wearer's operation, so one touch controls both hearing aids. With E2E coming out in 2004, researchers took a look at how this technology was going to benefit patients. And so the first study I'd like to show you is from Hornsby and Ricketts, a Vanderbilt study that was done in 2005. What they were looking at here was when the hearing aid microphone modes were in sync, and they found that the HINT scores improved by 1.5 dB, which equates to nearly a 20% improvement in sentence recognition when both side microphones, so your left and right hearing instrument, were both in directional mode together or omnidirectional mode together, depending on the needs of the environment. Other studies that followed include a study by Keidser in 2006, which found that right-left localization error was largest when an omnidirectional mode was used for one hearing instrument and a directional mode for the other. So again, finding that localization was better when the hearing instrument microphone modes were kept in sync. 
Hornsby and Mueller in 2008 found that, after fitting patients to prescriptive targets, some wearers were ending up with a very large mismatch in volume if they had volume controls available on their hearing instruments. So this means if your patient was not hearing well and they turned one hearing aid up, they would often make mistakes, turning one up more than the other. So again, more support for having that one-touch communication where your patient only needs to adjust one volume control and both hearing instruments will go up at the same rate. Smith et al. in 2008 further looked at user preference. Two-thirds of wearers preferred linked instruments, and a highly significant positive correlation for linked instruments was associated with better sound quality. Another study was performed by Picou and Ricketts in 2010, looking at the wireless transmission between hearing aids when utilizing them with the phone. What they found was that when you were utilizing binaural transmission of phone conversations, the speech intelligibility improved significantly. So the graph on the right-hand side is showing you the traditional use of holding a phone just up to the microphone of the hearing aid on one side, compared to holding the phone to the ear on one side utilizing a T-coil within that device, compared in the third bar to utilizing E2E wireless, so wirelessly streaming the phone to one hearing aid, and then finally wirelessly streaming the phone to two hearing aids, which was found to be ideal and give you the best speech intelligibility. So the benefits of ear-to-ear communication at this stage were the synchronization of volume and program controls, which reduced patients' effort by 50%. Splitting the controls also allowed for more discreet products. You didn't need to try to fit a push button and a volume control on a custom product faceplate on both hearing instruments. 
You could actually split those controls, and that helped when patients had dexterity issues and a hard time telling which button or switch they were touching on the hearing instrument. It also helped to improve localization due to synchronization of the directionality, with localization errors being reduced by 50%, and improved speech intelligibility with matched microphones. The binaural advantage for phone calls was a 7.5 dB signal-to-noise ratio improvement over unilateral streaming of phone calls. So that's what we had through 2004, and then when we look at what happened in 2008, the phone conversation is a great lead-in to what we at Siemens call E2E Wireless 2.0. This is the ability to stream external sources through either a transmitter or directly to the hearing instruments. It allows for wireless compatibility with Bluetooth-enabled devices. Wireless streaming with the E2E communication of hearing aids and Bluetooth together can connect us to video, audio, and phone conversations. It also allows for wireless programming. This was, again, a revolution on the side of the fitter because it made it easier for us to connect the hearing aids to our fitting software. At Siemens, our wireless programmer is called the ConnexxLink, but most manufacturers today have a wireless programming option that you can use. So let's look at some basics of Bluetooth. Bluetooth is a wireless communication platform for electronic devices to share information like music, voice, and video over a secure, globally unlicensed, short-range radio frequency. Bluetooth is secure because it utilizes frequency hopping, changing frequencies 1,600 times per second. The hop sequence is pseudo-random and known only to the transmitting device and the receiver, hence the need to pair the devices to each other. This pairing helps ensure a secure and interference-free connection for wireless transmission. So let's look at the history of Bluetooth. 
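As a rough illustration of the frequency hopping just described, here is a toy sketch in Python. One hedge up front: real Bluetooth derives its hop sequence from the master device's clock and address, not from a seeded general-purpose random generator; the shared seed below only stands in for the information the two devices exchange when they pair.

```python
import random

# Toy model of Bluetooth frequency hopping (illustrative only).
BT_CHANNELS = 79          # classic Bluetooth channels in the 2.4 GHz ISM band
HOPS_PER_SECOND = 1600    # hop rate quoted in the presentation

def hop_sequence(shared_secret, n_hops):
    """Both paired devices seed the same generator, so they hop in lockstep."""
    rng = random.Random(shared_secret)
    return [rng.randrange(BT_CHANNELS) for _ in range(n_hops)]

# Transmitter and receiver learned the shared secret when they were paired:
tx = hop_sequence(shared_secret=0xC0FFEE, n_hops=HOPS_PER_SECOND)
rx = hop_sequence(shared_secret=0xC0FFEE, n_hops=HOPS_PER_SECOND)
assert tx == rx  # identical sequences: they meet on the same channel every hop

# A device that never paired cannot predict where the pair will hop next:
eavesdropper = hop_sequence(shared_secret=0xBAD5EED, n_hops=HOPS_PER_SECOND)
assert eavesdropper != tx
```

This is why pairing matters: it gives both ends what they need to follow the same 1,600-hops-per-second sequence, while anyone else just sees noise scattered across the band.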
Bluetooth first made its emergence on the technology scene in 1994, and it was developed by a group of engineers from Ericsson. It wasn't long before they realized that Bluetooth was going to revolutionize a lot of technology industries, and so a special interest group was created in 1998. This special interest group is devoted to maintaining this technology and further advancing it for the use of all technology users. Bluetooth was originally intended to be a wireless replacement for the cables and wires used to connect devices. So if you look around your desk, you may already have a wireless keyboard, a wireless mouse, or a wireless phone. You have all sorts of wireless devices that are made possible by Bluetooth transmission. But Bluetooth can do even more, because it can allow connectivity between additional devices: TVs, music players, and even home healthcare devices. Bluetooth advances just as our E2E wireless communication has advanced. So when we talk about our E2E wireless communication, you'll hear me talk about E2E wireless, E2E wireless 2.0, and then E2E wireless 3.0. In Bluetooth, we have the same thing happening. When Bluetooth was first utilized by Siemens in our miniTek, it was on Bluetooth version 2.1. A couple of years later, when we introduced our new streamer, the easyTek, the Bluetooth version was actually 3.0. And this year, in 2015, Bluetooth 4.0 arrived in our products. So the versions of Bluetooth will continue to evolve along with the technology on our side, and this can make differences in how we connect and use Bluetooth. Not only do we have different versions of Bluetooth being created, but there are also many, many profiles that are available for Bluetooth devices. If you do a quick Wikipedia search on Bluetooth profiles, you will find a list of more than 20 different profiles that are available. 
From the hearing instrument industry, we are concerned primarily with two different Bluetooth profiles. The first is HSP and HFP, the Headset Profile and Hands-Free Profile. Those are used primarily for the phone. The second profile that we utilize within our devices is the A2DP profile, and that's primarily for media audio transmission. So if you look at the way Siemens utilizes these two profiles, our Bluetooth phone program, which you can initiate in the software to give somebody a unique frequency response when they're streaming a phone conversation, uses the HSP/HFP profile. Our audio streaming, though, which is utilized by our transmitter for the TV and utilized by many newer smartphones for transmission of audio, such as streaming Pandora, streaming YouTube, and even streaming voicemail, uses the A2DP profile. This corresponds for Siemens in the software to our streaming audio program, thus giving you a unique frequency response that you can use if you're streaming music or TV and you want to create a different listening pattern for that streaming ability. So let's talk for just a minute about how we pair devices for Bluetooth transmission. Because again, we need the two devices, the transmitter and the receiver, to be introduced to one another so that they can enable the Bluetooth transmission. So pairing is introducing them, and I like to think of this in simple terms if I'm trying to describe it to a patient: if you wanted to be paired to me, if you wanted to be able to contact me, I'd have to give you my phone number. Without my phone number, you have no way to connect to me. So pairing is the act of my giving you my phone number, and at any point in time, you can then connect with me, which is the next stage, by actually calling me. So pairing is just the introduction of the two devices. Connection is after pairing, when the two devices actively participate with one another. 
Streaming, then, is when you're sending information from one device to the other. So you can be connected but not actively streaming, and you can be paired but not actively connected. When you're actively streaming, the standard Bluetooth protocol calls for the transmitter and the receiver to be within about 10 meters, or 33 feet, of each other. So if you have your cell phone hooked up to a streamer, and that streamer then connects to your hearing instruments, you can get about 30 feet from your phone and still talk on the phone, answer the phone, and have that conversation. When you go outside of that range, your transmission will start to become interrupted, and if you go too far outside of that range, you will lose the connection. Although you may be paired and/or connected to more than one device, you can only actively stream from one device at a time. Multiple devices can be paired to one streamer, but again, you can only stream from one device at a time, and usually the streamer will have some sort of priority scheme where certain stream signals can interrupt others. So the streamer is able to interrupt or pause audio from another device. In the case of the setup for Siemens hearing instruments, our HSP/HFP profile takes precedence over the A2DP audio profile. What that means to your patient is that if they're watching TV and they get a phone call, the phone will actually interrupt the TV watching, and the hearing aids will ring in their ears. The patient then has the option to answer the phone or bypass that phone call. If they answer the phone, then when they hang up, they will go back to their original streaming of the television. You can also find one device having priority over another as set up by the phone itself, and so oftentimes the smartphones of today are set up so that if you have a Bluetooth-enabled car, the car will take precedence over any other streamer. 
So it's not uncommon, when somebody gets into the car, that the car automatically takes precedence and takes over as the connected device, bumping you from your other hands-free streamer. This is something that can sometimes be adjusted in the phone itself, so that your patient can decide whether they'd rather have the Bluetooth car or their streaming direct to their hearing aids. Because remember, you can only have one device streaming at a time, so you can't have the car and the hearing aids streaming simultaneously. The hearing aids connect to the phone through the streamer, and the car connects to the phone, but the car doesn't connect directly to the hearing instruments. So the practical benefit of utilizing Bluetooth in hearing aids is that your hearing aids can become a wireless headset to stream movies, podcasts, and even instructions from Google Maps. The streamed signal can be amplified and shaped to match the wearer's amplification needs. The volume of the streamed signal may be controlled by the streamer or by the hearing aids. And stereo signals are streamed in stereo; we always hear better when we get things in both ears at the same time. So let's take a look at what this means as far as connecting to different devices. Again, there are different types of connection available for different devices, depending on which manufacturer you fit. And so what we're going to look at now is, if the manufacturer that you choose to fit has a streaming device that communicates with the hearing aids, some options that that streaming device may include. There's usually a direct line-in option, which means if you have a device that maybe is not Bluetooth-enabled, you can use a standard audio cord and plug directly into the streamer. The streamer then uses the near-field magnetic induction to stream that signal directly to the hearing instruments. This can extend the streaming time, because direct line-in does not pull as much battery consumption. 
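The paired/connected/streaming distinction and the priority behavior described above can be sketched as a small state machine. To be clear about what's assumed: HSP, HFP, and A2DP are real Bluetooth profiles, but the class, the numeric priorities, and the messages below are made up for illustration and are not any manufacturer's implementation.

```python
# Hypothetical sketch: one active stream at a time, with phone profiles
# (HSP/HFP) preempting media audio (A2DP), as in the TV-interrupted-by-a-call
# example above.
PRIORITY = {"HSP": 2, "HFP": 2, "A2DP": 1}

class Streamer:
    def __init__(self):
        self.paired = set()   # devices introduced at pairing time
        self.active = None    # (device, profile) currently streaming, or None

    def pair(self, device):
        self.paired.add(device)

    def request_stream(self, device, profile):
        """Grant the stream if nothing is active or the new profile outranks it."""
        if device not in self.paired:
            return "rejected: not paired"
        if self.active is None or PRIORITY[profile] > PRIORITY[self.active[1]]:
            self.active = (device, profile)
            return f"streaming {profile} from {device}"
        return "busy: lower priority"

s = Streamer()
s.pair("TV transmitter")
s.pair("phone")
s.request_stream("TV transmitter", "A2DP")  # patient is watching TV
print(s.request_stream("phone", "HFP"))     # incoming call preempts the TV audio
```

Note the two separate checks: pairing is a precondition (the "phone number" exchange), and priority decides which single stream is active at any moment.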
There are TV transmitters. These transmit the audio signal in stereo to both hearing instruments. There are some manufacturers that go direct from the TV to the hearing aids and some that go from the TV to a streaming device and then to the hearing aids. Either way, it allows for comfortable listening for the wearer and their companions, because you can have individual volume settings for the hearing-impaired wearer as opposed to the significant other, and it overcomes the issues of disturbance from ambient noise and reverberation. Now, with advances in Bluetooth, we see the emergence of smart TVs that actually have their own Bluetooth transmission. Remember that some of the hearing aid manufacturers offer TV transmitters which speed up the Bluetooth transmission to avoid lip sync errors for the wearer. With the advent of smart TVs, you could actually pair a streamer directly to a TV, but we can't guarantee what level of Bluetooth that TV is using. So if that TV is using an older Bluetooth version, it may have lip sync errors, and a transmitter may be required. Again, those transmitters can be obtained from your manufacturer of choice for the hearing aids. Third-party Bluetooth devices allow for streaming from other devices that utilize Bluetooth transmission, such as computers and tablets. After a third-party device is paired and the Bluetooth function is enabled, the third-party device will be able to find the streamer, and streaming signals directly to the hearing instruments is also sometimes possible. You can watch a movie, Skype, or visit with a friend utilizing your hearing instruments as a wireless headset. There's also the ability to use remote mics. Remote microphones are intended to improve speech understanding in background noise by taking advantage of the spatial separation between the signal of interest and the competing noise. 
This is really helpful when patients have poor word understanding and they need a significantly improved signal-to-noise ratio. So it's a companion mic: the companion that they're with can wear the mic and be streamed directly into the hearing instruments, sometimes utilizing a streamer, sometimes direct into the hearing instruments, again depending on which manufacturer you fit. Additional benefits come with FM systems. So remember, Bluetooth is one-to-one. FM, or frequency modulation, is one-to-many. FM systems are commonly used in schools, where you have one teacher talking to possibly many students wearing FM devices. So that's the difference between the one-to-many of FM and the one-to-one of Bluetooth. However, utilizing streamers with your hearing instruments opens up the possibility that you can plug an FM receiver directly into a streamer and therefore stream even to CICs that wouldn't otherwise be able to accept the receiver for the FM device. This also can give you extended stream time for FM of nine hours. Speaking of extended stream time, it's important to note that all of these advances in technology do have some effect on battery life. And within hearing instruments, because our batteries are so small and our current needs are so high, battery life is very important to us. We also know that battery life is extremely important to our patients, because of countless MarkeTrak surveys in which patients discuss their unhappiness with the battery life they currently receive. So what you're looking at here is that different manufacturers use different ways of streaming. Some manufacturers have microphone-only modes and then streaming modes. And a couple of manufacturers now have what we're going to talk about next, which is binaural wireless transmission, which Siemens refers to as E2E 3.0. And so what you can see is that depending on what manufacturer you use and what mode of streaming they use, their current consumption is going to vary. 
We're very proud at Siemens that we've been able to offer both your standard microphone mode, your streaming and microphone mode together, as well as the binaural wireless without much of an increase in current consumption. And this is important to us at Siemens because we offer rechargeability. So we need to be able to still give your patients a full day of life on rechargeable hearing aid batteries, even if they decide to take advantage of the great world of streaming. So let's talk about that next stage of wireless, which is ear-to-ear wireless 3.0. This is the exchange of auditory information at the microphone level. If you look at the picture on your screen, it's depicting a patient wearing two RIC instruments that each have two microphones on each side. And we use directional microphones like that so that our hearing aids can use the information of the timing and intensity delays of signals reaching those microphones so we can establish a hypercardioid pattern or an anti-cardioid pattern. In other words, a frontal focus or a rear focus. With audio exchange between the microphones, my left hearing instrument now gets input from all four mics, the two on the left and the two on the right. My right hearing instrument gets input from all four mics, the two on its side and the two on the left. This creates what we at Siemens call a virtual eight-microphone network. The transmission of data occurs seamlessly between the two hearing instruments and the two hearing instruments make decisions on how to function together. We also have an option for hearing aids that only have one microphone. If you only have one microphone on each hearing instrument, such as in a CIC or IIC, then you can utilize a virtual four-microphone network. 
You still have an exchange of auditory information which analyzes the timing and intensity delays in order to establish where a frontal pattern is so that we can utilize the filtering systems in our hearing aids to improve the patient's understanding in noisy environments. At Siemens, we call this high-definition sound resolution. Some key points and definitions in what we're going to talk about next. Binaural beamforming refers to the transmitting of audio signals from one hearing instrument to another. So if you are interested in finding out if your manufacturer of choice utilizes auditory exchange, you can ask them about binaural beamforming. Narrow directionality is the name of an enhanced binaural beamforming algorithm that we use in Siemens instruments. And of course, spatial speech focus is an algorithm that we use in Siemens hearing instruments, which is a self-steering binaural beamforming algorithm designed for situations where the targeted speech originates from the side or behind the wearer. So let's take a look first at the binaural beamforming utilized in Siemens, which is narrow directionality. Narrow directionality is designed to enhance the signal coming from the target speaker located among multiple other competing or interfering speakers around the listener. If you look at the graphic on the left-hand side, this is a situation where most of our hearing instrument wearers have difficulty when they're surrounded by other speech. It improves the speech signal from the target speaker in two ways, by quickly reducing the competing speech signal outside the beam's angular range, and by boosting the level of the target speaker within the beam. Let's take a closer look at how this works. Narrow directionality is built on top of our existing monaural directional microphone system within Siemens instruments. 
So shown here is a simplified block diagram of Binax narrow directionality, where it's composed of a monaural processing stage followed by a binaural processing stage, which takes the inputs from the local signal and the contralateral signal from the other side. If we look even closer at just one side, so again, this is that block diagram going further into just the left side, taking a closer look at the signal, you can see that there are three essential components. We have the binaural beamformer, we have the binaural noise reduction taking place, and we have head movement compensation. This is so that when you're in an active conversation with someone, the speaker and the listener often move around a little bit, and we want to make sure that the target speech can't accidentally drop outside of the enhanced range. If we take a look at what binaural beamforming is able to accomplish, if you look at the screen on the right, you will see three figures. The first figure is showing you the characteristics of a monaural directional pattern. So the hearing aid wearer is wearing the orange cap, the target speech that they want to listen to is the person with the green cap, and then there's some additional speech coming from the side from the person shown in the red cap. When we utilize binaural beamforming, we can tighten that beam so that the interfering speech is actually outside of the area of focus. But if you look at the third graphic, you can also see that we not only narrow the area of focus, but we actually enhance the speech that's coming from within the area of focus. So narrow directionality gives the hearing-impaired wearer the perception that he's focusing on the person he is directly looking at, like a magnifying glass. It should be noted that the adaptation of binaural noise reduction gain is fast enough, within milliseconds, to rapidly amplify or attenuate based on the acoustic situation, and of course, without any background noise increase. 
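As a generic illustration of how an array of microphones can be steered using arrival-time differences, here is a plain delay-and-sum beamformer in Python. This is a textbook sketch, not the Binax algorithm: the microphone positions, sample rate, and steering math are assumptions for the example, and the real binaural systems described above work adaptively and per frequency channel.

```python
import math

# Minimal delay-and-sum beamformer sketch (illustrative, not a product algorithm).
SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_positions, angle_deg):
    """Relative arrival times (seconds) at each mic for a far-field source
    at angle_deg, where 0 degrees means a source along the +x axis."""
    theta = math.radians(angle_deg)
    ux, uy = math.cos(theta), math.sin(theta)
    return [(x * ux + y * uy) / SPEED_OF_SOUND for x, y in mic_positions]

def delay_and_sum(signals, delays, fs):
    """Time-align each mic signal for the steered direction, then average.
    Sound from the steered direction adds coherently; off-axis sound
    partially cancels, which narrows the pickup beam."""
    n = min(len(s) for s in signals)
    out = []
    for i in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            j = i - int(round(d * fs))
            acc += sig[j] if 0 <= j < n else 0.0
        out.append(acc / len(signals))
    return out
```

With ear-to-ear audio exchange, each instrument can treat its own two microphones plus the two contralateral ones as a single four-input array like this one, which is the intuition behind the "virtual eight-microphone network" described above.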
So it's a very complex algorithm that we use to determine when we're going to utilize narrow directionality within the hearing instruments. For quiet, of course, no directionality is used, and in most hearing instruments, regardless of the manufacturer, in quiet situations you tend to be in an omnidirectional mode. For lower levels of noise, we'll utilize monaural directionality, and that happens automatically in all of the instruments that are offered by Siemens, regardless of technology range. As the noise level increases, narrow directionality can engage in hearing instruments that have this capability. In other words, for Siemens instruments, it's our three, five, and seven products on the Binax platform, and its effects can be increased accordingly until it reaches full directionality for high noise levels. Now, there is classification that we utilize at Siemens that takes place in the environment. So remember that we like to classify the environment your patient is experiencing in order to know how to drive the way the hearing aid reacts. And so classification of the acoustic environment is going to be initiated. If you're listening to very loud music, the music classification is going to take precedence. If there is speech when there's music, the speech and noise classification is going to take precedence, and therefore you may utilize narrow directionality if the music is loud enough. And further, in Siemens instruments, the signal-to-noise ratio of each frequency is considered, and this really helps us to preserve spatial cues and spatial awareness for the patient, even when they're in that binaural beamforming algorithm. What you see here is an example of frequency-dependent activation of binaural narrow directionality. So the directionality is fully enacted in the lower frequencies. The lower frequencies in this case must have a worse signal-to-noise ratio, so we have a much fuller directional pattern. 
In the mid frequencies, we have some directionality happening, and in the very high frequencies, we're in a monaural directional mode, so not in narrow directionality at all. And this really helps our patients achieve the best hearing in noisy environments when speech is the noise, while still maintaining enough awareness that if someone were to call their name from the other side of the room or from behind them, even when they're focusing on the target speech, they're still aware of what's happening around them, and they're able to separate that from the target speech. We've done some research in this area on the improvement this provides for your patient in those noisy environments. So we did two clinical studies with narrow directionality. We looked at comparing patients with hearing loss wearing Binax hearing aids to patients with normal hearing. You can see here that we had two groups: the normal hearing group with a mean age of 58.1 years, and the hearing-impaired group, mild to moderate sloping hearing losses, with a mean age of 65.8 years. In the U.S. study, which was done at the University of Northern Colorado, they utilized HINT sentences, and the competing signal was also HINT sentences plus speech babble at a level of 72 dB SPL. The instruments used were Pure 7 Binax, which is a RIC device, with double domes, keeping in mind that utilizing a closed fitting is always going to yield better directionality, but that doesn't mean that your patient will not experience a good directionality improvement even in an open fit. For research purposes, we wanted to get the maximum improvement possible, so we utilized double domes. The first fit algorithm was used for Binax, and the default parameters were kept within the hearing instruments, so we didn't maximize any features any more than what you would do if you just first fit the instruments for your patient. 
What we found in the two clinical studies was that Binax was able to provide better than normal hearing in demanding environments. What that means is that our patients with the hearing loss, wearing the hearing aids, had a 2.9 dB improvement in speech reception threshold over their normal hearing counterparts. Now, there was a slight difference in the outcome between the German study, which was done in Oldenburg, and the University of Northern Colorado study, and that is attributed to the differences in the testing types. The OLSA was used, with German words obviously, for the German study, and the HINT was used for the University of Northern Colorado study. Both of the tests are scored slightly differently. But in looking at both clinical studies together, we have up to a 25% improvement in speech understanding in noisy environments. It's important to remember that for the manufacturers that utilize a binaural beamforming algorithm, it can be active in the universal program; at least it is with the Siemens instruments. Again, we promote that spatial awareness, and we have really good energy efficiency within the Siemens instruments, so that we can still provide you with rechargeability as an option. The value of narrow directionality for your patients is that it's so effective that clinical studies have shown hearing-impaired listeners using the technology can hear speech better than normal hearing individuals in demanding environments. Now, binaural beamforming can be utilized in more than just the forward-facing direction, and there are some times when your patient needs to focus on speech that may not be coming from in front of them. For that, Siemens' answer is spatial speech focus. This utilizes advanced beamforming in 360 degrees. You can now have true directionality in all four directions while preserving your spatial cues. 
It's very useful if the wearer is in a situation where they can't turn their head but need to hear speech from another area, and we can automatically adapt this focus based on the speech of greatest intensity. Both interaural time differences and interaural level differences are utilized to create the beamforming algorithm, just as the front and back microphones do in a standard directional set of hearing aids. But because we have that exchange of information, we can now steer directionality to the right and to the left. We utilize a Wiener-filter-based approach to suppress signals coming from the undesired area, and the attenuation in the non-focused area can be as much as 10 dB. Let's take a look at what we're talking about as far as utilizing directional patterns in different channels. One of the things the newer hearing aids on the market have now is that they operate in many more channels than ever before. For example, in Siemens instruments, we have 48 channels in our seven-level products, which is our premium product; 32 channels in our five, which is our advanced product; and 24 channels in our three, which is our standard product for the Binax platform. And what you're seeing in figure eight is that we can actually use a different polar plot for each of those channels: the 500 Hz polar plot in red compared to the 2,000 Hz polar plot, at the same time in the same hearing instruments, in green. Further, you can see in figure nine the interaural level difference at 2,000 Hz for omnidirectional mode compared to a left-focus mode. What you'll notice is that in the left-focus mode, the interaural level difference is maintained for the right ear, although it's attenuated. So the spatial cues are there, but the attenuation is there so that you can focus more easily on the signal coming from the left side.
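For readers curious what a Wiener-filter-based gain rule looks like in principle, here is a minimal sketch of the textbook per-channel form, with the attenuation floored at the 10 dB mentioned above. This is purely illustrative, under simplified assumptions, and is not Siemens' actual implementation.

```python
import math

# Textbook Wiener gain applied per channel: G = SNR / (SNR + 1),
# with the attenuation limited to 10 dB as described above.
MAX_ATTEN_DB = 10.0
GAIN_FLOOR = 10 ** (-MAX_ATTEN_DB / 20)  # linear gain for -10 dB

def wiener_gain(snr_linear: float) -> float:
    """Wiener gain for one channel, limited to at most 10 dB attenuation."""
    g = snr_linear / (snr_linear + 1.0)
    return max(g, GAIN_FLOOR)

# High-SNR channels are left nearly untouched; low-SNR channels are
# attenuated, but never by more than 10 dB.
for snr_db in (20, 0, -20):
    snr = 10 ** (snr_db / 10)
    atten_db = -20 * math.log10(wiener_gain(snr))
    print(f"SNR {snr_db:>4} dB -> attenuation {atten_db:.1f} dB")
```

Because the gain is computed per channel, channels dominated by the target speech pass almost unchanged while channels dominated by the unwanted direction are turned down, which is what lets the spatial cues in the unattenuated channels survive.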
We have done independent research again on the advanced beamforming in 360 degrees at the University of Iowa, and that study is available for you; it was published this year in the Hearing Review. In this research, the participants were surrounded by background noise, and the target speech signal, the Connected Sentence Test, was presented from either 90 degrees or 180 degrees azimuth relative to the direction the patient was facing. The testing was conducted with the spatial speech focus algorithm on, so the hearing instruments were looking for the speech of greatest intensity, versus having it off. The findings revealed a significant benefit with the algorithm on for the target speech in both locations, with an average improvement of up to 22%. So spatial speech focus is understanding from all directions. For Siemens, because we operate on a classification basis, we can classify a moving car or automobile as a separate environment, and therefore we can seamlessly switch the instruments from the narrow directionality function, where the hearing aid adjusts the span of its beam itself, to a spatial speech focus function, where the hearing instrument looks for the speech of greatest intensity. This will happen automatically when they get in and out of a car. For patients that need to utilize this type of algorithm outside of a car, you can give them an independent program called Stroll. That means the hearing aid will always be searching for the speech of greatest intensity and change the focus, at least in the seven level, to front or back, hypercardioid or anticardioid, and true right and left directional. We also have situation-based user control available, and I know that other manufacturers with similar products have situation-based user control as well. So let's take a look at what that looks like for a patient.
So at Siemens, our situation-based user control can be done utilizing our apps. We have two apps, the easyTek app and the touchControl app, and the patient can choose which direction they would like to focus on. Rather than having the hearing aids choose the direction and the span, the wearer can manually override the hearing instruments, focusing only on the right, only on the left, on the front, or on the back. Another wearer control option with Siemens is on board the instrument, for patients that don't have smartphones or maybe don't want to use devices to control their hearing instruments; they just use the on-board control. What you can see here is that with the on-board control, you can narrow the beam of focus for the patient or widen it so they have more ambient awareness. One final algorithm I want to point out: we know there are a couple of manufacturers that do something somewhat similar for windscreen and audibility in wind. What Siemens is doing for audibility in wind is utilizing the exchange of auditory information, so that if there is turbulence blowing over the microphone and that turbulence is affecting the sound quality, the hearing aids will pull the gain down in the affected channels, but this can reduce audibility. Because we have the exchange of auditory information, we can replace the affected channels with the cleaner signal from the other side to maintain audibility even in very windy situations. Because we do it this way at Siemens, we preserve battery life. We can have this algorithm in the universal program, so it simply engages when needed: when the patient walks out into the wind, their audibility is maintained, and when they go into a quiet environment, nothing is happening with the windscreen.
So it's important to note that we have windscreen in all of our instruments, keeping your patient comfortable in noise and reducing that turbulence, but we preserve audibility in our premium-level products by doing the binaural auditory exchange. I did want to take a quick moment to highlight the differences when you have only one microphone on each side, because the features we've been talking about so far are possible only when we have two microphones on each hearing instrument. That's what enables us to go right and left directional and to get that very narrow beamforming algorithm. When we look at instruments that have only one microphone on each side, we can still enhance the patient's understanding in background noise by applying our Wiener filter technology. But in order to apply Wiener filter technology to target speech, you need to be able to tell the hearing aids where the speech is coming from, so that the targeted speech is appropriately left untouched or enhanced, and the competing speech where the patient isn't focusing can be filtered out. With the exchange of auditory information between two one-mic instruments, we are able to get binaural one-mic directionality, which enhances the focus even more than the natural pinna effect you get from wearing these very small instruments. So again, for patients that prefer to wear tiny CICs and IICs, we can take advantage of the advances in filtering technology within hearing aids and 48-channel directionality to help them focus on the speech coming from the front. This advancement was discussed in the Hearing Review in May of 2015, and one of the things you'll find if you take a look at that article is that we discussed the directivity index you can achieve with these instruments. So let's take a quick look at that. What are we talking about when we talk about directivity index?
Well, it's a ratio derived by comparing the output of a hearing instrument for a signal presented at zero degrees azimuth to the average output for signals presented from the other surrounding directions. DI was a measure we commonly used when directional hearing aids first came out, but then it was found that it really does matter in which channels you're using directionality, because different frequencies contribute differently to speech. That led to the advent of an articulation-index-weighted directivity index, called the AIDI. The AIDI calculation gives greater weight to the frequencies that contribute more to speech; for example, the directivity index at 2,000 hertz is given more weight than the directivity index at 500 hertz. Now, because we're utilizing binaural beamforming in hearing instruments, we even need to look at a different way of measuring, which we call a sequential AIDI. Traditional AIDIs measure only one hearing instrument, but with binaural beamforming we need the audio input from the other side, so we still measure the output from one hearing instrument, but we allow the other hearing instrument to feed its binaural processing information to the instrument being measured. So, interpreting SDI and SAIDI measurements: this allows us to quantify the directional performance of modern adaptive hearing aids. It allows us to compare the directional performance of different hearing aid styles and form factors within the same portfolio, but also allows comparison across manufacturers with similar technology. Different manufacturers will do beamforming in different ways, and you may want to look at who can produce the best AIDI measure. It provides a relative prediction of the signal-to-noise ratio benefit of different products.
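The DI and AIDI calculations just described can be sketched in a few lines. The per-band DI values and the band weights below are illustrative placeholders, not the standardized articulation-index weights or any measured product data.

```python
import math

# Sketch of the DI and AI-weighted DI (AIDI) calculations described above.
# DI per band compares on-axis output to the average output over the
# surrounding directions; AIDI then weights each band's DI by its
# contribution to speech intelligibility.

def di_db(front_output: float, surround_outputs: list[float]) -> float:
    """Directivity index in dB: on-axis power vs. spatially averaged power."""
    avg = sum(surround_outputs) / len(surround_outputs)
    return 10 * math.log10(front_output / avg)

def aidi_db(di_per_band: dict[int, float], weights: dict[int, float]) -> float:
    """Articulation-index-weighted DI: weighted average of per-band DIs."""
    total = sum(weights.values())
    return sum(di_per_band[f] * w for f, w in weights.items()) / total

# Hypothetical example: 2,000 Hz carries more weight than 500 Hz,
# as described above, so the AIDI lands closer to the 2,000 Hz DI
# than a plain unweighted mean would.
di = {500: 3.0, 2000: 6.0}
w = {500: 0.1, 2000: 0.3}
print(f"AIDI = {aidi_db(di, w):.2f} dB")
```

A sequential AIDI would use the same weighted average; the difference lies in the measurement setup, where the unmeasured instrument still contributes its binaural processing information to the instrument under test.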
So what you're looking at here is the SAIDI for a one-mic directional product. This shows omnidirectional mode in a CIC compared to the 5.1 dB enhancement you get in the binaural directional mode with that same CIC. In this slide, we're looking at the SAIDI of a RIC product compared to an ITC. The blue line, again, is showing you omnidirectional mode. The red line is showing you monaural directional, because remember, we have two microphones on each of these hearing aids, so we have three things to compare. And the green line is showing you the additional benefit you get from the narrow directionality effect. What's interesting to note is that even though both instruments have two microphones on each side, the ITC does edge out the RIC just a little bit, because in the ITC you also get the person's natural pinna effect. So that completes the review I have for you today on E2E Wireless 3.0. I will turn things over to Ted again, and we'll await any questions that you may have. Great, thanks, Leanne. We're so excited that over 100 of your fellow colleagues have joined us today on this webinar. As Leanne said, we do have some time for questions. If you have a question for Leanne, please enter it in the question box on your webinar dashboard. Leanne, our first question comes from Renee, and Renee asks: can two hearing aid users use their Bluetooth hearing aids to stream from the same TV at the same time? Remember that Bluetooth is one-to-one. I can only speak for the way our Siemens devices work, but I think this transfers to other Bluetooth devices as well. If you have only one transmitter, because Bluetooth is one-to-one, you're not going to be able to have two hearing aid users utilizing the same streamer and splitting that Bluetooth signal.
Now what you could do, though, is if you utilize a transmitter, you can actually get a splitting device and split the signal before it goes to the transmitter: you use a separate set of cords to split the audio out and feed two transmitters, because each transmitter has to stream to its own device. Great, thank you, Leanne. Our next question comes from Anjan, and they ask: is binaural processing likely to be compromised by narrow directionality? I'm not sure I quite fully understand the question. Is binaural processing likely to be compromised by narrow directionality? If you're looking at the natural binaural processing of the individual, we know that with hearing loss, our binaural redundancy and our binaural squelch, those systems are already compromised in the hearing impaired. And what we're trying to do with narrow directionality is improve the signal-to-noise ratio to overcome that impairment in the patient's natural binaural processing ability. You can see the reduction in binaural processing ability in hearing-impaired wearers in the fact that people with hearing loss do markedly worse in noisy situations than they do in quiet. It's not a one-to-one comparison where a given amount of hearing loss means you struggle equally in quiet and in noise; you struggle a lot more in noise, and that's because the binaural processing ability is already compromised within the individual. So narrow directionality is actually going to help that situation by improving the signal-to-noise ratio. I hope that answers the question. Thanks, Leanne. Leanne, our next question comes from Mike, and Mike asks: how do narrow directionality and SSF promote spatial awareness? The way we maintain spatial awareness is that we only have to go directional in the channels that have a poor signal-to-noise ratio.
So if you put all of the channels into a particular area, say focusing on the left, and they're all directional to the same degree, then your patient is not going to be able to perceive where other sounds in the environment are coming from in the same way. Again, because we're processing each one of the channels separately, if you flip back in your handout to figure eight and figure nine, which showed the left focus, you'll see that we were preserving the integrity of the right side. We attenuate it, but we weren't reshaping that right side, and that's what really helps us preserve that spatial awareness. Great, thanks, Leanne. Leanne, our next question comes from Susan, and Susan asks: why do I need directional microphone technology in a CIC when the patient has the pinna effect? Well, that's a great question, because we know that directional hearing aids came about originally because we had moved the microphone out of its natural place. And whether you believe in evolution or divine intervention, our ears are shaped this way for speech. Humans are shaped to hear speech, we have emphasis in certain areas, and the shape of our ears gives us a natural directionality to the front. But today's filtering technologies have surpassed what the shape of the ear can do, by allowing us to filter out competing speech if it's outside of that area. So remember, when you look at that picture with the one-mic directionality, it's because of the exchange of information that we can establish where zero degrees azimuth is. And we need to be able to establish where the front speech is coming from in order to know which speech it's okay to filter out. That's the advantage we get with directionality in a CIC: you can do a little bit more filtering for your patients in those really demanding environments. Great, thank you, Leanne.
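As a toy illustration of why the exchange of information matters for establishing where zero degrees azimuth is: with matched microphones on the two ears, a source near the front midline produces roughly equal levels at both sides, while a lateral source produces an interaural level difference. This sketch is a simplification for intuition only, with an arbitrary 3 dB threshold, and is not the actual localization algorithm used in any product.

```python
def estimate_side(left_db: float, right_db: float,
                  ild_threshold_db: float = 3.0) -> str:
    """Crude direction hint from an interaural level difference (ILD).

    A small ILD suggests the source is near the front/back midline;
    a large ILD suggests it is off to the louder side. The 3 dB
    threshold is an arbitrary illustrative choice.
    """
    ild = left_db - right_db
    if abs(ild) < ild_threshold_db:
        return "front/back midline"
    return "left" if ild > 0 else "right"

print(estimate_side(65.0, 64.0))  # levels nearly equal: likely the midline
print(estimate_side(70.0, 60.0))  # much louder on the left side
```

Only when both instruments share their level information can a comparison like this be made at all, which is why one-mic instruments need the binaural exchange before the filtering can be aimed at the front.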
Leanne, our next question comes from Steven, and Steven asks: how is it that binaural features do not impact battery drain? Well, binaural features do impact battery drain. If you look in your handouts at the very first slide, where it says 10 years of ear-to-ear wireless, you'll see at the bottom of that slide a darker gray shade that shows the battery drain of wireless features and how it has increased over the years. But what we needed to do at Siemens was make that impact negligible. We really wanted to still offer your patients seven to 10 days on a 312 battery, two weeks on a size 13 battery, and give them a rechargeable option. So we've worked really hard to engineer it so that our binaural features are only active when necessary; they turn themselves off when they're not being used. It's very dependent on the situations you're in, and therefore we've been able to really control that battery drain. In our hearing aids, in typical use, you will see that the drain increases from 1.29 milliamps in omnidirectional mode to 1.6 milliamps, which is really manageable as far as hearing aid batteries go. Different manufacturers look at it differently and utilize different technologies, so that is not universal across all manufacturers; other manufacturers will have different amounts of battery drain associated with their wireless. Great, thank you, Leanne. Leanne, our next question comes from Linda, and Linda asks: would you recommend setting up the rocker switch for control over narrow directionality? Being able to manually control narrow directionality is really key for certain patients. I find that there are some patients that need to focus more, even in lower levels of noise, and the way we have our binaural beamforming algorithm set up to focus on the front, it's only going to engage when you're in a particularly difficult environment.
But what one person perceives as particularly difficult might be different for another. So if your patient has auditory processing issues, a traumatic brain injury, early-onset dementia, or any of those things, they might benefit from being put, or putting themselves, into that narrow beamforming algorithm manually. The other advantage the patient has when they do it manually is that they can actually force all of the channels to go narrow. So think about your patient with auditory processing difficulties. They're trying to hear in a meeting, maybe in a conference room, trying to focus on the speech of the speaker in the front, and their work colleagues are whispering, shuffling papers, kind of playing around on the table, and those sounds are distracting. If they use the rocker switch to manually override and force the hearing aid into narrow directionality, they'll see a significant difference in the noise in the room outside the area they're looking at. Thank you, Leanne. Leanne, our last question comes from Jane, and Jane asks: directional mics have the ability to pick up voice over music and enhance the voice. This is a situation where sounds are coming in from different locations. What about differentiating the voice from music coming from one location, such as a television? Will the voice be enhanced and the background noise decreased? That's an excellent question. It's really going to depend on the signal-to-noise ratio of the broadcast itself, and that's going to be handled differently by different hearing aid manufacturers.
You know, oftentimes, the hardest thing to hear these days is TV, because we really don't have control over the broadcast levels and what signal-to-noise ratio they're producing. So if the TV itself is giving you music that is significantly louder than the speech being transmitted, it's very possible the hearing instrument will determine that music is the dominant source. However, the way we have our classification set up, the hierarchy always goes to speech, so if there's speech with music, speech-and-noise is the classification utilized, and we will try to give the patient the best understanding of the speech; whereas when it's just music, we'll use omnidirectional mode and the noise reduction is different. So we're trying to take those things into account, but again, a lot of that relies on the transmission of the signal itself. Great, thank you, Leanne. Leanne, I'd like to thank you for an excellent presentation today, and I'd like to thank everyone for joining us today on the IHS webinar, Advanced Wireless Processing for Enhanced Binaural Hearing. If you'd like to get in contact with Leanne, you may email her at leanne.powers at savantos.com. For more information about receiving a continuing education credit for this webinar, visit the IHS website at ihsinfo.org. Click on the webinar banner, or find more information on the webinar tab in the navigation menu. IHS members receive a substantial discount on CE credits, so if you're not already an IHS member, you will find more information at ihsinfo.org. Please keep an eye out for the feedback survey you'll receive tomorrow via email. We ask that you take just a moment to answer a few brief questions about the quality of today's presentation.
Video Summary
The webinar, "Advanced Wireless Processing for Enhanced Binaural Hearing," introduces the concept of wireless technology in hearing aids and its benefits. The presenter, Leanne Powers, discusses wireless communication between hearing aids using near-field magnetic induction and Bluetooth technology. She explains the features of wireless streaming and programming, as well as the benefits of ear-to-ear communication. Leanne also explores the advancements in wireless technology, such as binaural processing and beamforming algorithms, and how they improve speech understanding and reduce background noise in challenging listening environments. She emphasizes the importance of battery life and energy efficiency in wireless hearing aids. Overall, the webinar provides an overview of wireless technology and its applications in improving binaural hearing.
Keywords
webinar
wireless technology
hearing aids
Bluetooth technology
ear-to-ear communication
binaural processing
beamforming algorithms
speech understanding
battery life