Fitting Methods: Origins and Evolution
Video Transcription
Hey everyone, welcome to the webinar on fitting methods, origins, and evolution of modern practices. We are so glad you could all be here today to learn about the evolution of fitting methods to those most commonly used today, and why positioning aided speech inside one's dynamic range is really the bottom line. Your moderators for today are me, Fran Zinssen, IHS marketing and membership manager, and Joy Wilkins, Director of Professional Development. Our expert presenter today is Ted Venema, PhD. Ted is an audiologist in private practice with NextGen Hearing, Inc. in Victoria, British Columbia. In addition to his many years as a practicing audiologist, Ted is also a seasoned professor, having taught at several colleges and universities throughout Canada. In fact, Ted created Canada's fourth hearing instrument practitioner program while at Conestoga College in Kitchener, Ontario. He is a passionate speaker and he continues to give presentations on hearing, hearing loss, and hearing aids across North America and beyond. Ted is also the author of the textbook, Compression for Clinicians, now in its second edition. We are very excited to have Ted as our presenter today. But before we get started, we just have a few housekeeping items to go through. Please note that we are recording today's presentation so that we may offer it on demand through the IHS website in the future. This webinar is available for one continuing education credit through the International Hearing Society. You can find out more about continuing education credits at our website at IHSinfo.org. Click on the webinar banner on the homepage or choose webinars from the professional development menu on the left side of the page. There you'll find info on this webinar and the CE quiz, the CECs, and how to submit your quiz to IHS. 
We are packing a lot of information into a short period of time today, therefore we have uploaded the webinar slides to the IHS website so that you can follow along and review the content after the presentation is over. Registrants were sent the handouts yesterday morning, but if you haven't gotten yours yet, you can go to IHSinfo.org now, click on that webinar banner, and download the handouts in PDF format to your desktop PC. Tomorrow you will receive an email with a link to a survey on this webinar. It is very brief and your feedback is essential in helping us create valuable content for you moving forward. So today we're going to go through the following topics in a 60-minute presentation: the half-gain rule (the spinal cord of fitting methods), linear fitting methods, compression fitting methods, and DSL versus NAL. At the end we're going to move on to a Q&A session, and you can send us a question for Ted at any time by entering your question in the question box on your webinar dashboard, which is usually located to the right of your webinar screen. We will take as many questions as we can in the time we have available. Now I'm going to turn it over to Ted, who will guide you through today's presentation. Ted? Hello, hello. Hey, how are you? I'm going to tell you a whole bunch of lies about fitting methods, so come on with me and let's have some fun. Let's talk about fitting methods, because in hearing aids there are many. How come there are so many? The reason all these fitting methods emerged is that the ear isn't like the eyeball. When you get a better idea of who you're not, you have a better idea of who you are. Look at the eye here on the left. Most glasses refocus light properly on the back of the eye. If you see my cursor here, the back of the eye, called the retina, changes light into electricity, and electricity is the language that the brain understands. So look at the white bars here.
If light isn't properly focused on that retina, glasses help to refocus light upon the intact retina. That's why glasses usually work quite well for the eyeball. Most vision loss is actually conductive vision loss. Now when you move to the ear, look at the cochlea, this unrolled cochlea on the right. That's, in quotes, the retina of the ear. That's where the hair cells are changing sound into electricity, and electricity is the language the brain understands. In 95% of people, it's the cochlear hair cells that are the cause of hearing loss. That's why fitting methods abound: you've got so many different fitting methods due to the fact that most hearing loss is sensorineural hearing loss. If hearing loss were conductive, we probably would just have one fitting method. We'd have the fitting method. Think of when you've gone to the optometrist or optician. Ka-choong, they throw something in front of your eye. How does that look? Oh, kind of blurry. Ka-choong, another one. Oh, that's blurry again. Okay, back to the first one. To the middle one. Dung! How does that look? Good! Healed! You just pick out your frames. It's a different situation with hearing aids. Look at this slide here. You cannot mirror the audiogram with full-on gain. Can you do it? Nope. Look at the audiogram here on the left. They shouldn't call it an audiogram; they should spell it O-D-D-I-O-G-R-A-M, because it's the oddest graph in the world. All the numbers increase going downwards. That's a little insane in my opinion, so let's go to the right. All I've done is flip things right-side up to help make things more clear. Look at this right-side-up audiogram on the right. The zero decibels is on the bottom. The guy's hearing loss in red is on the right here, rising. And then the loudness UCL, or ceiling of loudness tolerance, is on the top. Now if you look at this bottom white dashed line, five-decibel sounds, can you amplify those by the full degree of the person's hearing loss? Sure.
You can lift it right up where it belongs, over here. And now it's right above the guy's threshold. That's all fine and dandy. Those kinds of sounds you can amplify by the full degree of the loss. But if the input is speech, which is 50 to 60 dB HL, or 60 to 70 dB SPL, and that input comes into a hearing aid's mic, you cannot amplify it by the full range of the loss, because you'll blast the person to kingdom come. The output will exceed the loudness discomfort level. And that's where the half-gain rule came in. This is why: with sensorineural loss, the ceiling of loudness tolerance hasn't changed from normal; the floor is raised. Here's an audiogram on the left, and these are sounds in SPL on the right. Look at this bottom bluish-green curve over here. That really is 0 dB HL. Notice the vertical decibels increasing on the left here. These are dB SPL. So really, in dB SPL, this is your minimal audible pressure curve. This is really the softest level it takes for a human to hear in dB SPL. All we do is flatten that line to make it 0 dB HL on the audiogram on the left. Anyway, note that this guy with the hearing loss, his floor is raised in the high frequencies, but his loudness tolerance, the loudness he can tolerate, hasn't changed. You can see the same thing shown yet another way. Here's a normal dynamic range on the left: soft sounds from 0 to 20 or 40 dB, average sounds from 40 to 80, and loud sounds above that. When you've got sensorineural hearing loss, the floor is elevated, and yet the loudness discomfort level hasn't changed. So the dynamic range with sensorineural loss on the right is squished. It's smaller. And note, it didn't get squished from the ceiling and the floor. It's just the floor that got raised. Carhart in 1946 came up with the first fitting method, a comparison fitting method. He tried several different hearing aids on the same person to see which one was best for speech recognition.
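The squished dynamic range can be made concrete with a line of arithmetic; here is a minimal sketch in Python (the 100 dB HL ceiling is an illustrative assumption, not a clinical constant):

```python
def dynamic_range(threshold_db_hl, ucl_db_hl=100):
    """Residual dynamic range in dB. With sensorineural loss the
    ceiling (UCL) stays roughly where it was; only the floor
    (the threshold) rises, so the range shrinks from below."""
    return ucl_db_hl - threshold_db_hl

print(dynamic_range(0))   # normal hearing: 100 dB to work with
print(dynamic_range(50))  # a 50 dB loss: the range is squished to 50 dB
```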
And he also wanted to find out, well, which hearing aid did the person like the best? Now, remember, the hearing aids in those days were probably body aids. At any rate, there was no prescription target, per se. This really relied on the individual skill of the clinician. But it's hard to transmit or convey these methods from clinic to clinic. And hence, linear fitting methods arose. Now, I'm going to take you back to Lybarger, because this man, rest his soul, recently passed away, but he came up with the half-gain rule, and it has a lot to do with that reduced dynamic range we were talking about earlier. Linear hearing aid circuits were the state of the art: same gain for all input levels. You cannot mirror the audiogram with full-on linear gain, because linear gain gives too much output at high input levels. In other words, the speech coming in, you cannot amplify by the full degree of the loss. Lybarger found people preferred half gain. What he did was stand about a yard in front of a person, and let's say the person had a 50 dB sensorineural loss. His intuition at first led him to think, okay, I've got to amplify by 50 dB. So he'd amplify by 50 dB at full-on volume, and he'd stand a yard away from the guy, and he'd say, now, I'm talking to you at a comfortable loudness level. You turn your volume to where my voice sounds most comfortable. And then he'd look back, and invariably they'd have reduced the volume by half. And that's where he came up with the half-gain rule. And the half-gain rule is still the spinal cord of all fitting methods. It really is. Whether they say it or not, we know we cannot amplify by the full degree of the loss, because most hearing loss is sensorineural. And with sensorineural loss, you've got recruitment. And that means loudness tolerance hasn't changed. The bottom line here is the most incredible to me.
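Lybarger's rule is simple enough to state as one line of arithmetic; a sketch (purely illustrative, not a clinical prescription):

```python
def half_gain_target(threshold_db_hl):
    """Lybarger's half-gain rule: prescribe gain equal to half the
    hearing threshold, because amplifying speech by the full degree
    of a sensorineural loss would push output past the UCL."""
    return 0.5 * threshold_db_hl

# The 50 dB loss in the story above gets a 25 dB gain target.
print(half_gain_target(50))  # 25.0
```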
No method has been proven to be the best for speech intelligibility in noise. It's largely a matter of which religion you subscribe to. I mean, there's two factors here. You've got linear hearing aids; that was the technology of the day. And if you look at the graph here, this is a 60 dB linear hearing aid. Zero in, 60 out; 20 in, 80 out; 40 in, 100 out. The gain is linear up until the maximum power output is reached. And then they used something called peak clipping to limit the output, so that you cannot amplify any further, but then you've got a lot of distortion. Peak clipping makes the hearing aid sound terribly distorted. At any rate, let's move on, because here you've got to take a look at the next issue, reduced dynamic range. And this is what we were talking about earlier. The red line shows normal loudness growth: 10 to 20 is perceived as very soft, 50 to 60 is perceived as comfortable, 90 to 100 is perceived as too loud. Sensorineural loss is the light blue line. Now 50 to 60 dB is perceived as very soft, and yet 90 to 100 is too loud. So there's the narrow dynamic range. Hearing aids need to amplify soft sounds by a lot and loud sounds by little or nothing at all. And that's what compression is all about. And that's why wide dynamic range compression came into the picture in the early 1990s. At any rate, the crucible or birthplace of fitting methods came from these two elements: one, linear hearing aids were the state of the art; two, you have a reduced dynamic range with sensorineural hearing loss. You add those together and what you come up with is basically a half-gain fitting method. And that's the reason you've got that. It's because 1 plus 2 equals 3. That's where it was born. Do you remember good old functional gain? Some of us with emerging gray hairs can. This is the way hearing aids used to be fit. Look at the audiogram here. Typical mild to moderate hearing loss, and you see these little letter A's.
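The input/output behavior described here, a 60 dB linear aid with peak clipping, can be sketched as follows; the 110 dB SPL maximum power output is an assumed value for illustration:

```python
def linear_aid_output(input_db_spl, gain_db=60, mpo_db_spl=110):
    """Linear amplification: the same gain is added at every input
    level, and the output is peak-clipped at the maximum power
    output (MPO), limiting loudness at the cost of distortion."""
    return min(input_db_spl + gain_db, mpo_db_spl)

for level in (0, 20, 40, 60):
    print(level, "in ->", linear_aid_output(level), "out")
# 0 -> 60, 20 -> 80, 40 -> 100; 60 would be 120 but clips at 110
```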
When I first saw those, I kept thinking adulterous, adulterous, adulterous. At any rate, they mean aided thresholds, aided thresholds, aided thresholds. And look what they're doing in a sound field. First of all, you measured the guy's thresholds under headphones, came up with these thresholds for the right ear, let's say, and then you put the person in a sound field, with warble tones coming out of speakers and the person sitting there with the hearing aid in place. You measured the thresholds again, aided. And you hoped that you'd improved the guy's thresholds by half the degree of the loss, because if you did that, you were giving half gain. That meant that with speech coming into the hearing aid, you were putting aided speech outputs nicely in the guy's dynamic range: above threshold (below the threshold line on the audiogram), and yet not quite reaching loudness discomfort levels for speech. So the reason you did this with these little letter A's, getting those aided thresholds, was in order to get this: aided speech output. One last thing. Notice on the left, you're not getting half gain at 250 and 500. The reason: to reduce a bit of the upward spread of masking. So the lows were always given a little less than half gain. At any rate, then came real ear. Around the late 80s, early 90s, real ear came out big time with insertion gain. Remember, the earlier approach was called functional gain; you're measuring it behaviorally, the guy's raising his hand with headphones on, and then raising his hand in a sound field. This is non-behavioral: insertion gain. You stuck a tube in the guy's open ear canal, measured his unaided ear canal resonance, then put the hearing aid on, on top of the tube in his ear canal, and with the same inputs, usually 55 dB SPL, you saw the real ear aided response. And then real ear aided in light blue minus real ear unaided in orange equals real ear insertion gain in black. And you hoped that that hit your half-gain target in red.
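The subtraction at the heart of insertion gain (aided response minus unaided response) is easy to sketch; all the values below are hypothetical:

```python
def insertion_gain(rear, reur):
    """Real ear insertion gain (REIG): real ear aided response
    minus real ear unaided response, frequency by frequency."""
    return {f: rear[f] - reur[f] for f in rear}

# Hypothetical responses to a 55 dB SPL input, in dB SPL
reur = {1000: 60, 2000: 72, 4000: 65}   # unaided: includes canal resonance
rear = {1000: 78, 2000: 95, 4000: 90}   # aided: tube still in place
print(insertion_gain(rear, reur))  # {1000: 18, 2000: 23, 4000: 25}
```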
So essentially, it's not the rules or the fitting methods that changed here. What changed is the method of measurement. With insertion gain, you were doing something non-behaviorally. It was fast. It was quicker. You know, candy's dandy, but liquor's quicker. So I mean, hey, this just sped things up a bit. Now, in the early days of real ear measurement, we looked a lot at outer ear canal resonance. You looked at the natural amplification of soft, high-pitched consonants, which varies a lot among individuals. A typical real ear unaided response is this orange curve over here. By the way, that's why your ear has the shape it does. With this weird concha and the helix and antihelix and the tragus, and the ear canal being an inch long, or two and a half centimeters as we say in Canada, the reason you've got that weird-shaped outer ear is that naturally, like a wine glass, it resonates with the high-frequency consonants. And the high-frequency consonants are soft. So we've got a big resonance boost, a big delight in every bite, between 1,000 and 4,000 hertz. The reason? That's nature's way of amplifying soft, high-pitched consonants. If we didn't speak, we wouldn't have ears shaped like we do. We'd have dogs' or cats' ears, but I digress. Move on, Teddy. Four kids were born from the mother, the half-gain fitting method. Four different targets emerged for the same hearing loss. All of these are variations on the half-gain rule. The first one that came up was the Berger method, out in 1979 from Kent State University. If you remember the song "four dead in Ohio," that's a long time ago. Shows my age. Anyway, by the way, did you know that one of those four killed was a speech-language pathology student? I just thought I'd mention it. It's true, I'm not lying. Rest her soul. Anyway, Berger: half-gain method. Look at what it's asking at 250 hertz, a little less than half gain. At 500 hertz, half gain. At 1,000 hertz, a little more than half gain. At 2,000 hertz, even more than half gain.
At 3,000, slightly more, and at 4,000 hertz, half gain, and then also a bit of reserve gain. He would say if you can raise the volume up, you can get an additional 10 dB of gain. These numbers were based on optimal speech intelligibility, the articulation index. If you look at this articulation index, you'll see the speech banana. In the speech banana are 100 dots. Now, each dot represents 1% of the audible cues required in order to understand what a word was. Notice that most of the dots are clustered around 2,000 hertz. That's why Berger favored the most gain at 2,000 hertz. That was the real linchpin for him, because according to the articulation index, 2 grand was the most important frequency for speech intelligibility. That was his philosophy. Next came POGO by McCandless, 1983: Prescription Of Gain and Output. This asked for lots of high-frequency gain as well, just like Berger did, but a little bit less mid- and low-frequency gain, a little bit less than half gain for the lows to prevent the upward spread of masking. Basically, his method was half gain at all frequencies, 10 dB less than half gain at 250, and 5 dB less than half gain at 500. Again, a reserve gain of 10 dB. Next came Libby (Libby's, Libby's, Libby's on the label, label, label, like Libby's beans) with the one-third to two-thirds gain procedure, similar to POGO, except that for mild to moderately severe loss, you're giving about a third gain instead of a half gain, because according to Libby, clients didn't really prefer half gain. Most people with mild to moderate sensorineural loss wanted a little less than that, so he came up with the third-gain rule, and 5 dB less than a third gain at 250 Hz, 3 dB less than a third gain at 500 Hz. However, there's a wrinkle for severe to profound hearing loss. These people, he found, wanted more than half gain. They wanted more like two-thirds gain. Then came NAL, out of Australia, the National Acoustic Laboratories, by Byrne and Tonisson, 1976.
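POGO's recipe, as described above, is easy to express directly; this is a sketch of the rule as stated here (the 10 dB reserve gain is left to the volume control):

```python
def pogo_target(freq_hz, threshold_db_hl):
    """POGO insertion-gain target: half gain at all frequencies,
    minus 10 dB at 250 Hz and minus 5 dB at 500 Hz to limit the
    upward spread of masking from the low frequencies."""
    gain = 0.5 * threshold_db_hl
    if freq_hz == 250:
        gain -= 10
    elif freq_hz == 500:
        gain -= 5
    return gain

print(pogo_target(250, 50))   # 15.0: half gain minus 10
print(pogo_target(1000, 50))  # 25.0: plain half gain
```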
Now it's interesting, but I find that the two most common fitting methods in the world, NAL and DSL, come from those pinko-commy left-wing countries, Australia and Canada. How do you like that? Now, its theoretical goal: look at the letters on the audiogram. Notice how the vowels and the voiced consonants, z, j, m, d, and all the vowels, are louder and lower than the high-frequency consonants. This is how the normal-hearing person hears speech. These guys thought, that's okay, we don't care. When we aid speech, we're going to make all adjacent speech frequencies sound equally loud to the person. Theoretical goal: make all speech frequencies contribute equally to the overall loudness of speech. So, NAL does not try to preserve the normal loudness relationships among adjacent speech frequencies. Instead, it tries to make all speech frequencies contribute equally to the overall loudness of speech. Then they revised it in 1986, so it became NAL-R, because although the goal of the original NAL was to make all speech frequencies contribute equally, subsequent testing showed that this goal wasn't quite achieved. Amplified speech frequencies were not actually equally loud at MCL. They found the original NAL compensated a little bit too much for the audiogram slope. It also gave a little bit too much high-frequency gain for precipitous hearing loss. Basically, look at this slide. This would be for someone with completely normal hearing, 0 dB HL thresholds. And now it's saying, what would we have to do to the speech frequencies to make them sound equally loud to this person? The original NAL is the white line. They would say, oh, you've got to really cut out the lows, by about 15 to 20 dB here, okay, and a bit from the highs. When they revised it some years later, they thought, well, that was a little bit of overkill.
You don't have to reduce the lows by quite as much, and you don't have to reduce the highs by quite as much in the speech banana in order to make all speech frequencies equally loud. So basically what this means is that the original NAL was giving a little bit more or a little bit less gain to the low frequencies than the revised method did. At any rate, NAL-R asks for three gain calculations, at each frequency in the person's hearing loss, to come up with a target. This is what NAL-R did. The pure tone average (or "parent-teachers association," the PTA): you multiply that by 0.05, in other words, divide it by 20. That's one constant. And then you look at each threshold, and you multiply each threshold by 0.31, or basically divide it by three. And then at each frequency, you'd add a specific, different value in order to make all the speech frequencies equally loud. So here's that number three shown. These are the values. Because of this bottom curve over here, this is what we need to do to speech in order to make all the frequencies sound equally loud. You've got to reduce the lows by about 17 at 250, 8 at 500, 3 at 750, blah, blah, blah. Okay? So basically NAL-R was doing these three things at each frequency in order to come up with its target. There was an additional wrinkle on NAL-R in 1990, and this one shows some cahoots with the cochlear dead spot concept. Basically, there's a lot of verbiage here, but let's just look at what they did. They thought, suppose you have a bunch of people with these white thresholds here, with basically a pretty bad hearing loss in the lows and then a really profound hearing loss in the highs, and then you have a whole group of people with a little bit more hearing loss in the lows but better hearing in the highs, shown in yellow. Now look at what the people with the white asterisks prefer.
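The three-step NAL-R recipe can be sketched like this. Treat the k(f) constants and the exact form of the first term as illustrative: the values below follow the commonly published NAL-R table (17 dB cut at 250 Hz, 8 at 500, 3 at 750), with the first constant taken as 0.05 times the sum of the 500, 1,000, and 2,000 Hz thresholds:

```python
# Frequency-specific loudness-equalizing constants k(f), in dB
K = {250: -17, 500: -8, 750: -3, 1000: 1, 1500: 1,
     2000: -1, 3000: -2, 4000: -2}

def nal_r_target(freq_hz, thresholds):
    """NAL-R insertion-gain target: a constant derived from the
    500/1000/2000 Hz thresholds, plus 0.31 times the threshold at
    this frequency, plus the frequency-specific constant k(f)."""
    x = 0.05 * (thresholds[500] + thresholds[1000] + thresholds[2000])
    return x + 0.31 * thresholds[freq_hz] + K[freq_hz]

flat_50 = {f: 50 for f in K}
print(nal_r_target(1000, flat_50))  # 24.0: a bit under half gain
```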
They tend to prefer more low-frequency gain, even though their thresholds are better in the lows, and they prefer less high-frequency gain, even though their thresholds are worse in the highs. How come? Because they're trying to make use of the hair cells that are surviving, and the highs may be cochlear dead. And so they're thinking, shoot, you know, you can't get blood from a stone. Don't give them the highs. They don't want the highs, you know? The people in the yellow here, they want more highs because they've got better hair cells there. They've got more to use. So the NAL-RP was a little bit of a wrinkle. Otherwise, it's identical to NAL-R. Okay, but now look at this comparison of all these fitting methods. Here you've got a hearing loss, mild to moderate, and look at the variations among the targets for this hearing loss. This is why, to take you back to the beginning, we've got a plurality of fitting methods. We don't have the fitting method, singular, like optometry does. Optometry is quite exact. When you know who you're not, you get a better idea of who you are. This is why we have fitting methods, plural: because of the nature of the damage to the end organ, namely the cochlea. It's sensorineural, and the cochlea is the retina, in quotes, of the ear. And therefore, you have a difference in philosophy of fitting methods, a difference of religion, you could almost say, a difference in faith. It's a difference in belief. I can't emphasize that enough. All the methods would say they're the best. There's really no objective reality here, I find. Now you've got two compression-based fitting methods that really stand out among all the others, and it's DSL versus NAL: NAL-NL1 and DSL version 5. Let's talk about a tale of two cities. It was the best of times, it was the worst of times. Let's look at it. The game changer was DSL, Desired Sensation Level, out of the University of Western Ontario, from Richard Seewald.
And this person, he was actually an audiologist in Nova Scotia, Canada, testing a lot of babies and infants, and he was working with insertion gain at the time, and he thought, gosh, if you can see how your real ear insertion gain matches your target, big hairy deal. What does that mean to the parent or caregiver or teacher who asks, hey, how is this kid hearing speech aided compared to unaided? What does hitting some NAL target or some half-gain target with real ear insertion gain tell you about the aided audibility of speech? Nothing. DSL also took a dim view of insertion gain and the real ear unaided response, because these thresholds are obtained with headphones on, circumaural or inserts, and when you're doing that, you're bypassing the real ear unaided response anyway. So if real ear insertion gain targets are based on thresholds that didn't incorporate one's real ear unaided, open ear canal response in the first place, then why the Sam Hill are you using it when you're trying to fit a hearing aid? If real ear insertion gain is to be used, then your unaided thresholds should have been obtained in a sound field, because at least then you're incorporating one's real ear unaided response. I mean, DSL totally changed the game. It changed real ear measures, too, and brought us to where we are today: the SPLogram, in situ output. We don't measure real ear aided response minus real ear unaided. (This is a typo here; this shouldn't be R-E-U-R.) We no longer do it. The SPLogram method is actually easier than the old method was. You've got just one measure, in situ, which is Latin for in place. You're putting the tube in the guy's ear canal, putting the hearing aid on top of the tube, measuring the aided output, period. Gain is just a means to an end. Output is king. Output rules. Input speech to the hearing aid plus the gain of the hearing aid equals the total decibels that are slamming into your eardrum.
Output is the groceries delivered to your tympanic membrane. Talking about gain is me saying, hey, did you go to the store today to get bread? Yeah, how did you get there? Did you walk, or did you drive, or did you take a bus, or did you fly? Who cares how you got there? Did you get the bread? Output is: did you get the bread? Gain is a means to an end. Gain is yesterday's news. Look at this SPLogram. On the bottom, here you can see normal hearing in dB SPL, minimal audible pressure. Here you see your decibels increasing from zero up to 120 at the top. Here you see the person's hearing loss in red, the right ear thresholds. Finally, the graph has entered the real world of all other graphs in the world, where decibels go up as you rise on the vertical axis, and frequency goes up as you move to the right on the horizontal. At any rate, the asterisks are loudness discomfort levels, or UCLs. And here you've got three aided outputs: the orange, yellow, and blue. Now the soft one represents the output for soft input speech. And you want to lift that up where it belongs. You want to raise it above the threshold, but barely, so that aided soft speech is barely audible. Well, to a normal-hearing person, soft speech is barely audible too. An average input speech of 65 dB SPL: make sure that's all above the thresholds, especially at 500, 1,000, 2,000, and 4,000. Especially at Matthew, Mark, Luke, and John. Make them all above the thresholds. And then loud input speech of 80: make darn sure that the output of that doesn't ever hit those asterisks. Then, like an old salmon, you can swim up the stream to die. You've done what you're supposed to do. Now looky, looky at what we've done. You know, DSL has raised speech into the auditory area. The long-term average speech spectrum can be displayed, and it basically refers to unaided speech. And audibility of unaided speech is the main DSL focus. It was meant for kids at first. The SPLogram shows unaided speech and unaided thresholds.
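The "output is king" arithmetic can be sketched as a simple check; all the levels below are hypothetical, in dB SPL at the eardrum:

```python
def aided_speech_ok(speech_db_spl, gain_db, threshold_db_spl, ucl_db_spl):
    """SPLogram logic: input speech plus gain is the output at the
    eardrum. Aided speech should land above the threshold (audible)
    but below the UCL asterisks (tolerable)."""
    output_db_spl = speech_db_spl + gain_db
    return threshold_db_spl < output_db_spl < ucl_db_spl

# 65 dB SPL average speech with 25 dB of gain, against a 70 dB SPL
# threshold and a 105 dB SPL UCL: 90 dB SPL sits in the dynamic range
print(aided_speech_ok(65, 25, 70, 105))  # True
# 80 dB SPL loud speech with 30 dB of gain would hit the asterisks
print(aided_speech_ok(80, 30, 70, 105))  # False
```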
And you can also see aided speech. It's a great counseling tool for parents and teachers. Finally, you can explain things in a way that people can understand. Unaided speech normally has a 30 dB dynamic range in and of itself, but we can't get into that at this time. It's just going to have to be the purpose of a future webinar, which I hope we can do, on real ear measurement. Hope to see you all again in May, perhaps. But that's mañana. Let's stick to the topic here. So unaided speech, the dotted lines. Look at the SPLogram. It's a bit busy, but stay with me. You've got normal hearing on the bottom, minimal audible pressure. You've got the guy's thresholds here, the Os. Here's unaided speech splashed across, and you can see that this guy can hear the vowels, the low frequencies, but cannot hear the highs, because they are below his hearing levels. And his loudness tolerance levels would be up here with the asterisks. So what you want to do is lift that up where it belongs. I know, keep my daytime job, Theodore. Raise the speech up above the guy's hearing levels and keep it nestled in the guy's dynamic range. That's the whole idea here, like the old ad: fighting cavities is the whole idea behind Crest. So, I mean, when you're looking at speech mapping, that's really what you're trying to do. You've got a counseling tool: normal hearing, here's the guy's hearing loss. This is a slide from Audioscan out in Dorchester, Ontario, another good old Canuck company making real ear equipment. And then you're looking at some targets where you'd want to put speech, and that's what you're trying to do. So essentially, here's unaided speech, and you want to lift that and put it so that its mean or average, this line, is hugging or nestling these plus signs, essentially, so you can do unaided versus aided. Now, NAL-R had a different purpose, and NAL-NL1 did too, and so does NAL-NL2.
Remember, DSL wants to make all of these sounds audible, and DSL largely kept or preserved the loudness relationships among adjacent speech frequencies. Of course, DSL offered more than one target because it's a compression-based fitting method, so it would offer a target for 50 dB inputs, 70 inputs, and 90 or 80 inputs. But then Seewald put the old version, DSL 4, to death, and basically it's changed. It's now DSL 5, but I'll get to that in just a minute. NAL-NL1 emerged in 1997, and that was a hearing aid fitting method intended for compression; NAL-NL1 is how NAL-R evolved for compression. And recall the DSL goal: make all these sounds audible. Well, NAL does not try to preserve the loudness relationships among these speech frequencies. NAL strives to make all speech frequencies contribute equally to the overall loudness, so the NAL philosophy is: don't amplify the low ones by as much as DSL does. NAL-NL1 is a compression-based fitting method. It shows more than one target, just like DSL does. Compression gives different gain at different inputs, and this is how, mainly, NAL-NL1 differs from NAL-R. NAL-NL1 targets also end in thin air. This was dealing with the cochlear dead spot concept. Something can be audible but not understood. Just because it's audible, it may be physically measured above your thresholds, but what's the effective audibility? What info can the person extract from the sound even though it's audible? So that's NAL-NL1 versus NAL-R; let's compare the targets. Here, for some hearing loss, might be a target for NAL-R or NAL-RP. NAL-NL1 came up with three because it's a compression-based fitting method. And note it will have a different target for 50 inputs, a different target for 65 inputs, and another one for 80. And the gain is going down, down, down as the inputs go up. That's because of recruitment.
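The level-dependent gain that distinguishes a compression-based method like NAL-NL1 from linear NAL-R can be sketched with a generic wide dynamic range compression rule; the kneepoint, ratio, and gain values below are hypothetical illustrations, not NAL's actual formula:

```python
def wdrc_gain(input_db_spl, gain_at_knee=30.0, ratio=2.0, knee_db_spl=50.0):
    """Generic wide dynamic range compression: below the kneepoint,
    gain is constant; above it, every 1 dB increase at the input
    yields only 1/ratio dB more output, so gain falls as input rises."""
    if input_db_spl <= knee_db_spl:
        return gain_at_knee
    excess = input_db_spl - knee_db_spl
    return gain_at_knee - excess * (1.0 - 1.0 / ratio)

for level in (50, 65, 80):
    print(level, "dB in ->", wdrc_gain(level), "dB of gain")
# 50 -> 30.0, 65 -> 22.5, 80 -> 15.0: gain goes down as input goes up
```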
And there's no real difference, except the NAL-NL1 targets end in thin air here, because frankly they couldn't care less about what would happen in the very highs, where the hearing loss is usually the worst. Where the hearing loss is the worst is probably where the hair cells are half dead, so why amplify those anyway? But notice that the 65 dB target is largely the same as this one. At 1,000 Hz, one is asking for about 15 dB and the other about 17; at 2,000 Hz, one is asking for maybe 20 and the other about 22; at 4,000 Hz, about 25. So slight differences, but not a lot. At any rate, NAL-NL2 came out a few years later, and this came with new data on how effective audibility actually varies across the frequencies. Note that these fitting methods keep evolving. They keep changing. There are always new wrinkles. Overall loudness was found to be a bit too much with the NL1 fitting method. Subjects preferred a few dB less gain, especially for mid input levels, and even more at high input levels. And this was found especially for the mid frequencies. So look at the comparisons. NAL-NL2 tends to give a little bit more in the lows and highs than NL1 does. It's a bit of a flatter gain. Look at this. NAL-NL1 is the light lines. NL2 would be the dark lines. And these are going to be the outputs; no, no, I'm sorry, the insertion gain. So look at NAL-NL1: the gain for soft inputs, for average inputs, for loud inputs. And now look at the dark lines, NL2: the gain for soft inputs and average inputs. Notice it's less in the mids, and for loud inputs it's less in the mids than NL1, but more in the lows and highs. Same for a flatter loss. NL2 is the black. NL1 is the green. And notice how they differ. They keep evolving, these methods. It's really quite amazing. I'm saying this to take you all home now. We're almost done here.
Let's compare these new methods. Here's a loss on the left. The old DSL4 compared to NL1: DSL4 asked for more in the lows, more in the highs. NL1 didn't care; because of the cochlear dead spot concept, it's asking for a little bit less in the highs, where the hearing loss would be the worst. And now DSL4 versus DSL5. This really morphed. I mean, my God, it's almost like NL1 met DSL4 in the back alley, and I think DSL4 got the crap beaten out of it. I don't know. Something happened, because look at this. We're going to do some comparisons here. DSL5 for a kid versus DSL5 for an adult, NL1 versus NL2, and then DSL5 versus NL2. Check this out. Here's a sloping loss. Now, look at this. This is going to be DSL5 for a kid, okay, the targets. This is unaided speech in the thick green here, the LTASS, long-term average speech spectrum. Here's normal hearing on the bottom. Here's the guy's hearing loss and loudness discomfort. The plus signs are the targets for DSL5 for a kid, okay, when you're looking at this. Now, if you're looking at DSL5 for an adult, notice how the targets go down, okay. Average inputs, speech, 65 dB SPL inputs. The targets have lessened. Kid, adult. Kid, adult. Okay, that makes sense. Kids haven't got as much linguistic or language ability, so they need all the sounds they can get. Remember, DSL emerged as a kid-based fitting method. So now, where am I over here? I got screwed up here. DSL5 adult. Now look at NL1. Check this out. NL1, again, unaided speech. Here's NL1. Here's DSL5 for an adult, NL1. DSL5, NL1. Notice how NL1 asks for a little bit less in the highs. NL1, a little bit less, just to give the variation. A man on a flying horse can hardly tell the difference among these. Now look at NL2. NL2 versus NL1. NL2, NL1. NL2, NL1. NL1's asking a little more in the mids. NL2's asking for a little less in the mids. Don't get confused about this thick green stuff here. Remember, that's just unaided speech.
These plus signs are saying where you want to put this unaided speech. I'm trying to come home to a point here. Here's DSL5, adult, for soft inputs in the pink, and then moderate inputs, average inputs, in the green. Again, don't worry about this purple or pink stuff here, or this magenta. Don't worry about that. Look here, adult. Here's DSL5 for an adult. Here's NL2 for an adult. Soft in the pink or magenta, average in the green. DSL5, NL2. Not a lot of difference. Not a lot. Probe tube measures and fitting methods are mainly NL1 or NL2 and DSL5. This should read NL2 and DSL5. Those are the two most popular ones today. Manufacturers also have proprietary fitting methods, and these tend to roll off more of the highs. Why? Well, probably to reduce a few returns for credit. I don't know. Hey, to cut out some of the highs, just to make it feel a bit better for the person, who knows? Software almost always overestimates the gain and output, paints a rosier picture. Real-ear measures almost always show a bit less than predicted on the software, but again, we should have another webinar on this. We really ought to, because we can go over this again in detail, and I can showcase studies of this. It's a trip. But here's my final slide. I did a webinar in 2012, and I know this might freak some people out, but fitting methods for adults, I'll stick to adults, are they not becoming islands in the setting sun? Honestly. Because SPL-ograms in real ear and mapping speech is the bottom line for everyone. I'm loosely quoting Paul Simon here: islands in the setting sun, faith is an island in the setting sun, proof has become the bottom line for everyone. Well, I'll say fitting methods. You know what? We've got these SPL-ograms and in-situ output on our real-ear systems. We are doing what the original founders were trying to do. They just didn't have the technology to do it. They just had functional gain. They had linear hearing aids. They didn't have real ear with SPL-ograms.
Now that we've got real ear with SPL-ograms, we can see what it is that they were trying to do: map speech with the half-gain rule. That's why the half-gain rule popped up in the first place. The idea was to map speech into the dynamic range of the person. And really, now that we can visualize that or see that, you can just memorize this mnemonic right here. Soft inputs: let the outputs hug the hearing thresholds. That means the outputs are half audible. Average inputs: make sure all the outputs at 5, 1, 2, and 4 (500, 1,000, 2,000, and 4,000 Hz) are about a third above the thresholds, into the dynamic range. Loud inputs: make sure that those never reach the loudness discomfort levels, the asterisks at the top of the SPL-ogram. When you've done that, you're done. And you'll find that there's not a whole lot of difference between DSL-5 today and NAL-NL2 today. A man on a flying horse would be hard-pressed to recognize the difference between them. Hence, I end this seminar. It's been a pleasure. Arigatou gozaimashita. Thank you very much. Ciao. Thank you, Ted. Okay, everyone, this is Fran. And Ted, great presentation today. I think a lot of people are probably still processing and trying to keep up with all the great information. So we'll let everyone kind of gather themselves as they start to submit their questions. I'm really excited, everyone, that almost 300 of you joined today. So imagine that, 300 of your colleagues from all over the world are sitting in on this presentation today with Ted. So really good stuff. So if you have a question, please do type it in for us in your question box on the webinar dashboard, and we'll try to get to as many as we can. And I have a question for you, Ted. Yeah, sure. We have some students and apprentices attending today, so people who are new to the profession. What do you think is the most important takeaway from today for people who are new?
The most important takeaway, in my opinion, is always verify with real ear what the manufacturer's software is telling you you're getting. The software is meant to get you into the ballpark, but always remember, manufacturers are just helping you out. They're not the clinicians. They're just giving you a product called a hearing aid. You are the clinicians, and it's your job to verify that the manufacturer's software results are actually being obtained by the hearing aid in place on the client's ear. So remember, manufacturers aren't fitting your hearing aids. You are. And that's why I'm a huge advocate of real ear measurement. Thank you, Ted. And everyone, we're going to see if we can get Ted back on with a real ear measurement webinar sometime this year, because I'm sure a lot of people would love to attend that and learn more about that. Ted, we do have a question from Robin. Robin wants to know, what is the difference with real ear and live speech mapping? The same thing. Live speech mapping is, okay, the difference is, many people use a taped version of a speech input. You know, Audioscan uses the carrot passage, and other people have different ones. The idea is it's phonetically balanced. In English, that means all the sounds of speech are in that particular taped passage, and they're in the proportion in which they are found in English. That's what phonetically balanced means. Like on your phonetically balanced word lists, all the sounds of English are in those word lists in the proportion in which they are found in English. But live speech is nice to do, too, because it's more real. You can take the spouse, you can take the loved one, who can talk a meter away, or 18 inches away, or three feet away from the person, and you can measure the actual output on the SPL-ogram. And so that brings more of a reality to your fitting process.
So you can use either prerecorded speech samples, as is done in lots of real ear systems, or you can use live speech. And that just brings a touch of humanity into it. So that's a good method. There's really no difference. It's just that one's live and one's taped and prerecorded. Just remember, the prerecorded ones are usually phonetically balanced. I can say that only for the one with which I've got experience, and that's the Audioscan with its carrot passage. So again, always remember: recorded should be phonetically balanced, like your speech discrimination word lists are. Live may not be completely, but still, it's live and it's real. So it's neat. It's a nice wrinkle. Okay. Thanks, Ted. So the questions are starting to pour in, and let me see if I can grab a few here for you, Ted. So Madison wants to know, she asks, what do you think about real ear measurements through the hearing aid in situ versus real ear measurements through external equipment? Real ear. I'm not sure I get the question. I'm not sure I understand it. In situ means the hearing aid is in place in the ear. In my opinion, that's by far the more important one to do: you've got the tube in the guy's ear canal, and make sure that the tube extends about a quarter inch beyond the end of the hearing aid. And, you know, it's about 29 millimeters in from the tragus, let's say, and then the hearing aid is in place. In my opinion, that's what you're doing when you're doing real ear. To be perfectly honest, I'm not sure what Madison means by the former. Maybe you can clarify that? I'm not sure. I just don't quite understand that. No problem. Madison, if you're still listening, go ahead and, you know, send us another question and sort of clarify your question. We'll see if we can get Ted to answer that. So Pat wants to know if you recommend using the SI, and forgive me if I'm not reading this correctly, but it looks like using the SII as a metric for goodness of fit. Yeah, it's a good method.
SII is actually, I must admit, I'm a bit old school and I'm not looking at it enough, but actually there's a lot of research behind SII. If you've got an SII of, say, 75, that's pretty good. If you've got an SII of 80, that's even better. So, I mean, you get some good speech intelligibility indexes. That's a good measure. It's excellent. Okay. Emily is asking, many manufacturers have frequency compression and frequency transposition features. How do you suggest verifying output if you're utilizing these features? Go ahead and verify the output and then look at your SII. I'd do it anyway. The nice thing about real ear is it just literally shows you, without any lies, without any bias, it just shows you: what is above the guy's thresholds? So, compare. Do the frequency transposition or the frequency compression and then not, and compare and find out what you are getting. And look at your SIIs. What are you getting with and without? As a clinician, make that decision yourself. You know, most manufacturers are leaning towards some version of frequency compression, and they're doing it for good reason. There is research behind it. As I say, verify with real ear and look at Matthew, Mark, Luke, and John: 5, 1, 2, and 4. With average speech inputs, are those all above thresholds, with or without frequency compression? That's the bottom line, really. Okay. Thanks, Ted. David wants to know, compared to DSL-4, what... Excuse me. Compared to DSL-4, the targets for the adult version of DSL-5 usually show what? For average speech inputs of, say, 70 dB SPL, you'll notice the targets with DSL-4 are almost at the midpoint, midway in the dynamic range. DSL-5, they're less than a third. So, for average speech inputs, DSL-4 puts the targets midway in the dynamic range. DSL-5 lowers those targets to a third or even less. There's the biggest difference. And I'll say that's for adults.
If you want to really see this, check out an old Audioscan RM500, the old black suitcase box, that has DSL-4 on it. And just dink in an audiogram, and then check out their SPL-ogram, and you'll see the DSL-4 targets for yourself. It's really a trip. It's quite something to see. DSL-5 backed off considerably. And that's why there's very little difference now between DSL-5 and NAL. Really. Now, today, barely. Okay. Thank you, Ted. Rick wants to know, how much credence should the spouse's voice be given if she mumbles? Not much. Mumbling isn't nice. It's common sense. If somebody's mumbling, it's not very clear. If you listen to the carrot passage or pre-recorded speech samples, they're usually meant to be fairly clear. You're trying to get what's going to be happening with average speech, spoken at a regular loudness level, with average clarity. Yakety yakety yak. Hey, how are you? Give me a beer. You know, yakety yakety yak, your mother wears army boots. You know, you're just trying to be normal. So I mean, don't stretch or push the envelope. But I would use the pre-recorded passages myself. That way, you've got a constant. You're always using a constant from person to person to person. Okay. So we have another question. In your opinion, why do some hearing aid manufacturers' software still use DSL-5 as the default fitting formula for pediatrics? That's good. It should be, as long as they're using the pediatric version of DSL-5. That's good. I mean, remember, DSL emerged as a pediatric-intensive fitting method. Pediatric, you know, little beings, little babies cannot speak for themselves. And so really, I mean, hey, pediatric audiologists are actually about the best audiologists out there. Think of a transport truck driver compared to someone who can drive a Volkswagen. I mean, if you can test a baby, you can test anybody. But if you can test an adult, that doesn't mean you can test a baby. DSL-5, I would tend to, what do you call it?
My bias is to DSL when it comes to the pediatric population. They were the mother of fitting kids, and they know best, I think. So I tend to have a bias toward them. It's because I lived about 60 miles away from London, too. I've got a bit of a soft spot for London, Ontario, Canada that way. Okay, Ted, Paul wants to know, do you have to turn off the compression features with real ear speech mapping to determine output matched to target? No, you shouldn't. Because you want to find out what the hearing aid is doing at its regular settings. You may have to reduce, let's say, some compression settings, or you may have to raise the MPO, or you may have to tweak the output to some degree on the software in order to match the targets on real ear. But it doesn't mean you shut off all the bells and whistles on the hearing aid in order to match the targets, and then once you've matched the targets, turn all the bells and whistles back on again. Because that defeats the purpose. The hearing aid using its bells and whistles is going to be the way it's going to be operating in everyday life. And what you're doing with real ear is finding out what the hearing aid is going to do to best approximate its use in real everyday life. Thanks, Ted. Let's see, we have a question here from Dave. He wants to know, when performing REA measurements, why do they recommend ILTASS versus ICRA or ISTS? When performing real ear aided, why do they recommend LTASS and not ICRA? It depends. I imagine if you go to Europe, you'll see ICRA a lot. International Collegium of Rehabilitative Audiology, I believe. It's not bad. It's just meant to be another form of coming up with a long-term average speech spectrum. I kind of forget really what the differences are. Again, I'd probably have to look at my hearing aid notes. But ICRA came out of Europe. And if you happen to cross the pond over the sea, you'll probably see ICRA being used more.
I don't find a huge difference myself. You're looking at the big picture. Most Canadian and U.S. real ear systems tend to use LTASS. But if you're using ICRA, it doesn't mean you're a bad person. Okay, Ted. We'll take maybe one or two more questions. Okay, Brad wants to know, do you know why NAL-NL2 says to not amplify high frequencies as much? Probably because people didn't find that those lent as much toward effective audibility. You know, like I say, NL2 came up after NL1 because further research showed that with NAL-NL1, aided adjacent speech frequencies were not, after all, really contributing equally to the overall loudness of aided speech. Again, I'll say: research subsequently showed that with NL1, all adjacent speech frequencies, amplified, were not in truth actually contributing equally to the overall loudness of speech, and the effective audibility of speech. Therefore, some of the highs were changed. So that's just their own internal research done in Australia. That's why NAL-R evolved from NAL. Just subsequent tweaking. And so they just changed things a bit. They probably found that NAL-NL1 gave too little in the highs for speech intelligibility, so they probably jacked it up a trifle. A good one to ask, though, is Harvey Dillon. Just email Harvey Dillon. Serious as a heart attack. He loves answering questions. Tell him Ted sent you. Ha! I'm just kidding. No, I'm serious. When I get in these situations, I usually email the source. It's really helpful. But I digress. This has been great. Thanks. You're welcome. Okay, we have time for one more question, Ted. And Dan wants to know, he says, let's talk about reverse slope or low frequency hearing loss. All of the fitting methods blow up due to upward spread of masking. What tricks do you use besides using the threshold equalizing noise or TEN test for fitting those losses? I tend to be very careful amplifying reverse losses.
I try not to amplify the lows as much as anyone might think. Low frequency, reverse hearing loss, let's say, for example, 50, 60 dB in the lows and at 1,000, rising to normal hearing, you never know. The person may be deaf as a post in the low frequencies. Reverse hearing losses are kind of scary, freaky in a way, because they might masquerade. Like a profound low frequency hearing loss might come masquerading to the party as actually a moderate reverse loss. The reason why is because of the traveling wave in the cochlea and the shape of it, and I won't get into that here. But I tend to focus on the mid to high frequencies with a reverse loss and try to use something, you know, according to DSL-5 or NL-2. I'll try and work with those fitting methods as to what they might suggest to amplify for the mids to highs. I tend to back off in the lows at 250 and 500 because of the cochlear dead spot concept. And I never use the TEN test. It's not a bad test, but, you know, if you play a 250 Hz tone to that guy under headphones and ask him what it sounds like, and if he says the sound sucks, basically he's got a dead spot there. So I tend to, you know, anyway, I won't talk too long. But focus on the mids to highs. Great. Thank you, Ted. Well, everyone, that's going to wrap it up for us. We're just going to close out now. Again, thank you, Ted, so much for an excellent presentation today. And we want to thank everyone for joining us today on the IHS webinar, Fitting Methods: Origins and Evolution of Modern Practices. If we couldn't get to your question or if you'd like to get in contact with Ted, you can reach him at tvenema at nextgenhearing.com, and his email address will be on the next slide. And for more information about receiving a continuing education credit, please visit the IHS website at ihsinfo.org. You can click on that webinar banner on the homepage or you can find out more on the webinar tab under Professional Development.
IHS members do receive a substantial discount on their CE credit, so if you're not already an IHS member and you'd like to get a continuing education credit for this presentation, you can find out more on how to do that and how to join at ihsinfo.org. Please do keep an eye out for the feedback survey you'll receive today or tomorrow via email. We ask you to take a moment to answer just a few brief questions about the quality of today's presentation and that would really help us out a lot. And we thank you again, everyone, for being with us today. Great turnout and really wonderful feedback so far. And we will see you all at the next IHS webinar.
Video Summary
This webinar focused on the evolution of fitting methods for hearing aids, specifically the distinction between linear fitting methods and compression fitting methods. The presenter discussed the importance of positioning aided speech inside one's dynamic range and explained the concept of the half-gain rule. Various fitting methods were then discussed, including Berger, POGO, Libby's, NAL, and DSL. The differences between DSL-4 and DSL-5 were highlighted, as well as the similarities and differences between NAL-NL1 and NAL-NL2. The presenter emphasized the importance of verifying fitting methods with real ear measurements to ensure accurate and effective hearing aid settings. Finally, the presenter discussed the use of frequency compression and frequency transposition and the need to verify output when using these features. The webinar concluded with a question and answer session.
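The half-gain rule and the dynamic-range placement the presenter describes can both be sketched in a few lines. This is a hypothetical illustration, not a clinical prescription; the thresholds and loudness discomfort levels below are made-up example values (think of them as dB SPL points on an SPL-ogram):

```python
# Made-up example thresholds and loudness discomfort levels (LDLs),
# in dB SPL, at the classic 500 / 1,000 / 2,000 / 4,000 Hz points.
thresholds = {500: 50, 1000: 60, 2000: 70, 4000: 80}
ldls       = {500: 100, 1000: 105, 2000: 105, 4000: 105}

def half_gain(freq):
    """Half-gain rule: prescribe gain equal to half the threshold."""
    return thresholds[freq] / 2.0

def average_speech_target(freq):
    """Place aided average speech about a third of the way into the
    dynamic range above threshold, roughly the placement the presenter
    attributes to DSL-5 and NAL-NL2 for adults."""
    dynamic_range = ldls[freq] - thresholds[freq]
    return thresholds[freq] + dynamic_range / 3.0

for f in (500, 1000, 2000, 4000):
    print(f, half_gain(f), round(average_speech_target(f), 1))
```

Loud inputs would then simply be checked against the LDL row: aided loud speech must stay below those values at every frequency, which is the third leg of the mnemonic given in the talk.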
Keywords
linear fitting methods
compression fitting methods
dynamic range
half-gain rule
Berger fitting method
POGO fitting method
NAL fitting method
DSL-4 fitting method
NAL-NL1 fitting method
real ear measurements