Hear Here Podcast – Season 1, Episode 3: Dr. Rene Gifford
[Music]
Karen Gordon
Hello and welcome to the Hear Here podcast. Our goal with these discussions is to explore new ideas that may help people use devices like cochlear implants to hear better.
[Music]
Karen Gordon
Alright, trusted team, uh, Dr. Blake Papsin and Dr. Sharon Cushing. Uh, we’re, um, talking, uh, with Dr. Rene Gifford, who is a research audiologist and has an amazing laboratory at Vanderbilt University, and I’m really thankful that, uh, Rene was able to connect with us to do this podcast.
[Music]
Karen Gordon
You guys know Rene?
Sharon Cushing
Absolutely.
Blake Papsin
Yeah, I’ve always been impressed that her stuff is so applicable clinically; it just, you know, seems to answer day-to-day questions. I’ve always admired that, and that’s a great team down at Vanderbilt.
Karen Gordon
For sure, and I really loved her description of why she became an audiologist. What you’re going to hear from her is how much her clinical, um, interests and skills are guiding her research.
[Music]
I am thrilled to welcome Dr. Rene Gifford to our Hear Here podcast. Welcome, Rene!
Rene Gifford
Thank you so much for having me!
Karen Gordon
So, Rene, tell me what motivated you to become an audiologist and a hearing scientist?
Rene Gifford
I was raised by my grandparents, and they were, you know, of the greatest generation; my grandfather fought in World War II. He was, um, a paratrooper in the 82nd Airborne Division and had actually been shot at, uh, as he was parachuting in over the Battle of Sicily. So, he received a Purple Heart, um, and one of the wounds he brought back home with him wasn’t so much physical as it was, uh, you know, hearing loss. He was almost completely deaf in one ear, and then he had a very precipitously sloping high-frequency hearing loss in the other. He was told, “There’s really nothing medically or surgically that we can do for your hearing loss.” He only tried a hearing aid or two, and, you know, the technology back then just wasn’t what it is today, so my grandfather didn’t really benefit from it.
It affected, I would say, pretty much every facet of his life. Um, I mean, he didn’t let it slow him down, you know; he was a civilian engineer at Lakehurst Naval Base, he had a great number of friends, great guy, um, very communicative, but he had to be looking right at you. And, you know, I remember from a very young age, if I needed anything when I was little, I would grab his chin and kind of pull it over, like, “Look at me when I’m talking to you!”. Um, and I-I think I got, from a very early age, pretty good at facilitating communication for individuals with hearing loss, which of course has helped me today in my career.
Karen Gordon
Thank you for sharing that! How did you become interested in cochlear implant research?
Rene Gifford
I-I actually didn’t start working in this field, at least not, uh, cochlear implants per se. So as an undergrad, I did actually do a little bit of volunteering in Michael Dorman’s lab. You know he’s a speech scientist, cochlear implant researcher, um, you know, instrumental in some of the earlier, uh, studies looking at, uh, cochlear implant efficacy and signal processing, and so forth. I found it interesting and fascinating, but I was really more just interested in “how does the auditory system work?”
So, I went on, got my Master’s degree in Audiology, and went back and got a PhD. I actually did my PhD work with, um, normal-hearing adults, looking at non-linear cochlear processing and the effect of aging on, on the auditory system. But I was really just sort of craving getting back to, “How can we help individuals with hearing loss? What can we do to really help them with communication?”
I remember, toward the end of my PhD work, um, I was talking with Michael, and he started mentioning, you know, they’re implanting patients with cochlear implants and able to preserve some of that low-frequency hearing, if they have it, and that is all he had to say. I was pulled in hook, line, and sinker, and I was like, “I have to do my post-doctoral fellowship in this lab.” I mean, it was just fascinating, and that continues to motivate me.
[Music]
Investigating hearing restoration
Karen Gordon
I want to talk first about your interest in hearing restoration. This is such an amazing topic when you think about where we’ve come from in cochlear implantation, ’cause we started, um, by really providing cochlear implants only to those people who had no residual hearing in either ear. Maybe give us a little sense of why we’ve come to providing cochlear implants to people who have some residual hearing in one, or both, ears?
Rene Gifford
I mean, we really have come a long way, haven’t we? These patients aren’t traditional, conventional cochlear implant candidates, but they’re able to bridge those two worlds and really benefit from that electric and acoustic stimulation. Right, with their acoustic hearing they can detect sound, they can hear that temporal envelope, they might even be able to get a little bit of periodicity information, but all of that high-frequency information that really contains, um, the meaningful, crisp fricatives and affricates, the things that really make speech clear and meaningful? It’s just not there.
Challenges of Cochlear Implantation in patients with residual hearing
Karen Gordon
It’s amazing that, with longer experience with cochlear implants, we’ve begun to see children using their cochlear implants with more ease of listening than children who, we thought, had so much more residual hearing. This made us think that those hearing aid users might benefit from cochlear implantation. But how can we make sure that the residual hearing these individuals have won’t be compromised by the surgery that’s involved in cochlear implantation?
Rene Gifford
Oh yeah, so, this is, of course, the million-dollar question, right? We try to preserve whatever hearing they have, no matter what. And not even just for hearing purposes, but, you know, for preservation of those intra-cochlear structures, because of course we know a proportion of people are going to require revision surgery over time, especially our little ones who are getting implanted before, you know, 12 months of age and are probably going to be living 100 years. I would presume they’re going to need at least one revision, if not a couple, over a lifetime.
Our surgeons try to approach every surgery as if it were hearing preservation surgery, um, you know, so, um, minimally traumatic surgical techniques, um, minimizing drilling into the otic capsule, um, trying to go through the round window whenever possible, just careful, slow, steady insertion of the electrode array… But, of course, you know, there are times I’ve been in the OR where a surgeon will say, “Well, I guarantee I didn’t preserve hearing on that one,” you know, like, for example, maybe the electrode array flipped out and they had to put it back in, and the hearing was perfect, right? And conversely, I’ve seen cases where there weren’t any problems whatsoever, they had steroids, they did everything, and they lost their hearing.
In fact, some of the children that I see, um, in a clinical capacity and in the laboratory, who have this high-frequency hearing loss and who have cochlear implants with hearing preservation, are our highest-performing patients.
[Music]
Blake Papsin
Families come to me and say, um, “Will you preserve hearing?” I say, “No.” I mean, “I might, but it won’t be there when he’s 40 or she’s 40,” and, and it shocks them, and maybe they go see Sharon; she promises to preserve it…
Sharon Cushing
No! (laughter)
Blake Papsin
…but I don’t. I promise, I promise to do everything possible to preserve the cochlea and its integrity, and oftentimes we do, you know, we do preserve the hearing. But the families come and say, “Will you preserve the hearing?” and the answer is, “No. I’ll try, but no, that’s not the game here; the game is, the child needs the cochlear implant.”
But it’s interesting, so I, I am, I’m fascinated and I love the work that Rene and her team do, because I can’t do it. Frankly, I’m jealous, because, of course, of the thousands of kids that we’ve looked at here, only a fraction of them have usable low-frequency residual hearing, and if they do, they’ve not gained the auditory experience to know what to do with it. Uh, it’s one of those things that we, we don’t get to play with here in the infant world.
Karen Gordon
Yeah, I wonder, um, when you have a family that comes in and the child has a high-frequency hearing loss and really good hearing in the low frequencies, what are their concerns about, you know, the child’s future?
Sharon Cushing
They have a lot of concerns, right? And I think that that is actually one of the clinical scenarios that’s the hardest to counsel around, because their hearing is giving them something more than what we see in our congenitally deaf babies, and so they are so loath, both as patients and as parents, to give up what they know. Um, and I think it speaks again to what Blake was saying: you’ve got these two systems, so it’s a best-of-both-worlds scenario, where the implant hearing’s giving you something and the, the residual hearing is giving you something, but how do you access them both?
Karen Gordon
The discussion about preserving residual hearing, uh, through the cochlear implant surgery is really fascinating, but if it can be done, how do you then use the residual hearing along with the cochlear implant? Rene Gifford talks about exactly that, so take a listen to this.
[Music]
Preserving residual hearing during Cochlear Implant surgery
Rene Gifford
One of the biggest cues we get that’s beneficial with electric and acoustic stimulation, or EAS as we often refer to it, is just fundamental frequency. So, we know that’s just the glottal pulse rate, that’s how fast our vocal cords, or vocal folds, are vibrating, and having the ability to access that information, which is, you know, roughly 100 Hertz for adult males, approximately 200 for adult females, and up to 500 for children. So, we’re talking really low-frequency information. We can provide that to them acoustically and reliably, because, as you know, that information is not well preserved in the cochlear implant signal. Um, having access to that really allows people to segregate the speech stimulus from the background noise, which tends to be more aperiodic.
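[Editor’s note: To make the cue concrete, fundamental frequency is just the repetition rate of the voiced signal. Below is a minimal Python sketch of estimating F0 from a short voiced frame by autocorrelation; the function name, parameters, and the 80–500 Hz search range are illustrative assumptions for these notes, not anything from Dr. Gifford’s lab.]

```python
import numpy as np

def estimate_f0(frame: np.ndarray, fs: int,
                fmin: float = 80.0, fmax: float = 500.0) -> float:
    """Return the strongest periodicity in [fmin, fmax] Hz."""
    frame = frame - frame.mean()                       # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)            # candidate lag range
    lag = lo + int(np.argmax(ac[lo:hi]))               # best-matching lag
    return fs / lag

# Example: a crude 200 Hz periodic source (adult-female range) at 16 kHz
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
print(estimate_f0(np.sign(np.sin(2 * np.pi * 200 * t)), fs))  # ~200.0
```

[A cue this low in frequency sits exactly where preserved acoustic hearing is strongest, which is why EAS can deliver it even though envelope-based implant coding does not preserve it well.]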
Karen Gordon
And what I wanted to talk to you about was this idea that we were going to have this weird presentation of sound: acoustic sound through the hearing aid, and then acoustic sound that’s translated into these electrical pulses by the implant, so that there would be a discordant input provided by the two different modes of hearing, electric with the implant and acoustic with the hearing aid. What do you think of that?
Rene Gifford
I, I remember hearing clinicians saying, you know, “Well, there are two very different signals, and the brain is going to really have a difficult time, you know, integrating that information, and we really want you to focus on this new signal, so we’re going to have you not wear your hearing aid.” It made sense at face value, but the reality is, the brain is remarkably plastic and can take those two very different signals and combine them.
I’ve spent, probably, the majority of my professional career looking at people who are combining hearing aids and cochlear implants. We can still preserve that hearing, that acoustic hearing that’s going to give them a boost, even if it’s in just one ear.
Karen Gordon
Do you think there’s any difference between, you know, trying to combine the electric and acoustic (stimuli) between the ears as opposed to within one ear?
Rene Gifford
Uh, I don’t. There’s no evidence that it’s different. So, we did a study a few years ago where we looked at exactly that, within an ear as well as combining across ears, and the amount of benefit that one received was essentially equivalent. You know, there’s lots of reason to believe that within an ear it would be more beneficial, because you don’t have to, you know, integrate the information across the two ears with the time delays involved.
Karen Gordon
I’m always amazed that we even thought about combining residual and electric hearing.
Rene Gifford
Yeah, so, it was really clunky in the beginning, and many of our patients were fitted with, um, ITE (in-the-ear) hearing aids alongside a behind-the-ear sound processor, or in some cases body-worn devices.
One of the studies we did way early on, during my postdoc, which I-I absolutely loved: we looked at a group, I think it was 12 or 13 individuals, who had hearing preservation in the implanted ear, and we looked at their auditory detection, you know, psychophysical detection, as well as their, um, masked thresholds using these Schroeder-phase maskers.
We were able to do this pre-operatively and post-operatively in these 12 or 13 individuals. And what we found is that for about 80% of the sample, as I recall, their threshold elevation post-operatively was within 10 dB, but everybody showed a dramatic difference in their masked threshold patterns, which tells us that something else is going on at the level of the cochlea that we’re not capturing with the audiogram.
I was actually really kind of discouraged by that finding, but then I started thinking, well, what are they actually gaining from that acoustic hearing when we add it to the electrical stimulus? And again, if you can just detect or have access to fundamental frequency information, even just that cue alone, or even just having that low-frequency information across the two ears providing ITD information, it can be so, so powerful for that listener: um, speech in noise, localization, spatial release from masking, all of the above.
[Music]
Blake Papsin
It is fascinating. I mean, it’s fascinating: when you’re listening to music, does visual perception play a role when you’re watching the orchestra? Does it make a difference? The upshot is that, um, you know, the smartest engineer is not nearly as smart as the dumbest cochlea and auditory system.
Karen Gordon
So, I think it’s a good place here to just sum up what this discussion has been about. We’ve talked about, uh, the importance of the residual hearing that can be used along with the electric hearing, um, provided by a cochlear implant. And Rene has convinced me that it is so important to be able to provide, uh, the fundamental frequencies of voice through the acoustic hearing, and maybe even get access to some binaural cues. In this next part, she’s going to talk, really to audiologists, about how they might be able to do this in practice. So, Sharon has a comment first.
Sharon Cushing
You know, and-and Rene talks about fitting an in-the-ear hearing aid, and I was tired just thinking about the audiologic prowess it would take to do this. And perhaps, again, it speaks to the fact that we’re only now developing, partly through Rene’s work, the tools to do this properly and efficiently, um, in the clinic.
[Music]
How to use combined electric and acoustic cues in practice – from an audiology perspective
Rene Gifford
So, for fitting of EAS devices, um, one of the first things I would encourage everyone to do, for the acoustic component that connects to the cochlear implant sound processor: it is absolutely critical that we verify it with real-ear measures. I’m going to say, I think, this is a non-negotiable.
Second, um, is where we put the cut-off frequencies. For the cochlear implant, letting maybe a little bit more pass through the implant might actually be a good thing.
So, while there’s not a hard and fast rule, one thing we have found, in probably 95% of people, is that if you can ensure that the first formant is coming through the passband of the electric stimulus, through the cochlear implant, and have a little bit of overlap, this does tend to yield the highest level of performance. Of course, I think we still have a lot of work to do, because these studies are, you know, based on 20-ish people here and there, but I think we’ve probably been over-relying on the acoustic information and under-relying on the electric in these EAS patients.
We, you know, we’re seeing preservation rates that we would have thought unheard of even a decade ago. So, we might have people who, let’s say, go into surgery with 70–80 dB HL thresholds, even in the high frequencies, right, so it’s technically aidable, and we could see that that’s completely preserved. And so, technically, the acoustic component would allow you to provide amplification there, but do they really need it?
So, I always tell my students the main goal is fundamental frequency and ITD information, those low-frequency interaural time differences. We know that ITDs are really most robust below about 1000 Hertz, so, even if we have the ability to amplify beyond 1000 Hertz, I do not. We want to provide as much electric as possible, maintain the acoustic, maybe provide a little overlap, and, of course, verify that acoustic amplification.
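[Editor’s note: Pulling those rules of thumb together, here is a hedged Python sketch of what such an EAS fitting plan might look like. The function, the 80 dB HL “aidable” cut-off, and the assumed 300–900 Hz first-formant band are illustrative assumptions for these notes, not a published fitting formula.]

```python
F1_BAND_HZ = (300.0, 900.0)   # assumed first-formant region for speech

def plan_eas_fitting(thresholds_db_hl: dict[int, float]) -> dict:
    """thresholds_db_hl: audiogram for the implanted ear, e.g. {250: 30, ...}."""
    # Acoustic component: amplify the aidable low frequencies, capped at 1 kHz,
    # since ITD cues are most robust below about 1000 Hz.
    acoustic_freqs = sorted(f for f, thr in thresholds_db_hl.items()
                            if f <= 1000 and thr <= 80)
    # Electric component: set the low cut-off at or below the F1 region so the
    # first formant is carried by the implant; this overlaps the acoustic band.
    electric_low_cutoff = min(F1_BAND_HZ[0],
                              acoustic_freqs[-1] if acoustic_freqs else F1_BAND_HZ[0])
    return {
        "acoustic_amplified_freqs_hz": acoustic_freqs,
        "electric_low_cutoff_hz": electric_low_cutoff,
        "verify_with_real_ear_measures": True,   # the non-negotiable step
    }

print(plan_eas_fitting({250: 30, 500: 45, 750: 60, 1000: 70, 2000: 95, 4000: 100}))
```

[The design choice mirrors the priorities above: keep the binaural ITD cue acoustic, carry everything else electrically, and allow a little acoustic–electric overlap.]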
Karen Gordon
I think this reminds us a little bit about the limitations of some of the diagnostic work that we do. Maybe we could think about using some of these other, more nuanced tests of hearing, um, of frequency resolution and so forth, to make decisions around whether to intervene and with what device. What do you want to tell clinicians about that?
Rene Gifford
I think you, you hit the nail on the head. Um, you know, the audiogram is a great tool, but it’s, it’s one tool, and it’s really hard to justify putting so much stock in it when it really is just tonal detection. And no one ever really comes in saying, “I’m really having a hard time detecting sounds,” you know; no, they’re like, “I can’t understand people,” especially in noise. And so, the audiogram just isn’t necessarily very useful for that.
I remember explicitly this woman coming in, and she was probably actually younger than I am today, but I remember thinking she was, you know, an older, middle-aged woman, and she was complaining that she was really struggling to understand people in noise. And so, I did, you know, an audiogram, speech audiometry, of course in quiet, um, and it was normal, and I came in, and I was so proud of that. I was like, “Congratulations! This is good news! You know, you’ve got normal hearing!” And I remember her just being like, “But I’m really struggling.”
There likely was something there we could have been thinking about, right? I mean, we know as we age we start to struggle more, and just because someone has “normal hearing,” they could have had, you know, a 20–30 dB threshold elevation and still be within the range of normal, because they started at minus 10. And so, I, I replay that visit in my head. It happened, you know, 20 years ago, and I’m sure the woman doesn’t remember it at all, but I do. Um, and I think it’s a way in which we can start to rethink the way we approach diagnostic audiology, counseling, the use of, uh, you know, over-the-counter types of hearing devices, as that’s opening up, um, there’s so many different options.
[Music]
Karen Gordon
Rene makes a really good point about the need to test for and identify dead regions, right, where you think you have some hearing, um, because there are thresholds there, but really, those thresholds don’t reflect real, usable sensory cells. In that part of the cochlea, they’re gone; it’s another part of the cochlea that’s responding.
So let’s hear a little bit more about what Rene thinks about these dead regions in the cochlea.
Rene Gifford
And so, these are regions of the cochlea where there are few or no surviving inner hair cells. There is no point in amplifying that acoustic region, because it’s not going to get transmitted to the auditory nerve and up to the auditory cortex. We see that there are these dead regions, but, you know, because of off-frequency listening you can still get a pretty decent audiometric threshold. So the goal is making sure that we’re really just transmitting information to the areas of the cochlea that are primed to receive it.
So, I mean, if it were up to me, we’d be doing dead-region testing in the clinic on every single person. Um, and it really is less than a 5-minute test. Basically, you’re providing a masking noise that is forcing the listener to focus on the frequency region of the target. So, you’re just masking, you know, slightly above and below, so it’s sort of like a notched-noise area, and you’re really focusing there, because, often we know, especially with these high-frequency losses, if you increase the level of the incoming test signal high enough, it’s not just upward spread of masking; there’s some downward spread as well.
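[Editor’s note: For readers who want the logic spelled out, below is a hedged Python sketch of the decision rule behind a masked-threshold, TEN-style dead-region screen. The 10 dB criteria follow the commonly cited TEN(HL) rule of thumb; treat the whole function as an illustrative assumption, not a clinical protocol.]

```python
def suspect_dead_region(absolute_threshold_db: float,
                        masker_level_db: float,
                        masked_threshold_db: float) -> bool:
    """True if the tone was likely detected at the wrong cochlear place.

    A dead region is suspected when the masked threshold is well above both
    the masker level and the unmasked (absolute) threshold, suggesting the
    listener relied on off-frequency listening to detect the target.
    """
    return (masked_threshold_db >= masker_level_db + 10
            and masked_threshold_db >= absolute_threshold_db + 10)

# Example: 70 dB threshold at 4 kHz, masker at 70 dB, masked threshold 85 dB
print(suspect_dead_region(70, 70, 85))  # True -> flag a possible dead region
```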
[Music]
Karen Gordon
So, to this point, I think we’ve talked about such interesting things, the last being clinical, um, ideas of how to look for problems in the cochlea and how to fit devices like hearing aids and cochlear implants, often together in the same person who has hearing loss, in order to help them hear in difficult listening situations. In the next part, we turn our attention to the placement of the cochlear implant in the inner ear, or cochlea, and how important that might be for users to hear clearly.
[Music]
Karen Gordon
How much of an effect, do you think, does where you put the cochlear implant have on children’s outcomes?
Blake Papsin
Very little. I mean, there are people who will spend hours talking about peri- versus anti-modiolar electrodes. I, I don’t give myself that much credit, to think I have the ability to affect outcome by surgical placement. Now, careful surgical placement, yes. Um, soft surgical technique, yes. Minimization of complications, absolutely. Preserving the potential for re-implantation, 100%. Uh, care that the electrodes don’t extrude and that there’s closure of the middle ear space, absolutely, 100%. Uh, fixation of the device, absolutely. Got ’em all. But the electrode itself, no, sorry, I don’t buy it. You?
Sharon Cushing
I think it depends on what outcome you’re measuring.
Karen Gordon
Yeah, I agree with that. You’re underselling the importance of what you do when you put that cochlear implant in, in the operating room, because we, we do know that, you know, there is going to be a different kind of stimulation when that electrode array is close to the middle of the cochlea, or the modiolus, than when it’s further away.
Blake Papsin
I, look, I completely agree with you that we are in the process now, with intracochlear electrophysiology, and real-time imaging, and thinner, more discrete, modiolar-hugging electrodes. I, I think that we’re at the point where we’re very close to robotic insertion with real-time feedback. Uh, we’re going to get better, and absolutely that will allow discrete stimulation, and then we’re off to the races.
[Music]
Doing clinical research as an audiologist
Rene Gifford
We know that every single patient, even if they’re, like, the very best performer, is going to struggle with speech in noise. One of the issues is, of course, channel interaction, or this spread of electrical excitation, right? We have these, um, electrodes that are very close together, they’re in this fluid-filled medium, this highly conductive fluid, and so you present an electrical signal, and it spreads. And so, we’re not getting that fine frequency resolution that we get with normal hearing, or even, you know, with a lesser, mild-to-moderate hearing loss. If we can actually see where these electrodes reside in each scala, and what’s the sort of distance between, you know, the center of the contact and the nearest, uh, modiolar surface, where those, you know, spiral ganglion cells, the primary auditory neurons, are located, could we potentially start to come up with these algorithms that selectively deactivate electrodes, to try to help channel independence or make, you know, each channel that is provided a little bit more spatially selective?
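[Editor’s note: To make the idea concrete, here is a toy Python sketch of that kind of selection rule. The distances, the 0.8 mm cut-off, the cap of five deactivations, and the no-adjacent-neighbours rule are all illustrative assumptions, not the published image-guided algorithm from the Vanderbilt group.]

```python
def select_deactivations(distances_mm: list[float],
                         far_mm: float = 0.8, max_off: int = 5) -> list[int]:
    """Pick contacts to deactivate: the farthest from the modiolar wall
    (the likeliest sources of channel interaction), never two neighbours
    in a row, so the remaining channels keep covering the frequency range."""
    order = sorted(range(len(distances_mm)), key=lambda i: -distances_mm[i])
    off: set[int] = set()
    for i in order:                       # walk contacts farthest-first
        if len(off) == max_off or distances_mm[i] <= far_mm:
            break                         # cap reached, or rest are close enough
        if (i - 1) not in off and (i + 1) not in off:
            off.add(i)                    # keep both neighbours active
    return sorted(off)

distances = [0.4, 0.5, 1.2, 1.1, 0.6, 1.4, 0.5, 0.9, 1.3, 0.4, 0.7, 0.5]
print(select_deactivations(distances))  # [2, 5, 8] with these toy values
```

[Skipping adjacent contacts is one simple way to ensure the surviving channels still span the whole frequency map after re-allocation.]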
Our very first person was, you know, a great inspiration, because she came in and there were two electrodes that were recommended for deactivation. So, she was a bilateral implant recipient, and in one ear, um, she was getting 90% plus on everything. And so, we just turned off those two electrodes, and I said, you know, now we’re going to see you back in a month and we’ll repeat the testing. She sent me an email in the morning, and she said, “I don’t know what you did, but I was able to talk to my mother on the phone with my poor ear, and I swear I heard as well as I did with my other ear.” She went from like 42-ish percent CNC (a speech perception test) to like 84, overnight! Now, it’s not a panacea, and it doesn’t work that way for every single person. I sort of feel like that was the science gods just making things a little bit easier on that first one, so we would be encouraged to keep going. Um, but it, it has been really a great, um, research focus.
We are doing a study on this in children. Um, actually, we had 40-some children who we saw in the first phase, and now we’re in a clinical trial where we’ve enrolled about 20. So far, we don’t really see a decrement for any of the children; we either see, sort of, you know, equivalent performance or, uh, significant improvements in various tasks of auditory processing, spectral-temporal processing, and speech understanding. So, it’s really exciting, I just absolutely love it.
Karen Gordon
Well, I think you give such a nice example of how clinical research happens; there’s an idea, and then, you know, we try it, and then we go into a full-fledged study and protocol, and get a sense of the data across so many different individuals.
[Music]
Sharon Cushing
When we break it down, audiologists are the interface with these families. And so, you know, anything that we move forward from a research perspective is ultimately going to be delivered by audiologists in terms of programming.
Blake Papsin
I think audiologists are human! You’re right, they’re the interface between the engineering and the humanity. And so, the research that they direct is not nearly as much, um, pocket-protector, white-coat as, uh, other, other sorts of labs. Many of those labs, of course, produce stuff that’s very important to us, but audiologically driven research is, is pretty much the focus.
I mean, I, I, that’s why we’ve built our entire program around it, not on, you know, basic science or molecular engineering, uh, or engineering; it’s, it’s an audiologically based thing, ’cause this is humans looking after humans. I’m the first one to ask: is this applicable? Is this our question? Is this about humans, or is this about, uh, a science publication? ’Cause I’m not interested in that.
Karen Gordon
Well, I think it’s also true that once you have that training, um, to think about research and to incorporate research as a clinician and as an audiologist, then you’re going to be able to face somebody and help in a different way, hopefully a more nuanced way. I think that comes across very clearly with Rene, because she-she really is in the clinic and in the lab, um, and melds the two together so well.
[Music]
So that brings us to the end of our chat with Dr. Rene Gifford. And, Rene, I want to thank you so much for all of the insights that you’ve brought to this podcast: talking about the impact of hearing loss and the use of acoustic with electric hearing through cochlear implants, how audiologists might approach using the two together and what they can do for people coming into the clinic, and thinking about the advances that we can make in research and translating them back. So thank you so much for sharing all of your experience, all the work you’ve done, and the innovative thinking you’ve provided.
Rene Gifford
Thank you, Karen! This was a lot of fun and I just love that you’re doing this podcast! We could probably talk about this all day… It’s just so exciting and I just hope to motivate people to get more excited about this and maybe even, uh, motivate someone to pursue a career in auditory research.
[Music]
Karen Gordon
This concludes this episode of the Hear Here podcast. I hope you really enjoyed our discussion here with Dr. Rene Gifford and hope to see you back, enjoying another episode of our podcast soon.
[Music]
You can catch other episodes of the Hear Here podcast through the link on our website (search Archie’s Cochlear Implant Lab, SickKids Research Institute) or wherever you get your podcasts.
The Hear Here podcast is put together by me, Dr. Karen Gordon, with my colleagues at The Hospital for Sick Children in Toronto, Canada, Drs. Blake Papsin and Sharon Cushing, and with a tremendous production and advisory team: Sofia Olaizola, Rachel Bedder, and Maria Khan.
[Music]
The wonderful music was composed and performed by Dr. Blake Papsin.
[Music]