RELATIONSHIP BETWEEN HEARING OUTCOMES AND THE POSITION OF THE ELECTRODE ARRAY

TRANSDUCTION OF THE SIGNAL: FROM THE SOUNDWAVE TO THE ELECTRIC STIMULUS

The detection of the sound stimulus and its conversion into an equivalent electrical signal, termed mechanoelectrical transduction, occur in the hair cells of the organ of Corti. The final mechanical event in the cochlear transduction process is the bending of the stereocilia: deformation of the basilar membrane causes a shearing action between the reticular and tectorial membranes, and the sound-induced motion of the basilar membrane excites the hair cells by deflecting their hair bundles, which activates the mechanoelectrical transduction ion channels (Hudspeth, 1989). Because the long OHC cilia are attached to both membranes, they are bent directly. In contrast, the IHC cilia, and possibly also the shorter OHC cilia, which are not attached to the tectorial membrane, bend in response to some mechanism other than displacement shear; one proposition is that this process involves fluid streaming between the sliding parallel plates formed by the reticular and tectorial membranes.

TONOTOPIC REPRESENTATION IN THE AUDITORY SYSTEM

Hair cells in different regions of the cochlea are maximally stimulated by different frequencies. This results in a spatial representation of sound frequency across the basilar membrane, where the hair cells are located. The fundamental research by von Békésy in 1970 brought experimental proof that the cochlea performs a spectral analysis of sounds; he demonstrated that a tone of a given frequency causes the highest vibration amplitude at a specific point along the basilar membrane. Each point along the basilar membrane is therefore tuned to a certain frequency, and a frequency scale can be identified along the cochlea, with high frequencies represented at the base and low frequencies at the apex (Figure 2.1). As a consequence, each hair cell produces responses that, near the threshold of hearing, are tuned to a characteristic frequency. Von Békésy convincingly demonstrated that sounds set up a traveling wave along the basilar membrane and that this traveling wave motion is the basis for frequency selectivity. He concluded that the motion of the basilar membrane takes the form of a traveling wave because the stiffness of the basilar membrane decreases from the base of the cochlea to its apex. Rhode (1980) subsequently showed that the frequency selectivity of the basilar membrane deteriorates after death, suggesting that metabolic energy is necessary to maintain its high degree of frequency selectivity. Moreover, frequency selectivity decreases when the intensity of the test sounds is raised above threshold. The reason that the frequency selectivity of single auditory nerve fibers is intensity dependent is the non-linearity of the vibration of the basilar membrane: the outer hair cells are active elements that make the tuning of the basilar membrane non-linear and sharpen its tuning.
The resulting representation is a topographic map of sound frequency, also called a tonotopic representation or tonotopic map, the fundamental principle of organization in the auditory system (Figure 2.1).
However, von Békésy's studies were performed with pure tones presented against a quiet background, and the frequency threshold tuning curves of single nerve cells that have been used to establish the tonotopic organization may not reflect the function of the auditory system under normal conditions, because such tuning curves are obtained by determining the threshold to pure tones at very low sound intensities.
Two hypotheses have been presented to explain the physiologic basis for the discrimination of frequency or pitch. The first, the place principle, based on the studies of von Békésy, holds that frequency discrimination relies on the frequency selectivity of the basilar membrane, so that each frequency is represented by a specific place in the cochlea and, subsequently, throughout the auditory nervous system. High-frequency coding is believed to be dominated by this mechanism. The second, the temporal principle, holds that frequency discrimination is based on coding of the waveform (temporal pattern) of sounds in the discharge pattern of auditory neurons, known as phase locking. This strategy appears to be more dominant in conveying pitch for low-frequency signals: the time-based mechanism locks onto the temporal fine structure of the signal and conveys pitch by keeping the firing of auditory nerve fibers synchronized to the frequency of the signal. There is considerable experimental evidence that both the spectrum and the time pattern of a sound are coded in the responses of neurons of the classical ascending auditory pathway, including the auditory cerebral cortices. While there is ample evidence that both representations of frequency are coded in the auditory nervous system, it is not known which of the two principles the auditory system uses in the discrimination of natural or unnatural sounds. The place and the temporal principles may well be used in parallel for the discrimination of sounds of different kinds.
The fact that studies indicate that temporal information plays a greater role than place coding in the discrimination of complex sounds such as speech does not mean that spectral analysis (the place principle) cannot provide the basis for speech intelligibility. This observation underlines that the auditory system possesses considerable redundancy with regard to frequency discrimination as a basis for speech discrimination.
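To make the tonotopic frequency-place map described above concrete, the short Python sketch below evaluates the Greenwood frequency-position function, a standard empirical fit for the human cochlea that is not itself cited in this text; the constants and the function name greenwood_frequency are illustrative assumptions rather than values taken from this thesis.

```python
import numpy as np

def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the human
    basilar membrane, where x = 0 is the apex and x = 1 the base.

    Uses the commonly quoted human fit F = A * (10**(a*x) - k) with
    A = 165.4, a = 2.1, k = 0.88 (an assumption, not a value from the text).
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * x) - k)

if __name__ == "__main__":
    # Low frequencies map to the apex, high frequencies to the base,
    # as described for the tonotopic map (Figure 2.1).
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f}  ->  {greenwood_frequency(x):8.1f} Hz")
```

The printed values run from roughly 20 Hz at the apex (x = 0) to roughly 20 kHz at the base (x = 1), matching the base-to-apex frequency gradient described above.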

BINAURAL HEARING PROCESSING OF SIGNAL

Hearing with two ears is the basis for directional hearing, the ability to determine the direction to a sound source in the horizontal plane. Binaural hearing provides several benefits over monaural hearing, especially under challenging listening conditions: discrimination of sounds in a noisy background is better with two ears than with one, and the "unmasking" provided by hearing with two ears exploits both time (phase) differences and intensity differences between the sounds at the two ears. Studies have shown that the advantage of hearing with two ears is greater if the masking noise in the two ears differs, for example when it is shifted in phase by 180°. Two basic effects underlying the advantages of binaural hearing are the binaural squelch effect and the head shadow effect. The binaural squelch effect refers to the capacity of the central auditory system to combine the stimuli received from each ear and to reproduce them with a higher signal-to-noise ratio (SNR) by comparing interaural time and intensity differences. The head shadow effect, on the other hand, results from the physical presence of the head, which acts as an acoustic barrier and increases the SNR in the ear farther from the noise when signal and noise are spatially separated (Moore 1991). Research in normal-hearing subjects indicated an improvement of about 3 dB in the binaural speech recognition threshold due to squelch, and an average SNR increase of 3 dB for the head shadow effect; the latter is most pronounced at high frequencies and can produce as much as 8 to 10 dB of improvement.
If hearing is impaired more in one ear than in the other, the advantages of hearing with two ears diminish. People often become aware of an asymmetric hearing loss because they have difficulties understanding speech in situations where many other people are talking.
In normal-hearing people, sound localization in the horizontal plane depends primarily on acoustic cues arising from differences in the arrival time and level of the stimuli at the two ears. Localization of unmodulated signals up to approximately 1500 Hz is known to depend on the interaural time difference arising from disparities in the fine structure of the waveform, whereas the prominent cue for localization of high-frequency signals is the interaural level difference (Blauert, 1982). However, it has also been well established that, for higher-frequency signals, interaural time difference information can be transmitted by imposing a slow modulation, or envelope, on the carrier (Bernstein, 2001). The use of modulated signals with high-frequency carriers is particularly relevant to stimulus coding by CI processors that utilize envelope cues and relatively high stimulation rates (Skinner et al., 1994; Vandali et al., 2000; Wilson and Dorman, 2007). Both the difference in arrival time and the difference in the intensity of the sound at the two ears are determined by the physical shape (acoustic properties) of the head and the outer ears, together with the direction to the sound source. The sound arrives at the two ears at the same time when the head faces the sound source (azimuth = 0°) or directly away from it (azimuth = 180°); at any other azimuth, sounds reach the two ears with a time difference, and the sound intensity at the entrance of the two ear canals differs. The difference in sound intensity has a more complex relationship to azimuth than the interaural time difference.
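As a rough numerical illustration of the interaural time difference cue discussed above, the following sketch uses Woodworth's spherical-head approximation; the head radius, the speed of sound, and the function name itd_woodworth are assumptions made for illustration and do not come from the cited studies.

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (s) for a distant source at the
    given azimuth (0 deg = straight ahead), using Woodworth's spherical head
    model: ITD ~= (r / c) * (theta + sin(theta)), valid for 0-90 deg.

    head_radius_m and c are illustrative defaults (average adult head,
    speed of sound in air), not values taken from the text.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

if __name__ == "__main__":
    # ITD is zero at 0 deg azimuth and grows as the source moves to the side.
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:3d} deg -> ITD = {itd_woodworth(az) * 1e6:6.1f} us")
```

At 90° azimuth the model gives an ITD of roughly 0.65 ms, the order of magnitude of the largest interaural delays produced by an adult head.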

SENSORINEURAL DEAFNESS

Deafness may occur before or during birth (prenatal and perinatal, respectively), in which case it is referred to as congenital, or after birth (postnatal). Congenital deafness may arise from genetic causes, chromosomal abnormalities, or diseases affecting the mother during pregnancy. Postnatal deafness mostly results from disease or injury, but may also be the result of delayed genetic effects. In adults the most common cause of hearing loss is presbycusis. Other causes include noise exposure or occupational hearing loss, chronic otitis media, exposure to ototoxic agents, sudden sensorineural hearing loss, Menière disease, and autoimmune inner ear disease.

Genetic hearing loss

Nonsyndromic

Genetic deafness frequently occurs alone, without other abnormalities. In about 80% of children with nonsyndromic deafness, the inheritance is autosomal recessive (Dahl et al, 2001). Using DNA markers, genetic linkage studies have identified over 20 genes for nonsyndromic deafness. A mutation of the connexin 26 gene has been found to account for up to 50% of cases of nonsyndromic deafness in children of European descent (Denoyelle et al 1997). Connexin 26 belongs to a family of proteins that mediate the exchange of molecules between adjacent cells. It is highly expressed in the cells lining the cochlear duct and the stria vascularis and is thought to be important for the recycling of K+ ions from the sensory hair cells into the endolymph during the transduction of sound into electrical signals.
Cochlear malformations have been classified as Michel deformity, common cavity deformity, cochlear aplasia, cochlear hypoplasia, and incomplete partition types I (IP-I) and II (IP-II, the Mondini deformity) (Sennaroglu, 2002). Incomplete partition type I (cystic cochleovestibular malformation) is defined as a malformation in which the cochlea lacks the entire modiolus and cribriform area, resulting in a cystic appearance, accompanied by a large cystic vestibule. IP-I and cochlear hypoplasia may result from a defective vascular supply from the blood vessels of the IAC (Sennaroglu 2016). In IP-II the cochlea consists of 1.5 turns (the middle and apical turns coalesce to form a cystic apex), accompanied by a dilated vestibule and an enlarged vestibular aqueduct. Good results after cochlear implantation have been reported for patients with connexin 26 gene mutations (Rayess et al. 2015), with IP-I (Berrettini, 2013), and with the classic Mondini deformity (Manzoor, 2016).

PROCESSING OF THE SIGNAL IN COCHLEAR IMPLANTS

Cochlear implants (CIs) bypass the frequency selectivity of the basilar membrane and replace it with a much coarser spectral resolution than the cochlea normally provides. The success of cochlear implants in providing useful hearing may appear surprising, because even multichannel implants cannot replicate the spectral analysis that occurs in the cochlea, and the majority of speech-processing strategies do not include temporal coding of sounds. Spectral information is coarsely coded through a multi-channel representation that follows the auditory system's natural tonotopic organization; that is, acoustic spectral information is represented from low to high frequency in a corresponding spatial progression within the cochlea.
The fact that CIs provide good speech comprehension without the use of any temporal fine structure information calls the importance of the temporal code of frequency into question, and confirms that the auditory system can adequately discriminate speech sounds on the basis of the power in a few frequency bands. Three main reasons why cochlear implants are successful in providing speech intelligibility may be identified:
1 The redundancy of the natural speech signal.
2 The redundancy of the processing capabilities of the ear and the auditory nervous system.
3 The large ability of the central nervous system to adapt through expression of neural plasticity.

Individuals with normal hearing can understand speech on the basis of temporal information alone or of spectral (place) information alone. This means that frequency discrimination can rely on either the place or the temporal hypothesis. The finding that good speech comprehension can be achieved on the basis of only the spectral distribution of sounds seems to contradict the results of animal studies of coding of the frequency of sounds in the auditory nerve (Sachs and Young, 1976; Young and Sachs, 1979). Such studies have shown that temporal coding of sounds in the auditory system is more robust than spectral coding, and on these grounds it was concluded that temporal coding is important for frequency discrimination (see Chapter 2). These and other studies have provided evidence that the place principle of coding of frequency is not preserved over a large range of sound intensities and is not robust (Moller, 1977). On the basis of these findings it was concluded that the place principle is of less importance for frequency discrimination than temporal information. It was long assumed that frequency discrimination according to the place principle would require many narrow filters covering the audible frequency range, but studies in connection with the development of channel vocoders, and more recently with cochlear implants, showed clearly that speech comprehension can be achieved with far fewer and much broader filters, as illustrated by the sketch below. One of the strongest arguments against the place coding hypothesis has been the non-linearity of basilar membrane frequency tuning: the frequency to which a given point on the basilar membrane is tuned shifts at high stimulation intensities (see Chapter 2). This lack of robustness of cochlear spectral analysis has been regarded as an obstacle to the place hypothesis of frequency discrimination. Since the band-pass filters in cochlear implants do not change with sound intensity, whereas the spectral acuity of the cochlea does, cochlear implants may actually have an advantage over the cochlea as a "place" frequency analyzer.
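The point that intelligible speech can survive analysis with only a few broad filters can be illustrated with a simple channel (noise) vocoder of the kind used in acoustic simulations of CI hearing. The Python sketch below is a minimal example under assumed parameters (band edges, filter orders, envelope cutoff); it is not the processing of any particular implant.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocoder(x, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Channel-vocoder sketch: split x into a few broad bands, keep only each
    band's envelope, and use the envelopes to modulate band-limited noise.
    All parameters are illustrative assumptions, not taken from the text.
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # logarithmic band edges
    out = np.zeros_like(x)
    rng = np.random.default_rng(0)
    env_b, env_a = butter(2, 50.0, btype="lowpass", fs=fs)  # envelope smoother
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, x)
        envelope = filtfilt(env_b, env_a, np.abs(band))          # rectify + low-pass
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))    # band-limited noise
        out += envelope * carrier
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    # A modulated tone stands in for speech in this self-contained example.
    test = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
    print(noise_vocoder(test, fs).shape)
```

Listening experiments with speech processed through such vocoders, typically with four to eight bands, are a common acoustic approximation of the coarse spectral representation delivered by a CI.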

CODING STRATEGIES OF THE SIGNAL

Since the introduction of the first stimulation strategies in multi-channel CIs over 30 years ago, a number of diverse sound-processing strategies have been developed and evaluated. These strategies focus on better spectral representation, better distribution of stimulation across channels, and better temporal representation of the input signal. Three main families of processing strategies have been used to date. In the n-of-m approach (SPEAK, ACE, APS), the speech signal is filtered into m bandpass channels and the n channels with the highest envelope signals are selected for each cycle of stimulation. The CIS strategy filters the speech signal into a fixed number of bands, extracts the envelope in each band, and then compresses the signal for each channel; on each cycle of stimulation, a series of interleaved digital pulses rapidly stimulates consecutive electrodes in the array. The CIS strategy is designed to preserve fine temporal details in the speech signal by using high-rate pulsatile stimuli. Simultaneous Analog Stimulation (SAS) filters and then compresses the incoming speech signal for simultaneous presentation to the corresponding enhanced bipolar electrodes; the relative amplitudes of the information in each channel and the temporal details of the waveforms in each channel convey speech information. All the coding strategies that have been in use in CIs during the last 15–20 years rely mainly on envelope information (Muller et al 2012). In general, users of these coding strategies show good to very good speech perception in quiet, moderate speech perception in noise, and poor to moderate music appreciation. The four strategies most commonly used at present are ACE (Advanced Combination Encoder), with channel selection based on spectral features; MP3000, with channel selection and stimulation based on spectral masking; FSP (Fine Structure Processing), based on enhancement of temporal features; and HiRes120 (High Resolution), with temporal feature enhancement and current steering to improve the spatial precision of stimulus delivery (Fig. 3.2).
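As a schematic illustration of the n-of-m channel selection described above, the following sketch picks the n largest of m band envelopes for one stimulation cycle; the function name and the example values are assumptions made for illustration, not vendor code.

```python
import numpy as np

def select_n_of_m(channel_envelopes, n):
    """n-of-m channel selection as used in SPEAK/ACE-style strategies:
    from the m band-envelope values of one analysis cycle, keep the n
    largest and zero out the rest (schematic sketch only).
    """
    envelopes = np.asarray(channel_envelopes, dtype=float)
    keep = np.argsort(envelopes)[-n:]        # indices of the n largest maxima
    stimulated = np.zeros_like(envelopes)
    stimulated[keep] = envelopes[keep]       # only these channels are pulsed
    return stimulated

if __name__ == "__main__":
    # m = 8 analysis bands, n = 4 maxima selected for this stimulation cycle.
    cycle = [0.1, 0.7, 0.05, 0.4, 0.9, 0.2, 0.6, 0.03]
    print(select_n_of_m(cycle, n=4))
```

In a real processor this selection is repeated on every analysis cycle, and the retained envelope values are then mapped to pulse amplitudes on the corresponding electrodes.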
Pitch perception with CIs is extremely poor. This is due both to limitations at the interface between the electrodes and the neural tissue (spread of excitation) and to imprecise coding of temporal cues. The wide spread of excitation in the cochlea and the small number of channels available to code the low frequencies reduce the spectral resolution and therefore the precision of spectral pitch. Another limitation of electrical stimulation is the inability of CI users to perceive the temporal fine structure. The only remaining mechanism is therefore periodicity pitch, which is much weaker than temporal fine structure pitch and is limited by the maximum frequency at which pitch changes can be perceived, around 300 Hz. Furthermore, temporal envelope fluctuations are not always accurately coded by current sound-processing strategies. As a result, the transmission of tonal speech information, such as prosodic contour, speaker gender, and tonal languages (e.g. Mandarin Chinese), as well as music perception and appreciation, is poor in CI users compared with normal-hearing listeners. To improve the transmission of fine structure information, the fine structure processing (FSP) coding strategy was developed by MED-EL (Innsbruck, Austria). FSP is intended to better enable users to perceive pitch variations and the timing details of sound. The aim of this coding strategy is to represent the temporal fine structure present in the lowest frequencies of the input sound signal by delivering bursts of stimulus pulses on one or several of the corresponding CI electrodes.
These bursts contain information about the temporal fine structure in the lower frequency bands that is not available in the envelope of those signals, potentially leading to improved perception for CI users. FSP, FS4, and FS4-p are the three temporal fine structure coding strategies currently available (Riss et al. 2014). Results indicate that FSP performs better than CIS+ in vowel and monosyllabic word understanding, and subjective evaluation has demonstrated strong user preferences for FSP when listening to speech and music (Muller et al 2012). Other authors have reported no difference in speech perception between FSP and CIS at an extended frequency spectrum; the extended low-frequency spectrum might explain the benefit of FSP observed in other studies (Riss et al 2014).
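The distinction between the temporal envelope and the temporal fine structure that FSP tries to convey can be illustrated with a Hilbert-transform decomposition. The sketch below is an illustrative decomposition under assumed parameters and is not MED-EL's actual FSP implementation.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_fine_structure(x):
    """Split a band-limited signal into its Hilbert envelope and its temporal
    fine structure. Conventional CI strategies transmit mainly the envelope;
    FSP-like strategies additionally time stimulation pulses to the fine
    structure in the low-frequency bands (illustrative decomposition only).
    """
    analytic = hilbert(x)
    envelope = np.abs(analytic)                  # slow amplitude contour
    fine_structure = np.cos(np.angle(analytic))  # rapid carrier oscillation
    return envelope, fine_structure

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    # 300 Hz carrier with a 4 Hz amplitude modulation as a test signal.
    x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 300 * t)
    env, tfs = envelope_and_fine_structure(x)
    print(env[:5], tfs[:5])
```

For this test signal the envelope recovers the 4 Hz amplitude modulation, while the fine structure retains the 300 Hz oscillation; envelope-based strategies transmit only the former.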

Table of contents:

INTRODUCTION
1 ANATOMY
1.1 EXTERNAL AND MIDDLE EAR
1.2 THE INNER EAR: COCHLEA AND POSTERIOR LABYRINTH
1.3 THE COCHLEA
1.4 IMPLICATION OF COCHLEAR ANATOMY ON COCHLEAR IMPLANTATION
1.5 BLOOD SUPPLY TO INNER EAR
2 PHYSIOLOGY AND PATHOPHYSIOLOGY OF HEARING
2.1 TRANSDUCTION OF THE SIGNAL: FROM THE SOUNDWAVE TO THE ELECTRIC STIMULUS
2.2 TONOTOPIC REPRESENTATION IN THE AUDITORY SYSTEM
2.3 BINAURAL HEARING PROCESSING OF SIGNAL
2.4 SENSORINEURAL DEAFNESS
2.4.1 Genetic hearing loss
2.4.2 Presbycusis
2.4.3 Ototoxic agents
2.4.4 Menière disease
2.4.5 Noise exposure
3 THE COCHLEAR IMPLANTS
3.1 PROCESSING OF THE SIGNAL IN COCHLEAR IMPLANTS
3.2 CODING STRATEGIES OF THE SIGNAL
3.3 INDICATIONS FOR COCHLEAR IMPLANTATIONS
3.4 HEARING PRESERVATION IN COCHLEAR IMPLANTS
3.5 VARIABILITY OF HEARING RESULTS IN COCHLEAR IMPLANTED PATIENTS
4 CONE BEAM CT FOR IDENTIFICATION OF SCALAR POSITIONING OF THE ELECTRODES
4.1 CONE BEAM COMPUTED TOMOGRAPHY (CBCT)
4.2 CONE BEAM CT SCAN FOR ELECTRODE POSITION ASSESSMENT IN TEMPORAL BONE AND IN IMPLANTED PATIENTS
4.2.1 Introduction
4.2.2 Materials and methods
4.2.3 Results
4.2.4 Discussion
5 RELATIONSHIP BETWEEN HEARING OUTCOMES AND THE POSITION OF THE ELECTRODE ARRAY
5.1 DEPTH OF INSERTION AND PROXIMITY TO THE MODIOLUS: SHORT AND LONG TERM INFLUENCE IN BILATERAL COCHLEAR IMPLANTED PATIENTS
5.1.1 Introduction
5.1.2 Materials and methods
5.1.3 Results
5.1.4 Discussion
5.2 INFLUENCE OF SCALAR TRANSLOCATION ON THE AUDITORY PERFORMANCE
5.2.1 Introduction
5.2.2 Material and methods
5.2.3 Results
5.2.4 Discussion and conclusion
5.3 DISCUSSION
6 INSERTION FORCES AND ARRAY TRANSLOCATION: A TEMPORAL BONE STUDY
6.1 MEASURING THE INSERTION FORCES IN COCHLEAR IMPLANTATION
6.2 IS THE INSERTION FORCE RELATED TO TRAUMATISM?
6.2.1 Introduction
6.2.2 Material and methods
6.2.3 Results
6.2.4 Discussion
6.3 DISCUSSION
7 Discussion and perspectives
References
