Index of content:
Volume 113, Issue 4, April 2003
- PSYCHOLOGICAL ACOUSTICS 
113 (2003); http://dx.doi.org/10.1121/1.1555613
It was hypothesized that channel interaction in cochlear implant listeners, as measured in a modulation-masking experiment, would be influenced both by the tonotopic overlap between masker and signal and by an interaction between their envelopes. Two experiments were conducted to measure the effects of maskers with noisy and steady-state envelopes on modulation detection by four adult Nucleus-22 cochlear implant listeners, as a function of tonotopic distance between the masker and the signal. In the first experiment, we measured detection thresholds for a 50-Hz modulation in the envelope of a 500-Hz carrier pulse train, in the presence of a masker stimulating regions basal and apical to, as well as overlapping with, the signal. The maskers had two kinds of envelopes: (i) amplitude-modulated by flat-spectrum noise (NAM) and (ii) steady-state at a level corresponding to the maximum of the noise fluctuation range. In general, modulation thresholds obtained in the presence of the NAM maskers significantly exceeded thresholds obtained with the corresponding steady-state maskers. The ratio ρ of the threshold modulation depth m obtained with the NAM masker to that obtained with the steady-state masker was defined as a conservative index of “envelope masking.” In the second experiment, ρ was determined for two different tasks: the detection of modulation at 20 Hz and steady-state intensity increment detection. Compared to the 50-Hz modulation detection results, the ratio ρ was reduced for the 20-Hz modulation detection task and even more so for the steady-state increment detection task. It is concluded that channel interaction in cochlear implant listeners can be significantly increased when dynamic stimuli are used in place of steady-state stimuli.
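To make concrete how the envelope-masking index ρ is formed, the short sketch below computes the ratio of NAM to steady-state threshold modulation depths per masker placement. The threshold values and masker placements are made up for illustration and are not data from the study.

```python
import numpy as np

def envelope_masking_index(m_nam, m_steady):
    """Ratio rho of the threshold modulation depth obtained with the
    noise-amplitude-modulated (NAM) masker to that obtained with the
    corresponding steady-state masker; rho > 1 indicates that the NAM
    envelope elevated the modulation-detection threshold."""
    return np.asarray(m_nam) / np.asarray(m_steady)

# Hypothetical threshold modulation depths (fractions of carrier amplitude)
# for three masker placements: apical, overlapping, basal.
m_nam    = np.array([0.08, 0.20, 0.10])   # thresholds with NAM maskers
m_steady = np.array([0.05, 0.09, 0.06])   # thresholds with steady-state maskers

rho = envelope_masking_index(m_nam, m_steady)
print("envelope-masking index rho:", np.round(rho, 2))
print("in dB (20*log10 rho):      ", np.round(20 * np.log10(rho), 1))
```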
113 (2003); http://dx.doi.org/10.1121/1.1558378
The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514–1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds’ duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
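The independent-contribution approximation lends itself to a simple summation, sketched below. This is only an illustration of the general form: the power-law growth function, its parameters, and the 4-ms window length are placeholders, not the fitted per-electrode functions or window used in the paper.

```python
import numpy as np

def loudness_growth(current_uA, alpha=2.0, c=1e-4):
    """Placeholder loudness growth function mapping pulse current to a
    loudness contribution; the exponent and scale are illustrative only."""
    return c * current_uA ** alpha

def estimate_loudness(pulse_times_ms, pulse_currents_uA, window_ms=4.0):
    """Sketch of the independent-contribution approximation: the stimulus
    loudness is taken as the largest summed contribution of all pulses
    falling within any placement of the temporal integration window."""
    t = np.asarray(pulse_times_ms)
    contrib = loudness_growth(np.asarray(pulse_currents_uA))
    # Slide the window across each pulse onset and sum contributions inside it.
    totals = [contrib[(t >= t0) & (t < t0 + window_ms)].sum() for t0 in t]
    return max(totals)

# Hypothetical interleaved two-electrode stimulus: pulse onsets (ms) and currents (uA).
times    = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
currents = [300, 320, 300, 320, 300, 320]
print("estimated loudness (arbitrary units):", estimate_loudness(times, currents))
```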
113 (2003); http://dx.doi.org/10.1121/1.1558357
In normal acoustic hearing, the mapping of acoustic frequency information onto the appropriate cochlear place is a natural biological function, but in cochlear implants it is controlled by the speech processor. The cochlear tonotopic range of the implant is determined by the length and insertion depth of the electrode array. Conventional cochlear implant electrode arrays are designed for an insertion of 25 mm inside the round window, and the active electrodes occupy 16 mm, which would place the electrodes in a cochlear region corresponding to an acoustic frequency range of 500–6000 Hz. However, some implant speech processors map an acoustic frequency range from 150 to 10 000 Hz onto these electrodes. While this mapping preserves the entire range of acoustic frequency information, it also results in a compression of the tonotopic pattern of speech information delivered to the brain. The present study measured the effects of such a compression of frequency-to-place mapping on speech recognition using acoustic simulations. Also measured were the effects of an expansion of the frequency-to-place mapping, which produces an expanded representation of speech in the cochlea. Such an expanded representation might improve speech recognition by improving the relative spatial (tonotopic) resolution, like an “acoustic fovea.” Phoneme and sentence recognition was measured as a function of linear (in terms of cochlear distance) frequency-place compression and expansion. These conditions were presented to normal-hearing listeners using a noise-band vocoder, simulating cochlear implant electrodes with different insertion depths and different numbers of electrode channels. The cochlear tonotopic range was held constant by employing the same noise carrier bands for each condition, while the analysis frequency range was either compressed or expanded relative to the carrier frequency range. For each condition, the result was compared to that of the perfect frequency-place match, where the carrier and the analysis bands were matched. Speech recognition in the matched conditions was generally better than in any condition of frequency-place expansion or compression, even when the matched condition eliminated a considerable amount of acoustic information. This result suggests that speech recognition, at least without training, is dependent on the mapping of acoustic frequency information onto the appropriate cochlear place.
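The mismatch between a fixed carrier (electrode) range and a compressed or expanded analysis range can be illustrated with a small sketch. The Greenwood-style frequency-place map, the 8-channel count, and the specific band edges below are assumptions for illustration, not the exact bands or channel counts used in the study.

```python
import numpy as np

# Greenwood-type frequency-place map for the human cochlea, with x the
# distance in mm from the apex (assumed here for illustration).
def greenwood_f(x_mm):
    return 165.4 * (10 ** (0.06 * x_mm) - 1.0)

def greenwood_x(f_hz):
    return np.log10(f_hz / 165.4 + 1.0) / 0.06

COCHLEA_MM = 35.0                      # assumed total cochlear duct length
insertion_mm, active_mm = 25.0, 16.0   # values quoted in the abstract
n_channels = 8                         # illustrative channel count

# Carrier (electrode) region: fixed by insertion depth and array length.
x_apical = COCHLEA_MM - insertion_mm           # most apical electrode, mm from apex
x_basal  = x_apical + active_mm                # most basal electrode
carrier_edges = np.linspace(x_apical, x_basal, n_channels + 1)

# Analysis bands: spaced linearly in cochlear distance but spanning a range
# compressed (here, 150-10000 Hz) relative to the carrier range.
analysis_lo, analysis_hi = 150.0, 10_000.0
analysis_edges = np.linspace(greenwood_x(analysis_lo), greenwood_x(analysis_hi),
                             n_channels + 1)

print("carrier band edges (Hz): ", np.round(greenwood_f(carrier_edges)))
print("analysis band edges (Hz):", np.round(greenwood_f(analysis_edges)))
```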