Table of contents:
Volume 104, Issue 6, December 1998
- SPEECH PERCEPTION 
104(1998); http://dx.doi.org/10.1121/1.423939
Studies involving human infants and monkeys suggest that experience plays a critical role in modifying how subjects respond to vowel sounds between and within phonemic classes. Experiments with human listeners were conducted to establish appropriate stimulus materials. Then, eight European starlings (Sturnus vulgaris) were trained to respond differentially to vowel tokens drawn from stylized distributions for the English vowels /i/ and /ɪ/, or from two distributions of vowel sounds that were orthogonal in the F1–F2 plane. Following training, starlings’ responses generalized with facility to novel stimuli drawn from these distributions. Responses could be predicted well on the basis of the frequencies of the first two formants and the distributional characteristics of experienced vowel sounds, with a graded structure about the central “prototypical” vowel of the training distributions. Starling responses corresponded closely to adult human judgments of “goodness” for English vowel sounds. Finally, a simple linear association network model trained with vowels drawn from the avian training set provided a good account of the data. Findings suggest that little more than sensitivity to statistical regularities of language input (probability-density distributions), together with organizational processes that serve to enhance distinctiveness, may accommodate much of what is known about the functional equivalence of vowel sounds.
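As a rough illustration of the kind of "simple linear association network" the abstract invokes, the sketch below fits a linear map from (F1, F2) formant pairs to graded class responses for two vowel distributions. The formant means, spreads, scaling, and the least-squares fit are all illustrative assumptions; the paper's actual stimulus values and learning rule are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training distributions for /i/ (low F1, high F2) and /I/
# (higher F1, lower F2); values in Hz, loosely plausible but NOT the
# paper's actual stimuli.
tokens_i = rng.normal([300, 2300], [30, 100], size=(100, 2))
tokens_ih = rng.normal([430, 2000], [30, 100], size=(100, 2))

X = np.vstack([tokens_i, tokens_ih]) / 1000.0        # scale formants to kHz
Y = np.array([[1, 0]] * 100 + [[0, 1]] * 100)        # one response unit per class

# Linear associator: least-squares weights (plus bias) mapping formant
# input to response strength, a stand-in for Hebbian-style association.
W, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)

def respond(f1_hz, f2_hz):
    """Graded response strengths for (/i/, /I/) given a formant pair."""
    return np.array([f1_hz / 1000.0, f2_hz / 1000.0, 1.0]) @ W
```

The graded (rather than all-or-none) output is what lets such a model mirror "goodness" judgments: tokens near a distribution's center produce stronger responses than peripheral tokens.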
The recognition of sentences in noise by normal-hearing listeners using simulations of cochlear-implant signal processors with 6–20 channels
104(1998); http://dx.doi.org/10.1121/1.423940
Sentences were processed through simulations of cochlear-implant signal processors with 6, 8, 12, 16, and 20 channels and were presented to normal-hearing listeners at +2 dB S/N and at −2 dB S/N. The signal-processing operations included bandpass filtering, rectification, and smoothing of the signal in each band, estimation of the rms energy of the signal in each band (computed every 4 ms), and generation of sinusoids with frequencies equal to the center frequencies of the bands and amplitudes equal to the rms levels in each band. The sinusoids were summed and presented to listeners for identification. At issue was the number of channels necessary to reach maximum performance on tests of sentence understanding. At +2 dB S/N, the performance maximum was reached with 12 channels of stimulation. At −2 dB S/N, the performance maximum was reached with 20 channels of stimulation. These results, in combination with the outcome that in quiet, asymptotic performance is reached with five channels of stimulation, demonstrate that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.
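The processing chain described above (band-split, rectify, smooth, 4-ms rms estimation, sine resynthesis at band centers) can be sketched as a sine-wave vocoder. The log-spaced band edges, ideal FFT-mask filtering, and sample-and-hold envelope below are simplifying assumptions; the paper's actual filter shapes, cutoffs, and smoothing are not specified here.

```python
import numpy as np

def vocode(x, fs, n_channels=8, lo=300.0, hi=5000.0, frame_ms=4.0):
    """Sine-wave vocoder sketch: band-split, envelope-extract, resynthesize."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # assumed log-spaced bands
    frame = int(fs * frame_ms / 1000)              # 4-ms analysis frame
    n = (len(x) // frame) * frame
    x = x[:n]
    t = np.arange(n) / fs
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    X = np.fft.rfft(x)
    out = np.zeros(n)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        # Ideal bandpass via FFT masking (stand-in for real bandpass filters)
        band = np.fft.irfft(X * ((freqs >= f_lo) & (freqs < f_hi)), n)
        rect = np.maximum(band, 0.0)               # half-wave rectification
        # rms of the rectified band, computed once per 4-ms frame
        rms = np.sqrt((rect.reshape(-1, frame) ** 2).mean(axis=1))
        env = np.repeat(rms, frame)                # held envelope (smoothing simplified)
        fc = np.sqrt(f_lo * f_hi)                  # geometric band center
        out += env * np.sin(2 * np.pi * fc * t)    # sinusoid at the band center
    return out
```

Varying `n_channels` (6, 8, 12, 16, 20) while mixing noise into `x` at a chosen S/N reproduces the structure of the experiment, though not its calibrated stimuli.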
Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing
104(1998); http://dx.doi.org/10.1121/1.423941
Current multichannel cochlear implant devices provide high levels of speech performance in quiet. However, performance deteriorates rapidly with increasing levels of background noise. The goal of this study was to investigate whether the noise susceptibility of cochlear implant users is primarily due to the loss of fine spectral information. Recognition of vowels and consonants was measured as a function of signal-to-noise ratio in four normal-hearing listeners in conditions simulating cochlear implants with both CIS and SPEAK-like strategies. Six conditions were evaluated: 3-, 4-, 8-, and 16-band processors (CIS-like), a 6/20 band processor (SPEAK-like), and unprocessed speech. Recognition scores for vowels and consonants decreased as the S/N level worsened in all conditions, as expected. Phoneme recognition threshold (PRT) was defined as the S/N at which the recognition score fell to 50% of its level in quiet. The unprocessed speech had the best PRT, which worsened as the number of bands decreased. Recognition of vowels and consonants was further measured in three Nucleus-22 cochlear implant users using either their normal SPEAK speech processor or a custom processor with a four-channel CIS strategy. The best cochlear implant user showed similar performance with the CIS strategy in quiet and in noise to that of normal-hearing listeners when listening to correspondingly spectrally degraded speech. These findings suggest that the noise susceptibility of cochlear implant users is at least partly due to the loss of spectral resolution. Efforts to improve the effective number of spectral information channels should improve implant performance in noise.
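The phoneme recognition threshold (PRT) defined above, the S/N at which the recognition score falls to 50% of its level in quiet, can be estimated from measured score-versus-S/N points by interpolation. The sketch below assumes scores increase monotonically with S/N; the data points in the usage are invented for illustration, not taken from the study.

```python
import numpy as np

def phoneme_recognition_threshold(snr_db, scores, quiet_score):
    """PRT: the S/N at which the recognition score drops to half its
    value in quiet, found by linear interpolation between measured points."""
    target = 0.5 * quiet_score
    order = np.argsort(snr_db)                      # sort points by S/N
    return float(np.interp(target,
                           np.asarray(scores, float)[order],
                           np.asarray(snr_db, float)[order]))
```

For example, with hypothetical scores of 20/45/70/85% at −5/0/+5/+10 dB S/N and 90% in quiet, the 45% target is crossed at 0 dB S/N. A lower (more negative) PRT indicates better noise robustness, which is why the unprocessed speech condition had the best PRT.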
104(1998); http://dx.doi.org/10.1121/1.423942
This study examined both the identification and discrimination of vowels by three listener groups: elderly hearing-impaired, elderly normal-hearing, and young normal-hearing. Each hearing-impaired listener had a longstanding symmetrical, sloping, mild-to-moderate sensorineural hearing loss. Two signal levels [70 and 95 dB sound-pressure level (SPL)] were selected to assess the effects of audibility on both tasks. The stimuli were four vowels, /ɪ,e,ɛ,æ/, synthesized for a female talker. Difference limens (DLs) were estimated for both F1 and F2 formants using adaptive tracking. Discrimination DLs for F1 were the same across groups and levels. Discrimination DLs for F2 showed that the best formant resolution was for the young normal-hearing group, the poorest was for the elderly normal-hearing group, and resolution for the elderly hearing-impaired group fell in between the other two at both signal levels. Only the elderly hearing-impaired group had DLs that were significantly poorer than those of the young listeners at the lower, 70 dB, level. In the identification task at both levels, young normal-hearing listeners demonstrated near-perfect performance, while both elderly groups were similar to one another and demonstrated lower performance. The results were examined using correlational analysis of the performance of the hearing-impaired subjects relative to that of the normal-hearing groups. The results suggest that both age and hearing impairment contribute to decreased vowel-perception performance in elderly hearing-impaired persons.
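Adaptive tracking, as used above to estimate formant DLs, typically means a staircase that shrinks the stimulus difference after correct responses and grows it after errors. The sketch below is a generic 2-down/1-up rule (tracking roughly 70.7% correct); the study's exact tracking rule, step sizes, and stopping criterion are not stated here and may differ.

```python
import numpy as np

def two_down_one_up(respond, start=60.0, step=2.0, reversals=8):
    """Generic 2-down/1-up staircase: `respond(delta)` returns True for a
    correct trial at formant difference `delta` (e.g., Hz). The DL estimate
    is the mean of the last reversal points."""
    delta, correct_run, revs, direction = start, 0, [], -1
    while len(revs) < reversals:
        if respond(delta):                  # correct response
            correct_run += 1
            if correct_run == 2:            # two in a row -> make task harder
                correct_run = 0
                if direction == +1:         # track turned downward: reversal
                    revs.append(delta)
                direction = -1
                delta = max(delta - step, step)
        else:                               # error -> make task easier
            correct_run = 0
            if direction == -1:             # track turned upward: reversal
                revs.append(delta)
            direction = +1
            delta += step
    return float(np.mean(revs[-6:]))        # DL: mean of last 6 reversals
```

With a deterministic simulated listener who is correct whenever the difference is at least 10 Hz (`lambda d: d >= 10`), the staircase settles around that threshold, which is the behavior a DL estimate should capture.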