Table of contents:
Volume 103, Issue 2, February 1998
- SPEECH PERCEPTION 
Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. 103 (1998); http://dx.doi.org/10.1121/1.421224
Two experiments were conducted to examine the relationship between audibility and speech recognition for individuals with sensorineural hearing losses ranging from mild to profound. Speech scores measured using filtered sentences were compared to predictions based on the Speech Intelligibility Index (SII). The SII greatly overpredicted performance at high sensation levels and, for many listeners, underpredicted performance at low sensation levels. To improve predictive accuracy, the SII needed to be modified. Scaling the index by a multiplicative proficiency factor was found to be inappropriate, and alternative modifications were explored. The data were best fitted by a method that combined the standard level distortion factor (which accounts for the decrease in speech intelligibility at high presentation levels, based on measurements of normal-hearing people) with an individual frequency-dependent proficiency. This method was evaluated using broadband sentence and nonsense syllable tests. Results indicate that audibility cannot adequately explain the speech recognition of many hearing-impaired listeners. Considerable deviations from audibility-based predictions remained, especially for people with severe losses listening at high sensation levels. The data suggest that, contrary to the assumption underlying the SII, the information contained in each frequency band is not strictly additive. The data also suggest that for people with severe or profound losses at the high frequencies, amplification should achieve only a low or zero sensation level in this region, contrary to the implications of the unmodified SII.
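The modified prediction described in the abstract amounts to a weighted band sum. Below is a minimal sketch in the spirit of the band-importance form of the SII (ANSI S3.5-1997), extended with a level distortion factor and a listener-specific, frequency-dependent proficiency weight; all band values are hypothetical illustrations, not the paper's data or the exact standardized procedure.

```python
def modified_sii(importance, audibility, level_distortion, proficiency):
    """Return sum_i I_i * A_i * L_i * P_i, with audibility clamped to [0, 1]."""
    total = 0.0
    for I, A, L, P in zip(importance, audibility, level_distortion, proficiency):
        total += I * min(max(A, 0.0), 1.0) * L * P
    return total

# Four hypothetical frequency bands, low to high (illustrative values only):
importance       = [0.30, 0.30, 0.25, 0.15]  # band-importance weights (sum to 1)
audibility       = [1.00, 0.90, 0.60, 0.20]  # audible fraction of speech cues
level_distortion = [0.95, 0.90, 0.85, 0.80]  # penalty at high presentation levels
proficiency      = [1.00, 0.95, 0.70, 0.30]  # listener- and frequency-specific

print(f"modified SII = {modified_sii(importance, audibility, level_distortion, proficiency):.3f}")
```

Note that this sketch is still additive across bands; the abstract's point is precisely that such additivity breaks down for many hearing-impaired listeners, so a model of this form is an approximation rather than an exact prediction.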
The recognition of vowels produced by men, women, boys, and girls by cochlear implant patients using a six-channel CIS processor. 103 (1998); http://dx.doi.org/10.1121/1.421248
Five patients who used a six-channel, continuous interleaved sampling (CIS) cochlear implant were presented, in two experiments, with vowels from a large sample of men, women, boys, and girls for identification. At issue in the first experiment was whether vowels from one speaker group, i.e., men, were more identifiable than vowels from the other speaker groups. At issue in the second experiment was the role of the fifth and sixth channels in the identification of vowels from the different speaker groups. Experiment 1 found that (i) the vowels produced by men were easier to identify than vowels produced by any of the other speaker groups, (ii) vowels from women and boys were more difficult to identify than vowels from men but less difficult than vowels from girls, and (iii) vowels from girls were more difficult to identify than vowels from all other groups. In experiment 2, removal of channels 5 and 6 from the processor impaired the identification of vowels produced by women, boys, and girls but did not impair the identification of vowels produced by men. The results of experiment 1 demonstrate that scores on tests of vowels produced by men overestimate the ability of patients to recognize vowels in the broader context of multi-talker communication. The results of experiment 2 demonstrate that channels 5 and 6 become more important for vowel recognition as the second formants of the speakers increase in frequency.
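A simple way to see why the highest channels matter more for higher-F2 talkers is to look at where F2 falls in a CIS analysis filterbank. The sketch below assumes a hypothetical six-channel logarithmic filterbank spanning 300-5500 Hz (the paper's actual corner frequencies are not given here), together with rough, textbook-style F2 values that are purely illustrative:

```python
# Hypothetical six-channel logarithmic filterbank; the paper's actual CIS
# analysis bands are not reproduced here. The point of the sketch: as F2
# rises (women's and children's vowels), it lands in higher channels.
LOW, HIGH, N_CH = 300.0, 5500.0, 6
edges = [LOW * (HIGH / LOW) ** (i / N_CH) for i in range(N_CH + 1)]
print("band edges (Hz):", [round(e) for e in edges])

def channel_for(freq_hz):
    """Return the 1-based channel whose analysis band contains freq_hz."""
    for ch in range(N_CH):
        if edges[ch] <= freq_hz < edges[ch + 1]:
            return ch + 1
    return None

# Illustrative F2 values only (rough textbook figures, not the paper's data):
for label, f2 in [("lower F2, e.g. a man's back vowel", 1100.0),
                  ("mid F2, e.g. a man's front vowel", 2000.0),
                  ("high F2, e.g. a girl's front vowel", 3500.0)]:
    print(f"F2 = {f2:4.0f} Hz ({label}) -> channel {channel_for(f2)}")
```

Under this (assumed) allocation, lower F2's fall in channels 3-4 while the higher F2's typical of women, boys, and girls reach channels 5-6, consistent with the finding that removing those channels hurt only the non-male talker groups.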
Pitches of concurrent vowels. 103 (1998); http://dx.doi.org/10.1121/1.421249
When two vowels are presented simultaneously, listeners can report their phonemic identities more accurately if their fundamental frequencies (F0's) are different rather than the same. If the F0 difference (ΔF0) is large, listeners hear two vowels on different pitches; if the ΔF0 is small, the vowels are identified less accurately and they do not evoke different pitches. The present study used a matching task to obtain judgments of the pitches evoked by “double vowels” created from pairwise combinations of the steady-state synthetic vowels /i/, /ɑ/, /u/, /æ/, and /ɚ/. One F0 was always 100 Hz; the other was either 0, 0.25, 0.5, 1, 2, or 4 semitones higher. Experienced listeners adjusted the F0 of a tone complex to assign pitch matches to 50-ms or 200-ms double vowels. For ΔF0's up to two semitones, listeners’ matches formed a single cluster in the frequency region spanned by the two F0's. When the ΔF0 was 4 semitones, the matches generally formed two clusters close to the F0 of each vowel, suggesting that listeners perceive two distinct pitches when the ΔF0 is 4 semitones but only one clear pitch (possibly accompanied by one or more weaker pitches) at smaller ΔF0's. When the duration was reduced from 200 ms to 50 ms, only a subset of the vowel pairs with a ΔF0 of 4 semitones produced a bimodal distribution of matches. In general, 50-ms stimuli were matched less consistently than their 200-ms counterparts, indicating that the pitches of concurrent vowels emerge less clearly when the stimuli are brief. Comparisons of pitch and vowel identification data revealed a moderate correlation between match intervals (defined as the absolute frequency difference between first and second pitch matches) and identification accuracy for the 200-ms stimuli with the largest ΔF0 of 4 semitones. The link between match intervals and vowel identification was weak or absent in conditions where the stimuli evoked only one pitch.
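For reference, the ΔF0 conditions above correspond to second-vowel F0's given by the standard equal-tempered semitone relation f = 100 · 2^(n/12). A quick illustrative conversion (the step sizes are from the abstract; the code is just the textbook formula):

```python
# Convert the semitone steps listed in the abstract to the second vowel's F0,
# given the fixed 100-Hz reference. Equal-tempered relation: f = f_ref * 2**(n/12).
F0_REF = 100.0  # Hz, the fixed F0 of one vowel
for semitones in (0, 0.25, 0.5, 1, 2, 4):
    f0_2 = F0_REF * 2 ** (semitones / 12)
    print(f"ΔF0 = {semitones:>4} semitones -> second F0 = {f0_2:6.2f} Hz")
```

So even the largest ΔF0 used here separates the two F0's by only about 26 Hz (100 Hz vs. roughly 126 Hz).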
Language, context, and speaker effects in the identification and discrimination of English /r/ and /l/ by Japanese and Korean listeners. 103 (1998); http://dx.doi.org/10.1121/1.421225
Japanese and Korean listeners’ identification and discrimination of English /r/ and /l/ were compared using a common set of minimal pair stimuli. The effects of speakers (two native speakers of Australian English), position of the contrast within the word (word-initial, initial consonant cluster, and medial positions), and listening task (forced-choice identification versus oddball discrimination) were examined, with a view to assessing the relative importance of language-specific and language-independent factors operating at the acoustic-phonetic and phonological levels of signal processing in “foreign sound” speech perception. Both prior phonological learning and the relative acoustic discriminability of the items affected subjects’ performance on the identification test. Where both factors were engaged, phonological learning effects predominated over the effects of acoustic discriminability. The extent to which a speaker encoded critical acoustic cues for the /r-l/ distinction was found to affect /r-l/ identification. Dynamic spectral features known to be relevant for the /r-l/ contrast were effective in predicting (in a linear regression analysis) speaker-dependent differences in identification scores. Although the discrimination test may have been influenced by ceiling effects, the performance profiles on the identification and discrimination tests were quite different, indicating that the two tests imposed different task demands upon listeners and that phonological processing of the signal was more engaged by the identification task.