Table of contents:
Volume 120, Issue 4, October 2006
- SPEECH PERCEPTION 
Contribution of low-frequency acoustic information to Chinese speech recognition in cochlear implant simulations
120 (2006); http://dx.doi.org/10.1121/1.2336990
Chinese sentence recognition strongly relates to the reception of tonal information. For cochlear implant (CI) users with residual acoustic hearing, tonal information may be enhanced by restoring low-frequency acoustic cues in the nonimplanted ear. The present study investigated the contribution of low-frequency acoustic information to Chinese speech recognition in Mandarin-speaking normal-hearing subjects listening to acoustic simulations of bilaterally combined electric and acoustic hearing. Subjects listened to a 6-channel CI simulation in one ear and low-pass filtered speech in the other ear. Chinese tone, phoneme, and sentence recognition were measured in steady-state, speech-shaped noise, as a function of the cutoff frequency for low-pass filtered speech. Results showed that low-frequency acoustic information below contributed most strongly to tone recognition, while low-frequency acoustic information above contributed most strongly to phoneme recognition. For Chinese sentences, speech reception thresholds (SRTs) improved with increasing amounts of low-frequency acoustic information, and significantly improved when low-frequency acoustic information above was preserved. SRTs were not significantly affected by the degree of spectral overlap between the CI simulation and low-pass filtered speech. These results suggest that, for CI patients with residual acoustic hearing, preserving low-frequency acoustic information can improve Chinese speech recognition in noise.
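The bilaterally combined electric-and-acoustic simulation described above (a multi-channel CI simulation in one ear, low-pass filtered speech in the other) is typically implemented as a noise-excited channel vocoder. A minimal sketch in Python follows; the log-spaced band edges, 4th-order Butterworth filters, and 160 Hz envelope smoothing are illustrative assumptions, since the abstract does not give the study's exact processing parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def lowpass(x, cutoff_hz, fs):
    """Low-pass filter a signal (models the residual-acoustic-hearing ear)."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfilt(sos, x)

def noise_vocoder(x, fs, n_channels=6, f_lo=100.0, f_hi=6000.0, seed=0):
    """Noise-excited channel vocoder, a common acoustic CI simulation.

    Band edges, filter order, and envelope cutoff are assumptions for
    illustration; the abstract does not specify them.
    """
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, x)                 # analysis band
        env = np.abs(hilbert(band))            # temporal envelope
        env = lowpass(env, 160.0, fs)          # smooth the envelope
        carrier = rng.standard_normal(len(x))  # broadband noise carrier
        out += env * sosfilt(sos, carrier)     # band-limited, envelope-modulated noise
    return out
```

In the paradigm above, one ear would receive `noise_vocoder(speech, fs)` and the other `lowpass(speech, cutoff, fs)`, with the low-pass cutoff varied across conditions.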
120 (2006); http://dx.doi.org/10.1121/1.2335422
The distribution of energy across the noise spectrum provides the primary cues for the identification of a fricative. Formant transitions have been reported to play a role in the identification of some fricatives, but the combined results so far are conflicting. We report five experiments testing the hypothesis that listeners differ in their use of formant transitions as a function of the presence of spectrally similar fricatives in their native language. Dutch, English, German, Polish, and Spanish native listeners performed phoneme-monitoring experiments with pseudowords containing either coherent or misleading formant transitions for the fricatives and . Listeners of German and Dutch, both languages without spectrally similar fricatives, were not affected by the misleading formant transitions. Listeners of the remaining languages were misled by incorrect formant transitions. In an untimed labeling experiment, both Dutch and Spanish listeners provided goodness ratings that revealed sensitivity to the acoustic manipulation. We conclude that all listeners may be sensitive to mismatching information at a low auditory level, but that they do not necessarily take full advantage of all available systematic acoustic variation when identifying phonemes. Formant transitions may be most useful for listeners of languages with spectrally similar fricatives.
120 (2006); http://dx.doi.org/10.1121/1.2338285
This study explored sensitivity to word-level phonotactic patterns in English and Japanese monolingual infants. Infants at the ages of 6, 12, and were tested on their ability to discriminate between test words using a habituation-switch experimental paradigm. All of the test words, neek, neeks, and neekusu, are phonotactically legitimate in English, whereas the first two words are critically noncanonical in Japanese. This language-specific phonotactic congruence influenced infants' discrimination performance. English-learning infants could discriminate between neek and neeks at the age of , but Japanese infants could not. Infants of both language groups showed a similar developmental pattern for discrimination of neek and neeks, but Japanese infants showed a different trajectory from English infants for neekusu/neeks. These differences reflect the different status of these word patterns with respect to the phonotactics of the two languages, and reveal early sensitivity to subtle phonotactic patterns in the input of each language.
Perception of native and non-native affricate-fricative contrasts: Cross-language tests on adults and infants
120 (2006); http://dx.doi.org/10.1121/1.2338290
Previous studies have shown improved sensitivity to native-language contrasts and reduced sensitivity to non-native phonetic contrasts when comparing 6–8 and -old infants. This developmental pattern is interpreted as reflecting the onset of language-specific processing around the first birthday. However, generalization of this finding is limited by the fact that studies have yielded inconsistent results and that insufficient numbers of phonetic contrasts have been tested developmentally; this is especially true for native-language phonetic contrasts. Three experiments assessed the effects of language experience on affricate-fricative contrasts in a cross-language study of English and Mandarin adults and infants. Experiment 1 showed that English-speaking adults score lower than Mandarin-speaking adults on Mandarin alveolo-palatal affricate-fricative discrimination. Experiment 2 examined developmental change in the discrimination of this contrast in English- and Mandarin-learning infants between 6 and of age. The results demonstrated that native-language performance significantly improved with age while performance on the non-native contrast decreased. Experiment 3 replicated the perceptual improvement for a native contrast: 6–8 and -old English-learning infants showed a performance increase at the older age. The results add to our knowledge of the developmental patterns of native and non-native phonetic perception.
Factors affecting masking release for speech in modulated noise for normal-hearing and hearing-impaired listeners
120 (2006); http://dx.doi.org/10.1121/1.2266530
The Speech Reception Threshold for sentences in stationary noise and in several amplitude-modulated noises was measured for 8 normal-hearing listeners, 29 sensorineural hearing-impaired listeners, and 16 normal-hearing listeners with simulated hearing loss. This approach makes it possible to determine whether the reduced benefit from masker modulations, as often observed for hearing-impaired listeners, is due to a loss of signal audibility, or due to suprathreshold deficits, such as reduced spectral and temporal resolution, which were measured in four separate psychophysical tasks. Results show that the reduced masking release can only partly be accounted for by reduced audibility, and that, when considering suprathreshold deficits, the normal effects associated with a raised presentation level should be taken into account. In this perspective, reduced spectral resolution does not appear to qualify as an actual suprathreshold deficit, while reduced temporal resolution does. Temporal resolution and age are shown to be the main factors governing masking release for speech in modulated noise, accounting for more than half of the intersubject variance. Their influence appears to be related to the processing of mainly the higher stimulus frequencies. Results based on calculations of the Speech Intelligibility Index in modulated noise confirm these conclusions.
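The masker conditions and the masking-release measure in the study above can be sketched as follows. The sinusoidal modulation rate and depth here are illustrative assumptions, not the study's actual parameters, and a simple amplitude-modulated masker stands in for its several modulated noises.

```python
import numpy as np

def modulated_masker(noise, fs, rate_hz=8.0, depth=1.0):
    """Sinusoidally amplitude-modulate a masker.

    The study compared several amplitude-modulated noises against a
    stationary noise; the rate and depth here are assumptions for
    illustration only.
    """
    t = np.arange(len(noise)) / fs
    return noise * (1.0 + depth * np.sin(2.0 * np.pi * rate_hz * t))

def masking_release(srt_stationary_db, srt_modulated_db):
    """Masking release in dB: how much lower (better) the speech
    reception threshold is in modulated noise than in stationary
    noise, i.e. the benefit of listening in the masker's dips."""
    return srt_stationary_db - srt_modulated_db
```

For example, a listener with an SRT of -5 dB SNR in stationary noise and -12 dB SNR in modulated noise shows 7 dB of masking release; the reduced benefit reported for hearing-impaired listeners corresponds to a smaller difference between the two thresholds.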