Index of content:
Volume 119, Issue 5, May 2006
- SPEECH PERCEPTION 
119(2006); http://dx.doi.org/10.1121/1.2188377
The ability to integrate and weight information across dimensions is central to perception and is particularly important for speech categorization. The present experiments investigate cue weighting by training participants to categorize sounds drawn from a two-dimensional acoustic space defined by the center frequency (CF) and modulation frequency (MF) of frequency-modulated sine waves. These dimensions were psychophysically matched to be equally discriminable and, in the first experiment, were equally informative for accurate categorization. Nevertheless, listeners’ category responses reflected a bias for use of CF. This bias remained even when the informativeness of CF was decreased by shifting distributions to create more overlap in CF. A reversal of weighting (MF over CF) was obtained when distribution variance was increased for CF. These results demonstrate that even when equally informative and discriminable, acoustic cues are not necessarily equally weighted in categorization; listeners exhibit biases when integrating multiple acoustic dimensions. Moreover, changes in weighting strategies can be affected by changes in input distribution parameters. This methodology provides potential insights into acquisition of speech-sound categories, particularly second-language categories. One implication is that ineffective cue weighting strategies for phonetic categories may be alleviated by manipulating the variance of uninformative dimensions in training stimuli.
119(2006); http://dx.doi.org/10.1121/1.2184289
It is uncertain from previous research to what extent the perceptual system retains plasticity after attunement to the native language (L1) sound system. This study evaluated second-language (L2) vowel discrimination by individuals who began learning the L2 as children (“early learners”). Experiment 1 identified procedures that lowered discrimination scores for foreign vowel contrasts in an AXB test (with three physically different stimuli per trial, where “X” was drawn from the same vowel category as “A” or “B”). Experiment 2 examined the AXB discrimination of English vowels by native Spanish early learners and monolingual speakers of Spanish and English (20 per group) at interstimulus intervals (ISIs) of 1000 and 0 ms. The Spanish monolinguals obtained near-chance scores for three difficult vowel contrasts, presumably because they did not perceive the vowels as distinct phonemes and because the experimental design hindered low-level encoding strategies. Like the English monolinguals, the early learners obtained high scores, indicating they had shown considerable perceptual learning. However, statistically significant differences between early learners and English monolinguals for two of three difficult contrasts at the 0-ms ISI suggested that their underlying perceptual systems were not identical. Implications for claims regarding perceptual plasticity following L1 attunement are discussed.
Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants
119(2006); http://dx.doi.org/10.1121/1.2188917
We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation; to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly, whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
119(2006); http://dx.doi.org/10.1121/1.2188688
Previous investigations have suggested that hearing-impaired (HI) listeners have reduced masking release (MR) compared to normal-hearing (NH) listeners when they listen in modulated noise. The current study examined the following questions that have not been clearly answered: First, when HI listeners are amplified so that their performance is equal to that of NH listeners in quiet and in steady noise, do HI listeners still show reduced MR in modulated noise compared to NH listeners? Second, is the masking release the same for sentences and CV syllables? Third, does forward masking significantly contribute to the variability in performance among HI listeners? To compensate for reduced hearing sensitivity in HI listeners, the spectrum levels of both speech and noise were adjusted based on the individual hearing loss. There was no significant difference between the performance of NH listeners and that of HI listeners in steady noise and in quiet. However, the amount of MR for sentences and for CV syllables was significantly reduced for HI listeners. For sentence recognition, the amount of MR seemed to be more related to hearing sensitivity at low-to-mid frequencies than to forward masking. In contrast, forward masking thresholds appear to be a major contributor to the amount of MR for syllable recognition.