Contents:
Volume 103, Issue 5, May 1998
- SPEECH PERCEPTION 
103 (1998); http://dx.doi.org/10.1121/1.422787
This study examined the effect of sentence context and local acoustic structure on phoneme categorization. Target stimuli from a 10-step GOAT–COAT continuum, differing only on a temporal cue for voice onset time (VOT), were embedded in carrier sentences that biased interpretation toward either “goat” or “coat.” While subjects listened to the sentences, they also responded as quickly as possible to a visual probe by indicating whether the probe matched the target stimulus they heard. Results showed that the interaction of VOT and sentence context significantly affected both identification and response time (RT) for stimuli near the perceptual boundary: the identification function showed a boundary shift in favor of the biased context, and peak response times for each context reflected the shifted identification boundaries. In addition, response times were faster for identification of stimuli near the category boundary when responses were congruent, rather than incongruent, with the sentence context. The RT differences for congruent versus incongruent responses in the boundary region are interpreted as depending on the results of initial phonological analysis; potentially ambiguous categorizations may be subject to additional evaluation in which a context-congruent response is both preferred and available earlier.
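The boundary shift described above can be made concrete with a small sketch: given the proportion of “coat” responses at each step of the continuum for each biasing context, the category boundary is the (interpolated) step at which the identification function crosses 50%. The data below are purely illustrative, not from the study.

```python
def boundary_50(props):
    """Locate the category boundary: the interpolated continuum step
    (1-indexed) at which the identification function crosses 50%."""
    for i in range(len(props) - 1):
        lo, hi = props[i], props[i + 1]
        if lo < 0.5 <= hi:
            # linear interpolation between adjacent steps
            return (i + 1) + (0.5 - lo) / (hi - lo)
    raise ValueError("identification function never crosses 50%")

# Hypothetical proportions of "coat" responses along a 10-step VOT
# continuum under each biasing sentence context (illustrative values).
goat_context = [0.02, 0.03, 0.05, 0.10, 0.25, 0.45, 0.70, 0.90, 0.97, 0.99]
coat_context = [0.03, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99, 1.00]

# A positive difference means the "coat"-biasing context moved the
# boundary earlier on the continuum (more "coat" responses overall).
shift = boundary_50(goat_context) - boundary_50(coat_context)
print(f"boundary shift toward biased context: {shift:.2f} steps")
```

In the actual experiment the boundary would be estimated per subject and context (often by fitting a logistic function rather than interpolating), and the shift tested for significance.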
Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration
103 (1998); http://dx.doi.org/10.1121/1.422788
Factors leading to variability in auditory-visual (AV) speech recognition include the subject’s ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at a 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition, both in the overall recognition score achieved and in the amount of AV gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing+manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
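The “54% of the variability” figure is the squared Pearson correlation between per-subject AV consonant and AV sentence scores (r² = 0.54 corresponds to r ≈ 0.73). A minimal sketch of that computation, using hypothetical per-subject scores rather than the study’s data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject proportion-correct scores (illustrative only).
av_consonant = [0.55, 0.62, 0.70, 0.74, 0.81, 0.88]
av_sentence  = [0.40, 0.52, 0.58, 0.66, 0.72, 0.85]

r = pearson_r(av_consonant, av_sentence)
# r**2 is the proportion of variance in sentence scores "accounted for"
# by consonant scores, the statistic reported as ~54% in the abstract.
print(f"r = {r:.2f}, variance explained = {r * r:.0%}")
```

With real data the variance-explained value would of course depend on the 29 subjects’ actual scores; the point of the sketch is only the relation r² = proportion of variance accounted for.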