Index of content:
Volume 111, Issue 2, February 2002
- SPEECH PROCESSING AND COMMUNICATION SYSTEMS 
111(2002); http://dx.doi.org/10.1121/1.1427666
The problem of implementing a detector for stop consonants in continuously spoken speech is considered. The problem is posed as one of finding an optimal filter (linear or nonlinear) that operates on an appropriately chosen representation and ideally outputs a 1 when a stop occurs and a 0 otherwise. The performance of several variants of a canonical stop detector is discussed, and its implications for human and machine speech recognition are considered.
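The filter-with-binary-output framing can be illustrated with a toy detector that is not the paper's method: stops typically show a low-energy closure followed by an abrupt burst, so a sketch can threshold short-time frame energy for that pattern. All frame sizes and thresholds below are illustrative assumptions.

```python
import numpy as np

def stop_detector(x, frame_len=160, closure_frames=5, burst_ratio=8.0):
    """Toy binary stop detector: flag a frame as a stop (output 1) when a
    sustained low-energy closure is followed by an abrupt energy rise
    (the burst). Thresholds are illustrative, not from the paper."""
    n_frames = len(x) // frame_len
    energy = np.array([np.mean(x[i * frame_len:(i + 1) * frame_len] ** 2)
                       for i in range(n_frames)])
    floor = np.median(energy) * 0.05 + 1e-12   # "silence" level, relative
    out = np.zeros(n_frames, dtype=int)
    for i in range(closure_frames, n_frames):
        closed = np.all(energy[i - closure_frames:i] < floor)
        burst = energy[i] > burst_ratio * (
            np.mean(energy[i - closure_frames:i]) + 1e-12)
        if closed and burst:
            out[i] = 1
    return out
```

On a synthetic signal built as vowel-like noise, a near-silent closure, and a loud burst, the detector outputs 1 only at the burst frame; the paper's optimal linear or nonlinear filters would replace this hand-set rule.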
111(2002); http://dx.doi.org/10.1121/1.1433815
This paper describes an application of the multichannel signal processing technique of adaptive decorrelation filtering to the design of an assistive listening system. A simulated “dinner table” scenario was studied. The speech signal of a desired talker was corrupted by three simultaneous speech jammers and by a speech-shaped diffuse noise. Adaptive decorrelation filtering was used to extract the desired speech from the interfering speech and noise. The effectiveness of the assistive listening system was evaluated by observing improvements in A-weighted signal-to-noise ratio (SNR) and in sentence intelligibility, where the latter was evaluated in a listening test with eight normal-hearing subjects and three subjects with hearing impairments. Significant improvements in SNR and sentence intelligibility were achieved with the use of the assistive listening system. For subjects with normal hearing, the speech reception threshold was improved by 3 to 5 dBA, and for subjects with hearing impairments, the threshold was improved by 4 to 8 dBA.
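The core idea of adaptive decorrelation filtering — adapt cross-channel filters until the separated outputs are mutually uncorrelated — can be sketched in a minimal two-channel, one-tap form. This is a simplified stand-in, not the paper's multichannel system: the mixing gains, delays, and step size below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 60000
s1 = rng.standard_normal(N)   # stand-in for the desired talker
s2 = rng.standard_normal(N)   # stand-in for one jammer
a, b = 0.5, 0.3               # hypothetical cross-coupling gains

# Each channel picks up the other source through a one-sample delay path.
x1 = s1.copy(); x1[1:] += a * s2[:-1]
x2 = s2.copy(); x2[1:] += b * s1[:-1]

w1 = w2 = 0.0                 # one-tap cross-cancellation filters
mu = 0.001                    # adaptation step size
y1_prev = y2_prev = 0.0
for n in range(N):
    # Each output subtracts a filtered copy of the other output.
    y1 = x1[n] - w1 * y2_prev
    y2 = x2[n] - w2 * y1_prev
    # Stochastic update driving the lagged output cross-correlation to zero.
    w1 += mu * y1 * y2_prev
    w2 += mu * y2 * y1_prev
    y1_prev, y2_prev = y1, y2
```

After adaptation, `w1` and `w2` converge near the true coupling gains (0.5 and 0.3), so each output is dominated by one source; the real system uses longer adaptive filters per channel pair to handle room acoustics and multiple jammers.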
An overlapping-feature-based phonological model incorporating linguistic constraints: Applications to speech recognition
111(2002); http://dx.doi.org/10.1121/1.1420380
Modeling phonological units of speech is a critical issue in speech recognition. In this paper, our recent development of an overlapping-feature-based phonological model that represents long-span contextual dependency in speech acoustics is reported. In this model, high-level linguistic constraints are incorporated in the automatic construction of feature-overlapping patterns and of the hidden Markov model (HMM) states induced by such patterns. The main linguistic information explored includes word and phrase boundaries, morphemes, syllables, syllable constituent categories, and word stress. A consistent computational framework developed for the construction of the feature-based model and the major components of the model are described. Experimental results on the use of the overlapping-feature model in an HMM-based speech recognition system show improvements over the conventional triphone-based phonological model.
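The notion of linguistically constrained feature overlap can be illustrated with a toy state constructor, which is not the paper's algorithm: each phone is a bundle of articulatory features, one feature may spread leftward (anticipatory coarticulation), and a word boundary blocks the spread. The feature names and the single blocking rule are illustrative assumptions.

```python
# Toy construction of feature-overlap states. Phones are feature bundles;
# the "place" feature of the next phone may spread leftward unless a word
# boundary intervenes -- a drastically simplified stand-in for the
# linguistic constraints described in the abstract.
def build_states(phones, boundaries):
    """phones: list of dicts mapping feature name -> value.
    boundaries: set of indices i such that a word boundary falls
    between phone i and phone i+1."""
    states = []
    for i, ph in enumerate(phones):
        states.append(tuple(sorted(ph.items())))        # canonical state
        if i + 1 < len(phones) and i not in boundaries:
            mixed = dict(ph)
            mixed["place"] = phones[i + 1]["place"]     # anticipatory overlap
            if mixed != ph:                             # only if it differs
                states.append(tuple(sorted(mixed.items())))
    return states

# Example: /n/ before /b/ (as in "green boat") may surface with labial place.
phones = [
    {"place": "alveolar", "voice": "+", "nasal": "+"},  # n
    {"place": "labial",   "voice": "+", "nasal": "-"},  # b
]
print(build_states(phones, boundaries=set()))
```

Within a word this yields three states (the two canonical bundles plus a labialized /n/ variant); inserting a boundary between the phones suppresses the overlap state, which is the kind of constraint-driven pruning the paper's construction performs at scale.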