Index of content:
Volume 117, Issue 2, February 2005
- PSYCHOLOGICAL ACOUSTICS 
117(2005); http://dx.doi.org/10.1121/1.1850209
It is difficult to hear out individually the components of a “chord” of equal-amplitude pure tones with synchronous onsets and offsets. In the present study, this was confirmed using 300-ms random (inharmonic) chords with components at least 1/2 octave apart. Following each chord, after a variable silent delay, listeners were presented with a single pure tone which was either identical to one component of the chord or halfway in frequency between two components. These two types of sequence could not be reliably discriminated from each other. However, it was also found that if the single tone following the chord was instead slightly (e.g., 1/12 octave) lower or higher in frequency than one of its components, the same listeners were sensitive to this relation. They could perceive a pitch shift in the corresponding direction. Thus, it is possible to perceive a shift in a nonperceived frequency/pitch. This paradoxical phenomenon provides psychophysical evidence for the existence of automatic “frequency-shift detectors” in the human auditory system. The data reported here suggest that such detectors operate at an early stage of auditory scene analysis but can be activated by a pair of sounds separated by a few seconds.
117(2005); http://dx.doi.org/10.1121/1.1836832
Two experiments compared the effect of supplying visual speech information (e.g., lipreading cues) on the ability to hear one female talker’s voice in the presence of steady-state noise or a masking complex consisting of two other female voices. In the first experiment intelligibility of sentences was measured in the presence of the two types of maskers with and without perceived spatial separation of target and masker. The second study tested detection of sentences in the same experimental conditions. Results showed that visual cues provided more benefit for both recognition and detection of speech when the masker consisted of other voices (versus steady-state noise). Moreover, visual cues provided greater benefit when the target speech and masker were spatially coincident versus when they appeared to arise from different spatial locations. The data obtained here are consistent with the hypothesis that lipreading cues help to segregate a target voice from competing voices, in addition to the established benefit of supplementing masked phonetic information.