Contents:
Volume 111, Issue 4, April 2002
- PSYCHOLOGICAL ACOUSTICS 
Rhythmic masking release: Contribution of cues for perceptual organization to the cross-spectral fusion of concurrent narrow-band noises
111 (2002); http://dx.doi.org/10.1121/1.1453450
The contribution of temporal asynchrony, spatial separation, and frequency separation to the cross-spectral fusion of temporally contiguous brief narrow-band noise bursts was studied using the Rhythmic Masking Release (RMR) paradigm. RMR involves the discrimination of one of two possible rhythms, despite perceptual masking of the rhythm by an irregular sequence of sounds identical to the rhythmic bursts, interleaved among them. The release of the rhythm from masking can be induced by causing the fusion of the irregular interfering sounds with concurrent “flanking” sounds situated in different frequency regions. The accuracy and the rated clarity of the identified rhythm in a 2-AFC procedure were employed to estimate the degree of fusion of the interfering sounds with flanking sounds. The results suggest that while synchrony fully fuses short-duration noise bursts across frequency and across space (i.e., across ears and loudspeakers), an asynchrony of 20–40 ms produces no fusion. Intermediate asynchronies of 10–20 ms produce partial fusion, where the presence of other cues is critical for unambiguous grouping. Though frequency and spatial separation reduced fusion, neither of these manipulations was sufficient to abolish it. For the parameters varied in this study, stimulus onset asynchrony was the dominant cue determining fusion, but there were additive effects of the other cues. Temporal synchrony appears to be critical in determining whether brief sounds with abrupt onsets and offsets are heard as one event or more than one.
111 (2002); http://dx.doi.org/10.1121/1.1458027
In most naturally occurring situations, multiple acoustic properties of the sound reaching a listener’s ears change as sound source distance changes. Because many of these acoustic properties, or cues, can be confounded with variation in the acoustic properties of the source and the environment, the perceptual processes subserving distance localization likely combine and weight multiple cues in order to produce stable estimates of sound source distance. Here, this cue-weighting process is examined psychophysically, using a method of virtual acoustics that allows precise measurement and control of the acoustic cues thought to be salient for distance perception in a representative large-room environment. Though listeners’ judgments of sound source distance are found to consistently and exponentially underestimate true distance, the perceptual weight assigned to two primary distance cues (intensity and direct-to-reverberant energy ratio) varies substantially as a function of both sound source type (noise and speech) and angular position (0° and 90° relative to the median plane). These results suggest that the cue-weighting process is flexible, and able to adapt to individual distance cues that vary as a result of source properties and environmental conditions.
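As a rough illustration of the cue-weighting idea described in this abstract, the percept can be sketched as a weighted average of per-cue distance estimates. This is a hypothetical linear model for exposition only; the weights and cue values below are invented and are not taken from the study or its analysis.

```python
# Toy sketch of a linear cue-weighting model for auditory distance
# perception. Each cue (e.g., intensity, direct-to-reverberant energy
# ratio) yields its own distance estimate; the percept is modeled as
# their weighted average. All numbers are illustrative assumptions.

def weighted_distance_estimate(cue_estimates, weights):
    """Combine per-cue distance estimates (meters) using perceptual
    weights that sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * d for w, d in zip(weights, cue_estimates))

# Hypothetical example: the intensity cue suggests 2.0 m, the
# direct-to-reverberant ratio suggests 3.0 m, and intensity is
# weighted more heavily (0.7 vs 0.3) for this source and position.
estimate = weighted_distance_estimate([2.0, 3.0], [0.7, 0.3])
print(round(estimate, 2))  # 2.3
```

The study's finding that weights shift with source type and angular position would correspond, in this sketch, to choosing a different weight vector per condition.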