Table of contents:
Volume 134, Issue 3, September 2013
- PHYSIOLOGICAL ACOUSTICS 
134(2013); http://dx.doi.org/10.1121/1.4816493
Recordings of transient-evoked otoacoustic emissions (TEOAEs) suffer from two main sources of contamination: random noise and the stimulus artifact. The stimulus artifact can be substantially reduced by using a derived non-linear recording paradigm. Three such paradigms are analyzed, called here the level derived non-linear (LDNL), the double-evoked (DE), and the rate derived non-linear (RDNL) paradigms. While these methods successfully reduce the stimulus artifact, they increase contamination by random noise. In this study, the signal-to-noise ratio (SNR) achievable by these three paradigms is compared within a common theoretical framework. The analysis also allows the parameters of the RDNL paradigm to be optimized for maximum SNR. Calculations based on the analysis, with typical parameters used in practice, suggest that when ranked by SNR for a given averaging time, RDNL performs best, followed by the LDNL and DE paradigms.
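The artifact-cancellation idea behind derived non-linear paradigms can be illustrated with a toy numerical sketch. In the classic non-linear buffer, three responses to a stimulus of amplitude +a are summed with one response to a stimulus of amplitude -3a: anything that scales linearly with the stimulus (the artifact) cancels, while the compressively growing emission survives. All waveform shapes, amplitudes, and the tanh growth function below are illustrative assumptions, not the paper's theoretical framework:

```python
import numpy as np

t = np.linspace(0.0, 0.02, 400)          # 20 ms analysis window (hypothetical)

def artifact(amp):
    # Stimulus artifact: scales linearly with stimulus amplitude.
    return amp * np.exp(-t / 0.002) * np.sin(2 * np.pi * 2000 * t)

def emission(amp):
    # Cochlear emission: compressive (tanh) growth with stimulus level.
    return 0.1 * np.tanh(5 * amp) * np.exp(-t / 0.008) * np.sin(2 * np.pi * 1500 * t)

def record(amp):
    # A single noise-free recording = artifact + emission.
    return artifact(amp) + emission(amp)

a = 0.2
# Derived non-linear buffer: three presentations at +a, one at -3a.
dnl = 3 * record(a) + record(-3 * a)

# The linear artifact cancels: 3*artifact(a) + artifact(-3a) = 0
# (up to floating-point round-off); the compressive emission does not.
residual_artifact = 3 * artifact(a) + artifact(-3 * a)
print(np.max(np.abs(residual_artifact)))  # near machine precision
print(np.max(np.abs(dnl)))                # residual emission remains
```

Note that the surviving emission in `dnl` is smaller than three times a single emission (3·tanh(a) minus tanh(3a)), while the random noise of four recordings would add in power; this is the SNR penalty the abstract refers to.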
Short-latency transient-evoked otoacoustic emissions as predictors of hearing status and thresholds
134(2013); http://dx.doi.org/10.1121/1.4817831
Estimating audiometric thresholds using objective measures can be clinically useful when reliable behavioral information cannot be obtained. Transient-evoked otoacoustic emissions (TEOAEs) are effective for determining hearing status (normal hearing vs hearing loss), but previous studies have found them less useful for predicting audiometric thresholds. Recent work has demonstrated the presence of short-latency TEOAE components in normal-hearing ears, which have typically been eliminated from the analyses used in previous studies. The current study investigated the ability of short-latency components to predict hearing status and thresholds from 1 to 4 kHz. TEOAEs were measured in 77 adult ears with thresholds ranging from normal hearing to moderate sensorineural hearing loss. Emissions were bandpass filtered at center frequencies from 1 to 4 kHz. TEOAE waveforms were analyzed within two time windows that contained either short- or long-latency components, and each windowed waveform was quantified by its root-mean-square amplitude. Long-latency components were better overall predictors of hearing status and thresholds than short-latency components, and including short-latency components alongside long-latency components in multivariate analyses produced no significant improvement in predictions. The results show that short-latency TEOAE components, as analyzed in the current study, were less predictive of both hearing status and thresholds from 1 to 4 kHz than long-latency components.
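The time-windowing and RMS quantification described in this abstract can be sketched as follows. The component latencies, window boundaries, and waveform shapes below are illustrative assumptions for a synthetic signal, not values from the study:

```python
import numpy as np

fs = 48000                                # sample rate, Hz (assumed)
t = np.arange(0, 0.020, 1 / fs)           # 20 ms post-stimulus record

def burst(center_s, freq_hz, amp):
    """Gaussian-windowed tone burst standing in for a TEOAE component."""
    env = np.exp(-0.5 * ((t - center_s) / 0.001) ** 2)
    return amp * env * np.sin(2 * np.pi * freq_hz * t)

# Toy waveform: a weaker short-latency component near 3 ms and a
# stronger long-latency component near 10 ms (purely illustrative).
teoae = burst(0.003, 2000, 0.5) + burst(0.010, 2000, 1.0)

def windowed_rms(x, t0, t1):
    """Root-mean-square amplitude of x within the time window [t0, t1)."""
    seg = x[(t >= t0) & (t < t1)]
    return np.sqrt(np.mean(seg ** 2))

short_rms = windowed_rms(teoae, 0.000, 0.006)   # short-latency window (assumed)
long_rms = windowed_rms(teoae, 0.006, 0.018)    # long-latency window (assumed)
print(short_rms, long_rms)
```

In the study, each band (1 to 4 kHz center frequencies) would be bandpass filtered before windowing, and the per-window RMS values would then serve as predictors of hearing status and thresholds; the filtering step is omitted here for brevity.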