Comparing the effects of reverberation and of noise on speech recognition in simulated electric-acoustic listening
1. ANSI (2002). Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools (American National Standards Institute, New York), Pub. no. S12.60-2002.
2. Apoux, F., Millman, R. E., Viemeister, N. F., Brown, C. A., and Bacon, S. P. (2011). “On the mechanisms involved in the recovery of envelope information from temporal fine structure,” J. Acoust. Soc. Am. 130, 273–282.
3. Ardoint, M., Sheft, S., Fleuriot, P., Garnier, S., and Lorenzi, C. (2010). “Perception of temporal fine-structure cues in speech with minimal envelope cues for listeners with mild-to-moderate hearing loss,” Int. J. Audiol. 49, 823–831.
4. Arnoldner, C., Riss, D., Brunner, M., Durisin, M., Baumgartner, W.-D., and Hamzavi, J.-S. (2007). “Speech and music perception with the new fine structure speech coding strategy: Preliminary results,” Acta Oto-Laryngol. 127, 1298–1303.
5. Berenstein, C. K., Mens, L. H. M., Mulder, J. J. S., and Vanpoucke, F. J. (2008). “Current steering and current focusing in cochlear implants: Comparison of monopolar, tripolar, and virtual channel electrode configurations,” Ear Hear. 29, 250–260.
8. Brown, C. A., and Bacon, S. P. (2009b). “Low-frequency speech cues and simulated electric-acoustic hearing,” J. Acoust. Soc. Am. 125, 1658–1665.
10. Brown, G. J., and Palomäki, K. J. (2006). “Reverberation,” in Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, edited by D. Wang and G. J. Brown (Wiley Interscience, Hoboken, NJ), Chap. 7, pp. 209–250.
12. Chatterjee, M., and Peng, S.-C. (2008). “Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition,” Hear. Res. 235, 143–156.
13. Culling, J. F., Hodder, K. I., and Toh, C. Y. (2003). “Effects of reverberation on perceptual segregation of competing voices,” J. Acoust. Soc. Am. 114, 2871–2876.
14. Darwin, C. J., and Hukin, R. W. (2000). “Effects of reverberation on spatial, prosodic, and vocal-tract size cues to selective attention,” J. Acoust. Soc. Am. 108, 335–342.
15. Donaldson, G. S., Chisolm, T. H., Blasco, G. P., Shinnick, L. J., Ketter, K. J., and Krause, J. C. (2009). “BKB-SIN and ANL predict perceived communication ability in cochlear implant users,” Ear Hear. 30, 401–410.
16. Dorman, M. F., Gifford, R. H., Spahr, A. J., and McKarns, S. A. (2008). “The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies,” Audiol. Neurotol. 13, 105–112.
17. Dorman, M. F., Loizou, P. C., and Rainey, D. (1997). “Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band output,” J. Acoust. Soc. Am. 102, 2409–2411.
18. Drennan, W. R., Longnion, J. K., Ruffin, C., and Rubinstein, J. T. (2008). “Discrimination of Schroeder-phase harmonic complexes by normal-hearing and cochlear-implant listeners,” JARO 9, 138–149.
20. Firszt, J. B., Holden, L. K., Reeder, R. M., and Skinner, M. W. (2009). “Speech recognition in cochlear implant recipients: Comparison of standard HiRes and HiRes 120 sound processing,” Otol. Neurotol. 30, 146–152.
22. Fitzpatrick, E. M., Séguin, C., Schramm, D., Chenier, J., and Armstrong, S. (2009). “Users’ experience of a cochlear implant combined with a hearing aid,” Int. J. Audiol. 48, 172–182.
23. Fu, Q.-J., Chinchilla, S., Nogaki, G., and Galvin, J. J. (2005). “Voice gender identification by cochlear implant users: The role of spectral and temporal resolution,” J. Acoust. Soc. Am. 118, 1711–1718.
25. Gantz, B. J., Turner, C. W., Gfeller, K. E., and Lowder, M. W. (2005). “Preservation of hearing in cochlear implant surgery: Advantages of combined electrical and acoustical speech processing,” Laryngoscope 115, 796–802.
26. Gelfand, S. A., and Silman, S. (1979). “Effects of small room reverberation upon the recognition of some consonant features,” J. Acoust. Soc. Am. 66, 22–29.
27. Geurts, L., and Wouters, J. (2004). “Better place-coding of the fundamental frequency in cochlear implants,” J. Acoust. Soc. Am. 115, 844–852.
28. Ghitza, O. (2001). “On the upper cutoff frequency of the auditory critical-band envelope detectors in the context of speech perception,” J. Acoust. Soc. Am. 110, 1628–1640.
29. Gifford, R. H., Dorman, M. F., McKarns, S. A., and Spahr, A. J. (2007). “Combined electric and contralateral acoustic hearing: Word and sentence recognition with bimodal hearing,” J. Speech Lang. Hear. Res. 50, 835–843.
31. Gilbert, G., and Lorenzi, C. (2010). “Role of spectral and temporal cues in restoring missing speech information,” J. Acoust. Soc. Am. 128, 294–299.
32. Han, D., Liu, B., Zhou, N., Chen, X., Kong, Y., Liu, H., Zheng, Y., and Xu, L. (2009). “Lexical tone perception with HiResolution and HiResolution 120 sound-processing strategies in pediatric Mandarin-speaking cochlear implant users,” Ear Hear. 30, 169–177.
34. Hillenbrand, J., Getty, L. A., Clark, M. J., and Wheeler, K. (1995). “Acoustic characteristics of American English vowels,” J. Acoust. Soc. Am. 97, 3099–3111.
35. Hopkins, K., and Moore, B. C. J. (2007). “Moderate cochlear hearing loss leads to a reduced ability to use temporal fine structure information,” J. Acoust. Soc. Am. 122, 1055–1068.
36. Hopkins, K., Moore, B. C. J., and Stone, M. A. (2008). “Effects of moderate cochlear hearing loss on the ability to benefit from temporal fine structure information in speech,” J. Acoust. Soc. Am. 123, 1140–1153.
37. Houtgast, T., and Steeneken, H. J. M. (1973). “The modulation transfer function in room acoustics as a predictor of speech intelligibility,” Acustica 28, 66–73.
38. Houtgast, T., Steeneken, H. J. M., and Plomp, R. (1980). “Predicting speech intelligibility in rooms from the modulation transfer function. I. General room acoustics,” Acustica 46, 59–72.
39. Institute of Electrical and Electronics Engineers (IEEE) (1969). “IEEE recommended practice for speech quality measurements,” IEEE Trans. Audio Electroacoust. 17, 225–246.
40. von Ilberg, C., Kiefer, J., Tillein, J., Pfenningdorff, T., Hartmann, R., Stürzebecher, E., and Klinke, R. (1999). “Electric-acoustic stimulation of the auditory system. New technology for severe hearing loss,” J. Otorhinolaryngol. Relat. Spec. 61, 334–340.
41. Kokkinakis, K., and Loizou, P. C. (2011). “The impact of reverberant self-masking and overlap-masking effects on speech intelligibility by cochlear implant listeners,” J. Acoust. Soc. Am. 130, 1099–1102.
42. Kong, Y.-Y., Stickney, G. S., and Zeng, F.-G. (2005). “Speech and melody recognition in binaurally combined acoustic and electric hearing,” J. Acoust. Soc. Am. 117, 1351–1361.
44. Lochner, J. P. A., and Burger, J. F. (1961). “The intelligibility of speech under reverberant conditions,” Acustica 11, 195–200.
46. Lorenzi, C., Gilbert, G., Carn, H., Garnier, S., and Moore, B. C. J. (2006). “Speech perception problems of the hearing impaired reflect inability to use temporal fine structure,” Proc. Natl. Acad. Sci. USA 103, 18866–18869.
47. McDermott, H. J., and McKay, C. M. (1994). “Pitch ranking with nonsimultaneous dual-electrode electrical stimulation of the cochlea,” J. Acoust. Soc. Am. 96, 155–162.
49. Moore, B. C., and Sek, A. (1995). “Effects of carrier frequency, modulation rate, and modulation waveform on the detection of modulation and the discrimination of modulation type (amplitude modulation versus frequency modulation),” J. Acoust. Soc. Am. 97, 2468–2478.
51. Moore, B. C. J. (2008a). “The role of temporal fine structure in normal and impaired hearing,” in Auditory Signal Processing in Hearing-impaired Listeners, 1st International Symposium on Auditory and Audiological Research (ISAAR 2007), edited by T. Dau, J. M. Buchholz, J. M. Harte, and T. U. Christiansen (Centertryk A/S, Denmark).
52. Moore, B. C. J. (2008b). “The role of temporal fine structure processing in pitch perception, masking, and speech perception for normal-hearing and hearing-impaired people,” JARO 9, 399–406.
53. Moore, B. C. J., and Glasberg, B. R. (1983). “Suggested formulae for calculating auditory-filter bandwidths and excitation patterns,” J. Acoust. Soc. Am. 74, 750–753.
54. Nábělek, A. K., and Robinette, L. (1978). “Influence of the precedence effect on word identification by normally hearing and hearing-impaired subjects,” J. Acoust. Soc. Am. 63, 187–194.
55. Nelson, P. B., Jin, S.-H., Carney, A. E., and Nelson, D. A. (2003). “Understanding speech in modulated interference: Cochlear implant users and normal-hearing listeners,” J. Acoust. Soc. Am. 113, 961–968.
58. Poissant, S. F., Whitmal, N. A., and Freyman, R. L. (2006). “Effects of reverberation and masking on speech intelligibility in cochlear implant simulations,” J. Acoust. Soc. Am. 119, 1606–1615.
59. Qin, M. K., and Oxenham, A. J. (2003). “Effects of simulated cochlear-implant processing on speech reception in fluctuating maskers,” J. Acoust. Soc. Am. 114, 446–454.
61. Riss, D., Arnoldner, C., Reiß, S., Baumgartner, W.-D., and Hamzavi, J.-S. (2009). “1-year results using the Opus speech processor with the fine structure speech coding strategy,” Acta Oto-Laryngol. 129, 988–991.
62. Rubinstein, J. T., Wilson, B. S., Finley, C. C., and Abbas, P. J. (1999). “Pseudospontaneous activity: Stochastic independence of auditory nerve fibers with electrical stimulation,” Hear. Res. 127, 108–118.
64. Shannon, R. V., Cruz, R. J., and Galvin, J. J., 3rd (2011). “Effect of stimulation rate on cochlear implant users’ phoneme, word and sentence recognition in quiet and in noise,” Audiol. Neurotol. 16, 113–123.
66. Smith, Z. M., Delgutte, B., and Oxenham, A. J. (2002). “Chimaeric sounds reveal dichotomies in auditory perception,” Nature 416, 87–90.
67. Soulodre, G. A., Popplewell, N., and Bradley, J. S. (1989). “Combined effects of early reflections and background noise on speech intelligibility,” J. Sound Vib. 135, 123–133.
68. Spahr, A. J., Dorman, M. F., and Loiselle, L. H. (2007). “Performance of patients using different cochlear implant systems: Effects of input dynamic range,” Ear Hear. 28, 260–275.
69. Stickney, G. S., Nie, K., and Zeng, F.-G. (2005). “Contribution of frequency modulation to speech recognition in noise,” J. Acoust. Soc. Am. 118, 2412–2420.
70. Stone, M. A., Füllgrabe, C., and Moore, B. C. J. (2008). “Benefit of high-rate envelope cues in vocoder processing: Effect of number of channels and spectral region,” J. Acoust. Soc. Am. 124, 2272–2282.
71. Traunmüller, H., and Eriksson, A. (2000). “Acoustic effects of variation in vocal effort by men, women, and children,” J. Acoust. Soc. Am. 107, 3438–3451.
72. Whitmal, N. A., Poissant, S. F., Freyman, R. L., and Helfer, K. S. (2007). “Speech intelligibility in cochlear implant simulations: Effects of carrier type, interfering noise, and subject experience,” J. Acoust. Soc. Am. 122, 2376–2388.
73. Whitmal, N. A., and Poissant, S. F. (2009). “The role of early reflections in the perception of spectrally-degraded speech,” presented at the Conference on Implantable Auditory Prostheses, Lake Tahoe, CA.
75. Zhang, T., Dorman, M. F., and Spahr, A. J. (2010). “Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation,” Ear Hear. 31, 63–69.