Reduced efficiency of audiovisual integration for nonnative speech

Figures


FIG. 1.

(Color online) (a) Visual (upper panel) and auditory (lower panel) speech cues of the sentence "the girl loved the sweet coffee" produced by native and nonnative speakers. The sample AV stimuli are available as supplementary materials (Mm. 1; Mm. 2). (b) Percentage of keywords correctly identified in the speech-perception-in-noise task for English (left bars) and Korean (right bars) speakers, without (darker fill) and with (lighter fill) visual cues. (c) Visual enhancement measures [(AV − AO)/(1 − AO)] compared between native English and Korean speakers.
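The visual enhancement measure in panel (c) normalizes the audiovisual gain by the room left for improvement over the audio-only score. A minimal sketch, using illustrative proportions rather than the article's data:

```python
def visual_enhancement(av: float, ao: float) -> float:
    """Visual enhancement VE = (AV - AO) / (1 - AO).

    av, ao: proportions of keywords correct (0-1) in the audiovisual
    and audio-only conditions. VE expresses the audiovisual gain as a
    fraction of the maximum possible gain over audio-only.
    """
    if ao >= 1.0:
        raise ValueError("AO at ceiling: enhancement is undefined")
    return (av - ao) / (1.0 - ao)

# Illustrative values only (not the article's data):
print(visual_enhancement(0.80, 0.60))  # 0.5: half the possible gain realized
```

Note that the measure is undefined when audio-only performance is at ceiling, which is why speech-in-noise paradigms keep AO well below 100%.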


FIG. 2.

(Color online) Implicit association test. (a) Face (ten Caucasian; ten Asian) and scene (ten American; ten foreign) images were presented. In the congruous condition, participants were instructed to group Caucasian faces with American scenes, and Asian faces with foreign scenes. In the incongruous condition, participants were instructed to group Caucasian faces with foreign scenes, and Asian faces with American scenes. (b) IAT scores positively correlate with the native boost obtained when visual cues are available, r(17) = 0.482, p = 0.037, r² = 0.23.
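The reported correlation, r(17) = 0.482, can be sanity-checked with a short calculation: for a Pearson correlation df = n − 2, so 17 degrees of freedom implies 19 participants, and squaring r gives the shared variance of about 0.23:

```python
import math

r, df = 0.482, 17          # reported Pearson r and degrees of freedom
n = df + 2                 # df = n - 2 for a correlation test, so n = 19
r_squared = r * r          # shared variance: 0.23 after rounding
# t statistic for testing r against zero (two-tailed)
t = r * math.sqrt(df / (1 - r * r))
print(n, round(r_squared, 2), round(t, 2))
```

With 19 participants this t value sits just past the two-tailed 5% cutoff for 17 df, consistent with the reported p = 0.037.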


Multimedia

The following multimedia files are available:

Mm. 1. Sample AV stimulus produced by a native English speaker. This file has been downsampled from the original format (video: 29.97 fps, 1920 × 1080, 4.2 MB/s; audio: 22 050 Hz, 16-bit). This is a file of type "avi" (3.5 MB).

Mm. 2. Sample AV stimulus produced by a native Korean speaker. This is a file of type "avi" (2.7 MB).

2013-10-10
2014-04-21

Abstract

The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.

DOI: 10.1121/1.4822320