Investigating the impact of automated transcripts on non-native speakers' listening comprehension

Xun Cao, Naomi Yamashita, Toru Ishida

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Real-time transcripts generated by automatic speech recognition (ASR) technologies hold potential to facilitate non-native speakers' (NNSs) listening comprehension. While introducing another modality (i.e., ASR transcripts) to NNSs provides supplemental information to understand speech, it also runs the risk of overwhelming them with excessive information. The aim of this paper is to understand the advantages and disadvantages of presenting ASR transcripts to NNSs and to study how such transcripts affect listening experiences. To explore these issues, we conducted a laboratory experiment with 20 NNSs who engaged in two listening tasks in different conditions: audio only and audio+ASR transcripts. In each condition, the participants described the comprehension problems they encountered while listening. From the analysis, we found that ASR transcripts helped NNSs solve certain problems (e.g., "do not recognize words they know"), but imperfect ASR transcripts (e.g., errors and no punctuation) sometimes confused them and even generated new problems. Furthermore, post-task interviews and gaze analysis of the participants revealed that NNSs did not have enough time to fully exploit the transcripts. For example, NNSs had difficulty shifting between multimodal contents. Based on our findings, we discuss the implications for designing better multimodal interfaces for NNSs.

Original language: English
Title of host publication: ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction
Editors: Catherine Pelachaud, Yukiko I. Nakano, Toyoaki Nishida, Carlos Busso, Louis-Philippe Morency, Elisabeth Andre
Publisher: Association for Computing Machinery, Inc
Pages: 121-128
Number of pages: 8
ISBN (Electronic): 9781450345569
DOI: 10.1145/2993148.2993161
Publication status: Published - 2016 Oct 31
Externally published: Yes
Event: 18th ACM International Conference on Multimodal Interaction, ICMI 2016 - Tokyo, Japan
Duration: 2016 Nov 12 to 2016 Nov 16

Publication series

Name: ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction

Conference

Conference: 18th ACM International Conference on Multimodal Interaction, ICMI 2016
Country: Japan
City: Tokyo
Period: 16/11/12 to 16/11/16


Keywords

  • Automatic speech recognition (ASR) transcripts
  • Eye gaze
  • Listening comprehension problems
  • Non-native speakers (NNSs)

ASJC Scopus subject areas

  • Computer Science Applications
  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition

Cite this

Cao, X., Yamashita, N., & Ishida, T. (2016). Investigating the impact of automated transcripts on non-native speakers' listening comprehension. In C. Pelachaud, Y. I. Nakano, T. Nishida, C. Busso, L-P. Morency, & E. Andre (Eds.), ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction (pp. 121-128). (ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction). Association for Computing Machinery, Inc. https://doi.org/10.1145/2993148.2993161