Supporting non-native speakers’ listening comprehension with automated transcripts

Xun Cao, Naomi Yamashita, Toru Ishida

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Various language services exist to support the listening comprehension of non-native speakers (NNSs). One important service is to provide NNSs with real-time transcripts generated by automatic speech recognition (ASR) technologies. The goal of our research is to explore the effects of ASR transcripts on the listening comprehension of NNSs and to consider how to support NNSs with ASR transcripts more effectively. To reach our goal, we ran three studies. The first study investigates the comprehension problems faced by NNSs, and the second study examines how ASR transcripts impact their listening comprehension, e.g., what types of comprehension problems could and could not be solved by reading ASR transcripts. Finally, the third study explores the potential of using eye-tracking data to detect their comprehension problems. Our data analysis identified thirteen types of listening comprehension problems. ASR transcripts helped the NNSs solve certain problems, e.g., “failed to recognize words they know.” However, the transcripts did not solve problems such as “lack of vocabulary,” and indeed increased the NNSs’ burden. Results also show that eye-tracking data can be used to predict the types of problems encountered by NNSs with reasonable accuracy (83.8%). Our findings provide insight into ways of designing real-time adaptive support systems for NNSs.

Original language: English
Title of host publication: Cognitive Technologies
Publisher: Springer-Verlag
Pages: 157-173
Number of pages: 17
Edition: 9789811077920
DOIs: https://doi.org/10.1007/978-981-10-7793-7_10
Publication status: Published - 2018 Jan 1
Externally published: Yes

Publication series

Name: Cognitive Technologies
Number: 9789811077920
ISSN (Print): 1611-2482

Keywords

  • Automatic speech recognition (ASR) transcripts
  • Eye-tracking
  • Listening comprehension problems
  • Non-native speakers (NNSs)

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


Cite this

    Cao, X., Yamashita, N., & Ishida, T. (2018). Supporting non-native speakers’ listening comprehension with automated transcripts. In Cognitive Technologies (9789811077920 ed., pp. 157-173). (Cognitive Technologies; No. 9789811077920). Springer-Verlag. https://doi.org/10.1007/978-981-10-7793-7_10