Supporting non-native speakers’ listening comprehension with automated transcripts

Xun Cao, Naomi Yamashita, Toru Ishida

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Various language services exist to support the listening comprehension of non-native speakers (NNSs). One important service is to provide NNSs with real-time transcripts generated by automatic speech recognition (ASR) technologies. The goal of our research is to explore the effects of ASR transcripts on the listening comprehension of NNSs and consider how to support NNSs with ASR transcripts more effectively. To reach our goal, we ran three studies. The first study investigates the comprehension problems faced by NNSs, and the second study examines how ASR transcripts impact their listening comprehension, e.g., what types of comprehension problems could and could not be solved by reading ASR transcripts. Finally, the third study explores the potential of using eye-tracking data to detect their comprehension problems. Our data analysis identified thirteen types of listening comprehension problems. ASR transcripts helped the NNSs solve certain problems, e.g., “failed to recognize words they know.” However, the transcripts did not solve problems such as “lack of vocabulary,” and in some cases even increased the NNSs’ burden. Results also show that from eye-tracking data we can make reasonably accurate predictions (83.8%) about the types of problems encountered by NNSs. Our findings provide insight into ways of designing real-time adaptive support systems for NNSs.
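To give a concrete sense of the kind of detection the abstract describes, the sketch below maps gaze features to comprehension-problem categories with a toy decision rule. The feature names (`mean_fixation_ms`, `regression_rate`, `transcript_dwell`) and thresholds are illustrative assumptions, not values from the chapter; the chapter's actual predictor is a trained model over its own eye-tracking features.

```python
# Hypothetical sketch: labeling a listener's comprehension problem from
# eye-tracking features. Features and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class GazeWindow:
    mean_fixation_ms: float   # average fixation duration on the transcript
    regression_rate: float    # fraction of saccades that move backwards
    transcript_dwell: float   # share of gaze time spent on the transcript


def classify_problem(g: GazeWindow) -> str:
    """Toy decision rule mapping gaze behavior to a problem category."""
    if g.transcript_dwell > 0.7 and g.mean_fixation_ms > 300:
        # Long, transcript-heavy fixations: the reader is stuck on a word.
        return "lack of vocabulary"
    if g.regression_rate > 0.3:
        # Frequent re-reading: a known word was missed in the audio stream.
        return "failed to recognize words they know"
    return "no detected problem"


print(classify_problem(GazeWindow(350, 0.1, 0.8)))
print(classify_problem(GazeWindow(200, 0.4, 0.3)))
```

A deployed system would replace the hand-set thresholds with a classifier trained on labeled gaze windows, which is what makes the reported 83.8% prediction accuracy plausible to evaluate.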

Original language: English
Title of host publication: Cognitive Technologies
Publisher: Springer-Verlag
Pages: 157-173
Number of pages: 17
Edition: 9789811077920
DOI: 10.1007/978-981-10-7793-7_10
Publication status: Published - 2018 Jan 1
Externally published: Yes

Publication series

Name: Cognitive Technologies
Number: 9789811077920
ISSN (Print): 1611-2482


Keywords

  • Automatic speech recognition (ASR) transcripts
  • Eye-tracking
  • Listening comprehension problems
  • Non-native speakers (NNSs)

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Cao, X., Yamashita, N., & Ishida, T. (2018). Supporting non-native speakers’ listening comprehension with automated transcripts. In Cognitive Technologies (9789811077920 ed., pp. 157-173). (Cognitive Technologies; No. 9789811077920). Springer-Verlag. https://doi.org/10.1007/978-981-10-7793-7_10
