Cycle-consistency Training for End-to-end Speech Recognition

Takaaki Hori, Ramon Astudillo, Tomoki Hayashi, Yu Zhang, Shinji Watanabe, Jonathan Le Roux

Research output: Conference contribution

1 Citation (Scopus)

Abstract

This paper presents a method to train end-to-end automatic speech recognition (ASR) models using unpaired data. Although the end-to-end approach can eliminate the need for expert knowledge such as pronunciation dictionaries to build ASR systems, it still requires a large amount of paired data, i.e., speech utterances and their transcriptions. Cycle-consistency losses have been recently proposed as a way to mitigate the problem of limited paired data. These approaches compose a reverse operation with a given transformation, e.g., text-to-speech (TTS) with ASR, to build a loss that only requires unsupervised data, speech in this example. Applying cycle consistency to ASR models is not trivial since fundamental information, such as speaker traits, is lost in the intermediate text bottleneck. To solve this problem, this work presents a loss that is based on the speech encoder state sequence instead of the raw speech signal. This is achieved by training a Text-To-Encoder model and defining a loss based on the encoder reconstruction error. Experimental results on the LibriSpeech corpus show that the proposed cycle-consistency training reduced the word error rate by 14.7% from an initial model trained with 100-hour paired data, using an additional 360 hours of audio data without transcriptions. We also investigate the use of text-only data mainly for language modeling to further improve the performance in the unpaired data training scenario.
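To make the training idea concrete, below is a minimal PyTorch-style sketch of the encoder-level cycle-consistency loss on a single untranscribed utterance. The module interfaces (asr.encode, asr.sample, calling tte on a text hypothesis) are hypothetical stand-ins, and the REINFORCE-style surrogate used to pass gradients through the discrete text bottleneck is an assumption of this sketch, not a detail stated in the abstract.

import torch.nn.functional as F

def cycle_consistency_loss(asr, tte, speech_features):
    """Loss for unpaired speech: ASR -> text -> Text-To-Encoder -> states.

    `asr` and `tte` are hypothetical modules: `asr.encode` returns the
    encoder state sequence, `asr.sample` draws a transcription hypothesis
    together with its log-probability, and `tte(hypothesis)` predicts the
    encoder state sequence back from text (assumed length-aligned with h).
    """
    # Encode the untranscribed utterance into hidden states h (T x D).
    h = asr.encode(speech_features)

    # Sample a transcription hypothesis from the decoder, keeping its
    # log-probability for the policy-gradient step below.
    hypothesis, log_prob = asr.sample(h)

    # Reconstruct the encoder state sequence from the hypothesized text.
    h_hat = tte(hypothesis)

    # Encoder reconstruction error (L1 is an illustrative choice).
    recon = F.l1_loss(h_hat, h.detach())

    # The text hypothesis is discrete, so treat the reconstruction error as
    # a REINFORCE-style reward: scaling the hypothesis log-probability by
    # the (detached) error yields a gradient for the ASR model.
    return recon.detach() * log_prob

In this view, minimizing the expected reconstruction error pushes the ASR model toward transcriptions from which the Text-To-Encoder model can regenerate the original encoder states, without ever needing the reference text.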

Original language: English
Host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6271-6275
Number of pages: 5
ISBN (electronic): 9781479981311
DOI: 10.1109/ICASSP.2019.8683307
Publication status: Published - 1 May 2019
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 12 May 2019 - 17 May 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country: United Kingdom
City: Brighton
Period: 12/5/19 - 17/5/19

Fingerprint

  • Speech recognition
  • Transcription
  • Glossaries

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

Hori, T., Astudillo, R., Hayashi, T., Zhang, Y., Watanabe, S., & Le Roux, J. (2019). Cycle-consistency Training for End-to-end Speech Recognition. In 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings (pp. 6271-6275). [8683307] (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2019-May). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2019.8683307

@inproceedings{c63683d805154c6eb0440b577b1a416f,
title = "Cycle-consistency Training for End-to-end Speech Recognition",
keywords = "cycle consistency, end-to-end, speech recognition, unpaired data",
author = "Takaaki Hori and Ramon Astudillo and Tomoki Hayashi and Yu Zhang and Shinji Watanabe and {Le Roux}, Jonathan",
year = "2019",
month = "5",
day = "1",
doi = "10.1109/ICASSP.2019.8683307",
language = "English",
series = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "6271--6275",
booktitle = "2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings",

}
