TY - JOUR
T1 - End-to-End Integration of Speech Recognition, Speech Enhancement, and Self-Supervised Learning Representation
AU - Chang, Xuankai
AU - Maekaku, Takashi
AU - Fujita, Yuya
AU - Watanabe, Shinji
N1 - Funding Information:
This work used the Extreme Science and Engineering Discovery Environment (XSEDE) [36], which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system [37], which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).
Publisher Copyright:
Copyright © 2022 ISCA.
PY - 2022
Y1 - 2022
N2 - This work presents our end-to-end (E2E) automatic speech recognition (ASR) model targeting robust speech recognition, called Integrated speech Recognition with enhanced speech Input for Self-supervised learning representation (IRIS). Compared with conventional E2E ASR models, the proposed E2E model integrates two important modules: a speech enhancement (SE) module and a self-supervised learning representation (SSLR) module. The SE module enhances the noisy speech. Then the SSLR module extracts features from the enhanced speech to be used for ASR. To train the proposed model, we establish an efficient learning scheme. Evaluation results on the monaural CHiME-4 task show that the IRIS model achieves the best performance reported in the literature for the single-channel CHiME-4 benchmark (2.0% for the real development set and 3.6% for the real test set) thanks to the powerful pre-trained SSLR module and the fine-tuned SE module.
AB - This work presents our end-to-end (E2E) automatic speech recognition (ASR) model targeting robust speech recognition, called Integrated speech Recognition with enhanced speech Input for Self-supervised learning representation (IRIS). Compared with conventional E2E ASR models, the proposed E2E model integrates two important modules: a speech enhancement (SE) module and a self-supervised learning representation (SSLR) module. The SE module enhances the noisy speech. Then the SSLR module extracts features from the enhanced speech to be used for ASR. To train the proposed model, we establish an efficient learning scheme. Evaluation results on the monaural CHiME-4 task show that the IRIS model achieves the best performance reported in the literature for the single-channel CHiME-4 benchmark (2.0% for the real development set and 3.6% for the real test set) thanks to the powerful pre-trained SSLR module and the fine-tuned SE module.
KW - deep learning
KW - robust automatic speech recognition
KW - self-supervised learning
KW - speech enhancement
UR - http://www.scopus.com/inward/record.url?scp=85140078747&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140078747&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2022-10839
DO - 10.21437/Interspeech.2022-10839
M3 - Conference article
AN - SCOPUS:85140078747
VL - 2022-September
SP - 3819
EP - 3823
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SN - 2308-457X
T2 - 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022
Y2 - 18 September 2022 through 22 September 2022
ER -