Improving RNN transducer with target speaker extraction and neural uncertainty estimation

Jiatong Shi*, Chunlei Zhang, Chao Weng, Shinji Watanabe, Meng Yu, Dong Yu

*Corresponding author for this work

Research output: Conference article › peer-review

Abstract

Target-speaker speech recognition aims to recognize the speech of a target speaker in noisy environments with background noise and interfering speakers. This work presents a joint framework that combines time-domain target-speaker speech extraction with a Recurrent Neural Network Transducer (RNN-T). To stabilize joint training, we propose a multi-stage training strategy that pre-trains and fine-tunes each module in the system before joint training. Meanwhile, speaker-identity and speech-enhancement uncertainty measures are proposed to compensate for residual noise and artifacts from the target speech extraction module. Compared to a recognizer fine-tuned with a target speech extraction model, our experiments show that adding the neural uncertainty module yields a 17% relative reduction in Character Error Rate (CER) on multi-speaker signals with background noise. Multi-condition experiments indicate that our method achieves a 9% relative performance gain in the noisy condition while maintaining performance in the clean condition.
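Since the abstract describes the architecture only at a high level, the following is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together: a time-domain extraction front-end feeds an RNN-T recognizer, and a per-frame uncertainty estimate down-weights unreliable frames. The class name `JointTSExtractionRNNT`, all module choices, tensor shapes, and the multiplicative gating of encoder frames are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class JointTSExtractionRNNT(nn.Module):
    """Hypothetical sketch: extraction front-end + uncertainty gate + RNN-T."""

    def __init__(self, feat_dim=80, enc_dim=512, vocab_size=5000):
        super().__init__()
        # Placeholder for the time-domain target-speaker extraction network
        # (the paper's front-end is more elaborate, e.g. speaker-conditioned).
        self.extractor = nn.Conv1d(1, 1, kernel_size=3, padding=1)
        # Crude learnable waveform-to-feature front-end (~10 ms hop at 16 kHz).
        self.frontend = nn.Conv1d(1, feat_dim, kernel_size=400, stride=160)
        # RNN-T encoder over extracted-speech features.
        self.encoder = nn.LSTM(feat_dim, enc_dim, num_layers=2, batch_first=True)
        # Uncertainty head: per-frame reliability in (0, 1); frames dominated
        # by residual noise or artifacts are down-weighted before recognition.
        self.uncertainty = nn.Sequential(nn.Linear(enc_dim, 1), nn.Sigmoid())
        # RNN-T prediction and joint networks (the blank symbol adds one output).
        self.embed = nn.Embedding(vocab_size, enc_dim)
        self.predictor = nn.LSTM(enc_dim, enc_dim, batch_first=True)
        self.joint = nn.Linear(2 * enc_dim, vocab_size + 1)

    def forward(self, mixture, labels):
        # mixture: (B, 1, T_samples); labels: (B, U) token ids.
        enhanced = self.extractor(mixture)                # target-speaker extraction
        feats = self.frontend(enhanced).transpose(1, 2)   # (B, T, feat_dim)
        enc_out, _ = self.encoder(feats)                  # (B, T, enc_dim)
        conf = self.uncertainty(enc_out)                  # (B, T, 1)
        enc_out = conf * enc_out                          # uncertainty-gated frames
        pred_out, _ = self.predictor(self.embed(labels))  # (B, U, enc_dim)
        # Combine every encoder frame with every predictor step (RNN-T joint).
        t, u = enc_out.size(1), pred_out.size(1)
        joint_in = torch.cat(
            [enc_out.unsqueeze(2).expand(-1, -1, u, -1),
             pred_out.unsqueeze(1).expand(-1, t, -1, -1)], dim=-1)
        return self.joint(joint_in), enhanced             # logits: (B, T, U, V+1)

model = JointTSExtractionRNNT()
mix = torch.randn(2, 1, 16000)             # one second of 16 kHz audio
labels = torch.randint(0, 5000, (2, 10))   # dummy label sequences
logits, enhanced = model(mix, labels)      # logits: (2, 98, 10, 5001)
```

Consistent with the abstract's multi-stage strategy, one would plausibly pre-train the extractor and the RNN-T separately, fine-tune each, and only then train the joint model end to end; the uncertainty gate here is just one way such a compensation mechanism could enter the computation.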

Original language: English
Pages (from-to): 6908-6912
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOI
Publication status: Published - 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: Jun 6 2021 - Jun 11 2021

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
