Simultaneous speech recognition and speaker diarization for monaural dialogue recordings with target-speaker acoustic models

Naoyuki Kanda, Shota Horiguchi, Yusuke Fujita, Yawen Xue, Kenji Nagamatsu, Shinji Watanabe

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates the use of target-speaker automatic speech recognition (TS-ASR) for simultaneous speech recognition and speaker diarization of single-channel dialogue recordings. TS-ASR is a technique to automatically extract and recognize only the speech of a target speaker given a short sample utterance of that speaker. One obvious drawback of TS-ASR is that it cannot be used when the speakers in the recordings are unknown, because it requires a sample of the target speakers in advance of decoding. To remove this limitation, we propose an iterative method in which (i) the estimation of speaker embeddings and (ii) TS-ASR based on the estimated speaker embeddings are alternately executed. We evaluated the proposed method on very challenging dialogue recordings in which the speaker overlap ratio was over 20%. We confirmed that the proposed method significantly reduced both the word error rate (WER) and diarization error rate (DER). Our proposed method combined with i-vector speaker embeddings ultimately achieved a WER that differed by only 2.1% from that of TS-ASR given oracle speaker embeddings. Furthermore, our method solves speaker diarization simultaneously as a by-product and achieved a better DER than that of the conventional clustering-based speaker diarization method based on i-vectors.
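The alternating scheme described in the abstract can be illustrated with a toy sketch: step (i) re-estimates one embedding per speaker from the frames currently assigned to that speaker, and step (ii) reassigns frames to the closest embedding, standing in for TS-ASR decoding conditioned on that embedding. All functions, the 1-D "features", and the convergence criterion below are illustrative assumptions, not the authors' implementation; a real system would use i-vectors and a target-speaker acoustic model.

```python
def estimate_embeddings(frames, assignment, n_speakers):
    """Step (i), stand-in: mean feature of each speaker's current frames
    (a real system would estimate an i-vector per speaker)."""
    sums = [0.0] * n_speakers
    counts = [0] * n_speakers
    for f, spk in zip(frames, assignment):
        sums[spk] += f
        counts[spk] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]


def assign_frames(frames, embeddings):
    """Step (ii), stand-in for TS-ASR: each frame goes to the speaker
    whose embedding is closest."""
    return [min(range(len(embeddings)), key=lambda k: abs(f - embeddings[k]))
            for f in frames]


def alternate(frames, init_assignment, n_speakers, n_iters=10):
    """Alternate (i) and (ii) until the assignment stops changing."""
    assignment = init_assignment
    emb = estimate_embeddings(frames, assignment, n_speakers)
    for _ in range(n_iters):
        emb = estimate_embeddings(frames, assignment, n_speakers)
        new_assignment = assign_frames(frames, emb)
        if new_assignment == assignment:  # converged
            break
        assignment = new_assignment
    return assignment, emb


# Toy monaural "recording": two speakers with well-separated features,
# started from a poor initial segmentation.
frames = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
init = [0, 1, 0, 1, 0, 1]
final_assignment, final_emb = alternate(frames, init, n_speakers=2)
```

The by-product diarization mentioned in the abstract corresponds to `final_assignment`: once the loop converges, the frame-to-speaker mapping is itself a diarization output, obtained without a separate clustering step.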

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Sep 17
Externally published: Yes

Keywords

  • Deep learning
  • Multi-talker speech recognition
  • Speaker diarization

ASJC Scopus subject areas

  • General

