Statistical voice conversion based on noisy channel model

Daisuke Saito*, Shinji Watanabe, Atsushi Nakamura, Nobuaki Minematsu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

This paper describes a novel framework for voice conversion that effectively uses both a joint density model and a speaker model. In voice conversion studies, approaches based on a Gaussian mixture model (GMM) over the probability density of joint vectors of a source and a target speaker are widely used to estimate a transform function between the two speakers. However, to achieve sufficient quality, these approaches require a parallel corpus containing many utterances with the same linguistic content spoken by both speakers. In addition, joint density GMM methods often suffer from overtraining when the amount of training data is small. To compensate for these problems, we propose a voice conversion framework that integrates the speaker GMM of the target with the joint density model by means of a noisy channel model. The proposed method trains the joint density model with a few parallel utterances and the speaker model with nonparallel data of the target, independently. This eases the burden on the source speaker. Experiments demonstrate the effectiveness of the proposed method, especially when the parallel corpus is small.
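The noisy channel integration named in the abstract can be read through Bayes' rule (a sketch inferred from the abstract, not the paper's exact notation): the converted target feature for a source feature x is

\hat{y} = \arg\max_{y} p(y \mid x) = \arg\max_{y} \, p(x \mid y)\, p(y),

where the channel likelihood p(x | y) is supplied by the joint density model trained on the small parallel corpus, and the prior p(y) is supplied by the target speaker GMM trained on nonparallel target data.

For the conventional joint-density GMM baseline that the abstract contrasts against, the following is a minimal, self-contained Python sketch of minimum mean-square-error conversion, E[y | x] under a GMM fitted to concatenated source-target vectors. The toy data, component count, and function names are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Toy aligned features standing in for a parallel corpus (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))                             # source features
y = 0.8 * x + 0.3 + rng.normal(scale=0.1, size=(500, 2))  # target features

# Joint-density model: one GMM over concatenated vectors z = [x; y].
z = np.hstack([x, y])
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(z)
d = x.shape[1]

def convert(x_t):
    """MMSE conversion E[y | x] under the joint-density GMM."""
    mu_x = gmm.means_[:, :d]            # per-component source means
    mu_y = gmm.means_[:, d:]            # per-component target means
    S_xx = gmm.covariances_[:, :d, :d]  # source-source covariance blocks
    S_yx = gmm.covariances_[:, d:, :d]  # target-source cross-covariance blocks
    # Component responsibilities p(k | x) from the marginal source Gaussians.
    lik = np.array([multivariate_normal.pdf(x_t, mu_x[k], S_xx[k])
                    for k in range(gmm.n_components)])
    post = gmm.weights_ * lik
    post /= post.sum()
    # Mixture of per-component conditional means E[y | x, k].
    y_hat = np.zeros(d)
    for k in range(gmm.n_components):
        cond = mu_y[k] + S_yx[k] @ np.linalg.solve(S_xx[k], x_t - mu_x[k])
        y_hat += post[k] * cond
    return y_hat

print(convert(np.array([0.5, -0.2])))   # expected near 0.8 * x + 0.3
```

Each mixture component contributes a linear regression from x to y weighted by its posterior, so the full-covariance joint model has many parameters to estimate per component; this is plausibly why, as the abstract notes, small parallel corpora lead to overtraining.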

Original language: English
Article number: 6156420
Pages (from-to): 1784-1794
Number of pages: 11
Journal: IEEE Transactions on Audio, Speech, and Language Processing
Volume: 20
Issue number: 6
DOIs
Publication status: Published - 2012
Externally published: Yes

Keywords

  • joint density model
  • noisy channel model
  • probabilistic integration
  • speaker model
  • voice conversion (VC)

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
