Speech spectrum transformation by speaker interpolation

Naomi Iwahashi, Yoshinori Sagisaka

Research output: Contribution to journal › Conference article

14 Citations (Scopus)

Abstract

In this paper, we propose a speech spectrum transformation method for speech synthesis that interpolates spectral patterns between multiple pre-stored speakers. The interpolation is carried out on spectral parameters such as cepstrum and log area ratio to generate new spectrum patterns. The spectral patterns can be transformed smoothly as the interpolation ratio is gradually changed, and speech individuality can easily be controlled between the interpolated speakers. Adaptation to a target speaker can be performed by this interpolation, which uses only a small amount of training data to generate a new speech spectrum sequence close to the target speaker's. An adaptation experiment was carried out using only one word spoken by the target speaker for learning. It was shown that the distance between the target speaker's spectrum and the spectrum generated by the proposed interpolation method is reduced by about 40% compared with the distance between the target speaker's spectrum and the spectrum of the pre-stored speaker closest to the target.
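The core operation the abstract describes can be sketched as a weighted average of time-aligned spectral parameter vectors (e.g. cepstra) from several pre-stored speakers, with the interpolation ratio controlling how far the result lies between them. The function name, array shapes, and the assumption that frames are already time-aligned (e.g. by DTW) are illustrative choices, not details taken from the paper:

```python
import numpy as np

def interpolate_spectra(speaker_ceps, ratios):
    """Blend per-frame spectral parameter vectors from pre-stored speakers.

    speaker_ceps : list of (n_frames, n_coeffs) arrays, one per speaker,
                   assumed already time-aligned to a common frame grid.
    ratios       : interpolation weights, one per speaker, summing to 1.
    Returns the interpolated (n_frames, n_coeffs) spectral sequence.
    """
    ratios = np.asarray(ratios, dtype=float)
    if not np.isclose(ratios.sum(), 1.0):
        raise ValueError("interpolation ratios must sum to 1")
    stacked = np.stack(speaker_ceps)              # (n_speakers, n_frames, n_coeffs)
    # Weighted sum over the speaker axis: gradually varying `ratios`
    # morphs the spectrum smoothly between the stored speakers.
    return np.tensordot(ratios, stacked, axes=1)

# Toy example: sweep halfway between two speakers.
a = np.zeros((5, 12))   # placeholder cepstra for speaker A
b = np.ones((5, 12))    # placeholder cepstra for speaker B
half = interpolate_spectra([a, b], [0.5, 0.5])
```

For speaker adaptation as described above, the ratios would be chosen to minimize the spectral distance between the interpolated sequence and the small amount of target-speaker data.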

Original language: English
Article number: 389256
Pages (from-to): I461-I464
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 1
Publication status: Published - 1994
Externally published: Yes
Event: Proceedings of the 1994 IEEE International Conference on Acoustics, Speech and Signal Processing. Part 2 (of 6) - Adelaide, Australia
Duration: 1994 Apr 19 – 1994 Apr 22

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

