Understanding three simultaneous speeches

Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawabata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Understanding three simultaneous speeches is proposed as a challenge problem to foster research in artificial intelligence, speech and sound understanding and recognition, and computational auditory scene analysis. Automatic speech recognition in noisy environments is usually addressed with speech enhancement techniques such as noise reduction and speaker adaptation. However, the signal-to-noise ratio of one speech signal in a mixture of two simultaneous speeches is too poor for these techniques to apply, so novel techniques need to be developed. One candidate is to use speech stream segregation as a front end to automatic speech recognition systems. Preliminary experiments on understanding two simultaneous speeches suggest that the proposed challenge problem is feasible with speech stream segregation. A detailed research plan and benchmark sounds for the proposed challenge problem are also presented.
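The abstract's argument is that when two talkers overlap at comparable levels, the target speech sits at roughly 0 dB SNR, which is why enhancement-based front ends break down, and that stream segregation should therefore run ahead of recognition. The sketch below illustrates that reasoning and the shape of such a pipeline; the random stand-in waveforms and the snr_db, segregate_streams, and recognize functions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def snr_db(target: np.ndarray, interference: np.ndarray) -> float:
    """Signal-to-noise ratio of `target` measured against `interference`, in dB."""
    return 10.0 * np.log10(np.mean(target**2) / np.mean(interference**2))

# Stand-ins for two talkers' waveforms (random data at equal power; hypothetical).
rng = np.random.default_rng(0)
talker_a = rng.standard_normal(16000)
talker_b = rng.standard_normal(16000)

# Equal-power overlap yields roughly 0 dB SNR, far below what noise reduction
# or speaker adaptation normally assumes.
print(f"SNR of talker A against talker B: {snr_db(talker_a, talker_b):.1f} dB")

# Segregation-first pipeline shape suggested by the abstract: segregate the
# mixture into streams, then pass each stream to a conventional recognizer.
# Both functions are hypothetical placeholders, not the authors' system.
def segregate_streams(mixture: np.ndarray, n_speakers: int) -> list[np.ndarray]:
    raise NotImplementedError("e.g. harmonic/stream-based segregation")

def recognize(stream: np.ndarray) -> str:
    raise NotImplementedError("any single-speaker ASR back end")
```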

Original language: English
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Pages: 30-35
Number of pages: 6
Volume: 1
Publication status: Published - 1997
Externally published: Yes
Event: 15th International Joint Conference on Artificial Intelligence, IJCAI 1997 - Nagoya, Aichi, Japan
Duration: 1997 Aug 23 - 1997 Aug 29

Other

Other: 15th International Joint Conference on Artificial Intelligence, IJCAI 1997
Country: Japan
City: Nagoya, Aichi
Period: 97/8/23 - 97/8/29

Fingerprint

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Okuno, H. G., Nakatani, T., & Kawabata, T. (1997). Understanding three simultaneous speeches. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 1, pp. 30-35).