Real-time multiple speaker tracking by multi-modal integration for mobile robots

Kazuhiro Nakadai, Ken-ichi Hidai, Hiroshi G. Okuno, Hiroaki Kitano

Research output: Conference contribution

15 Citations (Scopus)

Abstract

In this paper, real-time multiple speaker tracking is addressed, because it is essential for robot perception and human-robot social interaction. The difficulty lies in handling a mixture of sounds, occlusion (some talkers are hidden), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, canceling the motor noise that motion inevitably produces, even when a speaker is unseen or silent; and (3) a distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we achieve robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers are present and the tracked person is visually occluded.
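The abstract's first component, extracting a speaker's direction from the interaural phase difference (IPD), can be illustrated with a minimal sketch. This is not the paper's implementation: the microphone spacing, sampling rate, single far-field tone, and the simple `sin`-based geometry are all assumptions for illustration, and the actual system additionally exploits interaural intensity difference and visual cues.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): estimate a single source's
# azimuth from the interaural phase difference of a two-microphone pair.
C = 343.0   # speed of sound [m/s] (assumed)
D = 0.18    # microphone spacing [m] (assumed)
FS = 16000  # sampling rate [Hz] (assumed)

def ipd_azimuth(left, right, freq):
    """Estimate the azimuth [deg] of a tone at `freq` Hz from the IPD."""
    n = len(left)
    k = int(round(freq * n / FS))            # FFT bin of the tone
    L = np.fft.rfft(left)[k]
    R = np.fft.rfft(right)[k]
    ipd = np.angle(L * np.conj(R))           # phase difference [rad]
    itd = ipd / (2 * np.pi * freq)           # interaural time difference [s]
    s = np.clip(C * itd / D, -1.0, 1.0)      # sin(azimuth), far-field model
    return np.degrees(np.arcsin(s))

# Usage: a 500 Hz tone arriving 30 degrees off the median plane.
t = np.arange(2048) / FS
delay = D * np.sin(np.radians(30.0)) / C     # time lead of the near ear
left = np.sin(2 * np.pi * 500 * (t + delay / 2))
right = np.sin(2 * np.pi * 500 * (t - delay / 2))
print(ipd_azimuth(left, right, 500.0))       # close to 30.0
```

Note that the IPD is only unambiguous while the wavelength exceeds twice the microphone spacing; at higher frequencies it wraps, which is one reason the paper combines it with the interaural intensity difference.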

Original language: English
Title of host publication: EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology
Publisher: International Speech Communication Association
Pages: 1193-1196
Number of pages: 4
ISBN (Electronic): 8790834100, 9788790834104
Publication status: Published - 2001
Externally published: Yes
Event: 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001 - Aalborg, Denmark
Duration: 3 Sep 2001 - 7 Sep 2001


ASJC Scopus subject areas

  • Communication
  • Language and Linguistics
  • Computer Science Applications
  • Software

