Abstract
In this paper, real-time multiple speaker tracking is addressed, because it is essential in robot perception and human-robot social interaction. The difficulty lies in handling a mixture of sounds, occlusion (some talkers are hidden from view), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using the interaural phase difference and interaural intensity difference, (2) resolution of each speaker's direction by multi-modal integration of audition, vision and motion, cancelling the motor noise that motion inevitably produces, so that even an unseen or silent speaker can be tracked, and (3) distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a delay of 200 ms in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded.
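The paper itself gives no code, but component (1) can be illustrated concretely. The sketch below is a rough, assumption-laden illustration rather than the authors' implementation: it computes interaural phase and intensity differences for one analysis frame of a two-microphone signal and converts the phase cue into a crude azimuth estimate under a free-field approximation. The function names, 20 cm microphone spacing, FFT size, and sampling rate are illustrative choices, not taken from the paper.

```python
import numpy as np

def interaural_cues(left, right, n_fft=512):
    """Per-frequency interaural phase (IPD) and intensity (IID) differences
    for one analysis frame of a two-microphone signal."""
    window = np.hanning(n_fft)
    L = np.fft.rfft(left[:n_fft] * window)
    R = np.fft.rfft(right[:n_fft] * window)
    ipd = np.angle(L * np.conj(R))                # phase of the cross-spectrum
    eps = 1e-12
    iid = 10.0 * np.log10((np.abs(L) ** 2 + eps)  # log power ratio in dB
                          / (np.abs(R) ** 2 + eps))
    return ipd, iid

def azimuth_from_ipd(ipd, freqs, mic_distance=0.2, speed_of_sound=343.0):
    """Crude per-bin azimuth (degrees) from the IPD, assuming a free-field
    two-microphone array rather than the paper's head geometry."""
    valid = freqs > 0
    itd = np.zeros_like(ipd)
    itd[valid] = ipd[valid] / (2.0 * np.pi * freqs[valid])   # implied delay (s)
    sin_theta = np.clip(itd * speed_of_sound / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Toy usage: white noise reaching the right microphone 3 samples later,
# which for these (made-up) parameters is roughly 19 degrees off-centre.
fs, n_fft, delay = 16000, 512, 3
sig = np.random.randn(4 * n_fft)
left = sig[n_fft:n_fft + n_fft]
right = sig[n_fft - delay:n_fft - delay + n_fft]
freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
ipd, _ = interaural_cues(left, right, n_fft)
# Median over low-frequency bins only, where the phase has not wrapped.
print(np.median(azimuth_from_ipd(ipd, freqs)[1:50]))
```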
Original language | English
---|---
Title of host publication | EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology
Publisher | International Speech Communication Association
Pages | 1193-1196
Number of pages | 4
ISBN (Electronic) | 8790834100, 9788790834104
Publication status | Published - 2001
Externally published | Yes
Event | 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001 - Aalborg, Denmark. Duration: 3 Sep 2001 → 7 Sep 2001
Other
Other | 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001
---|---
Country/Territory | Denmark
City | Aalborg
Period | 3 Sep 2001 → 7 Sep 2001
ASJC Scopus subject areas
- Communication
- Language and Linguistics
- Computer Science Applications
- Software