Real-Time multiple speaker tracking by multi-modal integration for mobile robots

Kazuhiro Nakadai, Ken Ichi Hidai, Hiroshi G. Okuno, Hiroaki Kitano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

In this paper, real-time multiple speaker tracking is addressed, because it is essential in robot perception and human-robot social interaction. The difficulty lies in treating a mixture of sounds, occlusion (some talkers are hidden), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, canceling the inevitable motor noise during motion, in the case of an unseen or silent speaker; and (3) distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded.
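The interaural phase difference (IPD) cue named in the abstract can be sketched as follows: under a far-field model, the inter-microphone time delay satisfies ITD = d·sin(θ)/c, and the phase difference at a single frequency gives ITD = IPD/(2πf). The microphone spacing and frequency below are illustrative assumptions, not values from the paper; this is a minimal sketch of the cue, not the authors' implementation.

```python
import numpy as np

def azimuth_from_ipd(ipd_rad: float, freq_hz: float,
                     mic_spacing_m: float = 0.18,
                     speed_of_sound: float = 343.0) -> float:
    """Convert an interaural phase difference (radians) at one frequency
    into an azimuth estimate (degrees), using the far-field delay model
    ITD = d * sin(theta) / c.  Spacing and speed are assumed values."""
    itd = ipd_rad / (2.0 * np.pi * freq_hz)          # phase -> time delay
    s = np.clip(itd * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# A source directly ahead produces zero phase difference:
print(azimuth_from_ipd(0.0, 1000.0))  # 0.0
```

In practice such an estimate is computed per frequency bin and aggregated across the spectrum; the paper additionally fuses it with the interaural intensity difference and visual cues.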

Original language: English
Title of host publication: EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology
Publisher: International Speech Communication Association
Pages: 1193-1196
Number of pages: 4
ISBN (Electronic): 8790834100, 9788790834104
Publication status: Published - 2001
Externally published: Yes
Event: 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001 - Aalborg, Denmark
Duration: 2001 Sep 3 - 2001 Sep 7

Other

Other: 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001
Country: Denmark
City: Aalborg
Period: 01/9/3 - 01/9/7

ASJC Scopus subject areas

  • Communication
  • Linguistics and Language
  • Computer Science Applications
  • Software

Cite this

Nakadai, K., Hidai, K. I., Okuno, H. G., & Kitano, H. (2001). Real-Time multiple speaker tracking by multi-modal integration for mobile robots. In EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology (pp. 1193-1196). International Speech Communication Association.

Real-Time multiple speaker tracking by multi-modal integration for mobile robots. / Nakadai, Kazuhiro; Hidai, Ken Ichi; Okuno, Hiroshi G.; Kitano, Hiroaki.

EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology. International Speech Communication Association, 2001. p. 1193-1196.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Nakadai, K, Hidai, KI, Okuno, HG & Kitano, H 2001, Real-Time multiple speaker tracking by multi-modal integration for mobile robots. in EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology. International Speech Communication Association, pp. 1193-1196, 7th European Conference on Speech Communication and Technology - Scandinavia, EUROSPEECH 2001, Aalborg, Denmark, 01/9/3.
Nakadai K, Hidai KI, Okuno HG, Kitano H. Real-Time multiple speaker tracking by multi-modal integration for mobile robots. In EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology. International Speech Communication Association. 2001. p. 1193-1196
Nakadai, Kazuhiro ; Hidai, Ken Ichi ; Okuno, Hiroshi G. ; Kitano, Hiroaki. / Real-Time multiple speaker tracking by multi-modal integration for mobile robots. EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology. International Speech Communication Association, 2001. pp. 1193-1196
@inproceedings{648264a634774c7686b40779d328f2a6,
title = "Real-Time multiple speaker tracking by multi-modal integration for mobile robots",
abstract = "In this paper, real-time multiple speaker tracking is addressed, because it is essential in robot perception and human-robot social interaction. The difficulty lies in treating a mixture of sounds, occlusion (some talkers are hidden), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, canceling the inevitable motor noise during motion, in the case of an unseen or silent speaker; and (3) distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded.",
author = "Kazuhiro Nakadai and Hidai, {Ken Ichi} and Okuno, {Hiroshi G.} and Hiroaki Kitano",
year = "2001",
language = "English",
pages = "1193--1196",
booktitle = "EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology",
publisher = "International Speech Communication Association",

}

TY - GEN

T1 - Real-Time multiple speaker tracking by multi-modal integration for mobile robots

AU - Nakadai, Kazuhiro

AU - Hidai, Ken Ichi

AU - Okuno, Hiroshi G.

AU - Kitano, Hiroaki

PY - 2001

Y1 - 2001

N2 - In this paper, real-time multiple speaker tracking is addressed, because it is essential in robot perception and human-robot social interaction. The difficulty lies in treating a mixture of sounds, occlusion (some talkers are hidden), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, canceling the inevitable motor noise during motion, in the case of an unseen or silent speaker; and (3) distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded.

AB - In this paper, real-time multiple speaker tracking is addressed, because it is essential in robot perception and human-robot social interaction. The difficulty lies in treating a mixture of sounds, occlusion (some talkers are hidden), and real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, canceling the inevitable motor noise during motion, in the case of an unseen or silent speaker; and (3) distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded.

UR - http://www.scopus.com/inward/record.url?scp=85009104917&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85009104917&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85009104917

SP - 1193

EP - 1196

BT - EUROSPEECH 2001 - SCANDINAVIA - 7th European Conference on Speech Communication and Technology

PB - International Speech Communication Association

ER -