Real-time sound source localization and separation based on active audio-visual integration

Hiroshi G. Okuno*, Kazuhiro Nakadai

*Corresponding author for this work

Research output: Article (peer-reviewed)

5 citations (Scopus)

Abstract

Robot audition in the real world must cope with environmental noise, reverberation, and the motor noise caused by the robot's own movements. This paper presents the active direction-pass filter (ADPF), which separates sounds originating from a specified direction using a pair of microphones. The ADPF is implemented by hierarchical integration of visual and auditory processing, with hypothetical reasoning on the interaural phase difference (IPD) and interaural intensity difference (IID) of each subband. In creating hypotheses, the reference IPD and IID are calculated on demand by auditory epipolar geometry. Since the performance of the ADPF depends on direction, the ADPF controls the direction by motor movement. Human tracking and sound source separation based on the ADPF are implemented on an upper-torso humanoid and run in real time on 4 PCs connected over Gigabit Ethernet. The signal-to-noise ratio (SNR) of each sound separated by the ADPF from a mixture of two equally loud speech signals improves from 0 dB to about 10 dB.
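The core idea of a direction-pass filter can be illustrated with a minimal sketch: compute the observed IPD per frequency subband from a stereo pair, compare it against the IPD predicted for a hypothesized direction, and pass only the subbands that match. This sketch uses a simplified free-field delay model with the direction hypothesis given as a time difference of arrival (TDOA); the paper instead derives the reference IPD and IID from auditory epipolar geometry, and all function and parameter names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def direction_pass_filter(left, right, fs, target_tdoa, tol_rad=0.3):
    """Pass subbands whose observed IPD matches a hypothesized direction.

    left, right : time-domain signals from the microphone pair
    fs          : sampling rate in Hz
    target_tdoa : hypothesized inter-microphone delay in seconds
                  (free-field stand-in for the paper's epipolar model)
    tol_rad     : IPD matching tolerance in radians
    """
    n = len(left)
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Observed IPD per subband (frequency bin).
    ipd = np.angle(L * np.conj(R))

    # Reference IPD predicted for the hypothesized direction.
    ref_ipd = 2.0 * np.pi * freqs * target_tdoa

    # Keep only subbands whose wrapped IPD error is within tolerance.
    diff = np.angle(np.exp(1j * (ipd - ref_ipd)))
    mask = np.abs(diff) < tol_rad

    # Reconstruct the separated signal from the passed subbands.
    return np.fft.irfft(L * mask, n=n)
```

For example, mixing a 200 Hz tone arriving with zero delay and a 1000 Hz tone arriving with a 4-sample delay, then filtering with `target_tdoa=0.0`, recovers the 200 Hz component and suppresses the other. The paper's full system additionally checks IID per subband and steers the robot so the target stays near the front, where IPD/IID discrimination is sharpest.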

Original language: English
Pages (from-to): 118-125
Number of pages: 8
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2686
Publication status: Published - 2003
Externally published: Yes

ASJC Scopus subject areas

  • Computer Science (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Theoretical Computer Science

