Unified auditory functions based on Bayesian topic model

Takuma Otsuka, Katsuhiko Ishiguro, Hiroshi Sawada, Hiroshi G. Okuno

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Existing auditory functions for robots, such as sound source localization and separation, have been implemented in a cascaded framework whose overall performance may be degraded by a failure in any of its subsystems. These approaches often require careful, environment-dependent tuning of each subsystem to achieve better performance. This paper presents a unified framework for sound source localization and separation in which the whole system is integrated as a Bayesian topic model. The method improves both localization and separation with a common configuration across various environments through iterative inference using Gibbs sampling. Experimental results from three environments with different reverberation times confirm that our method outperforms state-of-the-art sound source separation methods, especially in reverberant environments, and achieves localization performance comparable to that of an existing robot audition system.
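
The abstract names the inference machinery (a Bayesian topic model fitted by iterative Gibbs sampling) but not its shape. As a rough, hypothetical sketch of what such inference can look like, the toy Python below assigns direction-of-arrival (DOA) observations to latent sources with a collapsed Gibbs sampler in the LDA style. The synthetic data, the priors, and all variable names are assumptions for illustration only; this is not the generative model proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: a collapsed Gibbs sampler that assigns observations
# (each summarized by a discrete direction-of-arrival index) to K latent
# sources, LDA-style. The toy data, priors, and variable names are
# assumptions for exposition, not the model from the paper.
D, K, N = 72, 3, 600                       # DOA bins (5-degree grid), sources, observations
true_doas = rng.choice([10, 30, 55], size=N)
obs = (true_doas + rng.integers(-2, 3, size=N)) % D    # noisy DOA index per observation

alpha, beta = 1.0, 0.5                     # Dirichlet hyperparameters
z = rng.integers(0, K, size=N)             # initial source assignment per observation
n_k = np.bincount(z, minlength=K).astype(float)        # observations per source
n_kd = np.zeros((K, D))                                 # DOA histogram per source
for i in range(N):
    n_kd[z[i], obs[i]] += 1.0

for sweep in range(50):                    # iterative inference by Gibbs sampling
    for i in range(N):
        k, d = z[i], obs[i]
        n_k[k] -= 1.0                      # remove the current assignment
        n_kd[k, d] -= 1.0
        # p(z_i = k | rest) proportional to (n_k + alpha) * (n_kd + beta) / (n_k + D * beta)
        p = (n_k + alpha) * (n_kd[:, d] + beta) / (n_k + D * beta)
        k_new = rng.choice(K, p=p / p.sum())
        z[i] = k_new
        n_k[k_new] += 1.0
        n_kd[k_new, d] += 1.0

# Peaks of the per-source DOA histograms act as localization estimates, while the
# assignments z group observations by source (a separation-like clustering).
print("estimated DOA bin per source:", np.argmax(n_kd + beta, axis=1))

With these well-separated synthetic clusters, the sampler typically settles within a few dozen sweeps on one dominant DOA bin per latent source, which gives the flavor of performing localization (per-source direction estimates) and separation (per-observation source assignments) jointly in a single model, as the abstract describes.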

Original language: English
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 2370-2376
Number of pages: 7
ISBN (Print): 9781467317375
DOI: 10.1109/IROS.2012.6385787
Publication status: Published - 2012
Externally published: Yes
Event: 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012 - Vilamoura, Algarve, Portugal
Duration: 2012 Oct 7 – 2012 Oct 12

Fingerprint

  • Acoustic waves
  • Robots
  • Source separation
  • Reverberation
  • Audition
  • Tuning
  • Sampling

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Otsuka, T., Ishiguro, K., Sawada, H., & Okuno, H. G. (2012). Unified auditory functions based on Bayesian topic model. In IEEE International Conference on Intelligent Robots and Systems (pp. 2370-2376). [6385787] https://doi.org/10.1109/IROS.2012.6385787
