Computational model of sound stream segregation with multi-agent paradigm

Tomohiro Nakatani, Takeshi Kawabata, Hiroshi G. Okuno

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

This paper presents a new computational model for sound stream segregation based on a multi-agent paradigm. Sound streams are thought to play a key role in auditory scene analysis, which provides a general framework for auditory research, including voiced speech and music. Each agent is dynamically allocated to a sound stream and segregates that stream by focusing on its consistent attributes. Agents interact with each other to resolve stream interference. In this paper, we design agents to segregate harmonic streams and a noise stream. The presented system can segregate all the streams from a mixture of male voiced speech, female voiced speech, and background non-harmonic noise.
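The dynamic-allocation idea in the abstract can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: all names (`HarmonicAgent`, `claim`, `segregate`) and the frequency tolerance are assumptions. Each input frame yields candidate fundamental frequencies; existing harmonic agents claim the candidate nearest their current estimate, unclaimed candidates spawn new agents, and (in the paper's full model) a separate noise agent would take the residual non-harmonic energy, which is omitted here.

```python
# Toy sketch of multi-agent stream allocation (hypothetical names,
# not the paper's actual algorithm).

class HarmonicAgent:
    """Tracks one harmonic stream via its fundamental frequency (Hz)."""

    def __init__(self, f0):
        self.f0 = f0          # current fundamental estimate
        self.track = [f0]     # segregated stream: F0 trajectory so far

    def claim(self, candidates, tol=20.0):
        """Claim the nearest candidate within `tol` Hz, if any."""
        if not candidates:
            return None
        best = min(candidates, key=lambda f: abs(f - self.f0))
        if abs(best - self.f0) <= tol:
            self.f0 = best
            self.track.append(best)
            return best
        return None


def segregate(frames):
    """Dynamically allocate agents to streams, frame by frame."""
    agents = []
    for candidates in frames:
        remaining = list(candidates)
        for agent in agents:              # existing agents claim first
            f = agent.claim(remaining)
            if f is not None:
                remaining.remove(f)
        for f in remaining:               # unclaimed candidates spawn agents
            agents.append(HarmonicAgent(f))
    return [agent.track for agent in agents]


# Two concurrent voices (e.g. male ~120 Hz, female ~220 Hz):
frames = [[120.0], [122.0, 220.0], [125.0, 218.0], [221.0]]
tracks = segregate(frames)
# tracks -> [[120.0, 122.0, 125.0], [220.0, 218.0, 221.0]]
```

Resolving interference between agents (e.g. two voices whose F0 trajectories cross) is the harder part of the paper's contribution; the nearest-candidate rule above is only the simplest placeholder for it.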

Original language: English
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: 2671-2674
Number of pages: 4
Volume: 4
Publication status: Published - 1995
Externally published: Yes
Event: Proceedings of the 1995 20th International Conference on Acoustics, Speech, and Signal Processing. Part 2 (of 5) - Detroit, MI, USA
Duration: 1995 May 9 - 1995 May 12



ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Acoustics and Ultrasonics

Cite this

Nakatani, T., Kawabata, T., & Okuno, H. G. (1995). Computational model of sound stream segregation with multi-agent paradigm. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (Vol. 4, pp. 2671-2674).