Cocktail-party effect with Computational Auditory Scene Analysis - Preliminary report

Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawabata

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

One of the most important and interesting phenomena in sophisticated human communication is the cocktail party effect: even at a crowded party, one can attend to a single conversation and then switch to another. To model this effect in a computer implementation, we need a mechanism for understanding general sounds, and Computational Auditory Scene Analysis (CASA) is a novel framework for manipulating sounds. We use it to model the cocktail party effect as follows: sound streams are first extracted from a mixture of sounds, and then one sound stream is selected by focusing attention on it. Because sound stream segregation is an essential first stage of processing for the cocktail party effect, in this paper we present a multi-agent approach to sound stream segregation. The resulting system can segregate a man's voice stream, a woman's voice stream, and a noise stream from a mixture of these sounds.
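
The paper describes a multi-agent segregation system; as a rough, hypothetical illustration of the underlying idea (extract streams from a mixture, then attend to one), the Python sketch below mixes two harmonic "voice" signals with noise and segregates them by simple harmonic band-pass filtering. It is not the authors' method: the fundamentals (120 Hz and 220 Hz), filter widths, and all function names are assumptions made for this toy example.

```python
# Toy sketch of "segregate, then attend" (not the authors' multi-agent CASA system).
# Two harmonic "voices" plus noise are mixed, each voice stream is pulled back out
# with narrow band-pass filters on its harmonics, and one stream is "attended" to.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of signal

def harmonic_voice(f0, n_harmonics=5):
    """Crude periodic 'voice': a sum of decaying harmonics of f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harmonics + 1))

man = harmonic_voice(120.0)      # stand-in for the man's voice stream
woman = harmonic_voice(220.0)    # stand-in for the woman's voice stream
noise = 0.3 * np.random.randn(len(t))
mixture = man + woman + noise    # the "cocktail party" input

def extract_stream(signal, f0, n_harmonics=5, half_width=20.0):
    """Segregate one harmonic stream by summing narrow band-pass filters
    centred on its harmonics (a stand-in for sound stream segregation)."""
    out = np.zeros_like(signal)
    for k in range(1, n_harmonics + 1):
        lo, hi = f0 * k - half_width, f0 * k + half_width
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += sosfiltfilt(sos, signal)
    return out

streams = {
    "man": extract_stream(mixture, 120.0),
    "woman": extract_stream(mixture, 220.0),
}

# "Attention": select one segregated stream to listen to.
attended = "woman"
residual = mixture - sum(streams.values())
print(f"Attending to the {attended} stream; residual energy ~ {np.mean(residual**2):.3f}")
```

In the actual system the segregation is performed by cooperating agents working on a mixture of real voices rather than fixed harmonic filters; the sketch only illustrates the two-stage structure (segregation followed by attentional selection) described in the abstract.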

Original language: English
Pages (from-to): 503-508
Number of pages: 6
Journal: Advances in Human Factors/Ergonomics
Volume: 20
Issue number: B
DOIs
Publication status: Published - 1995
Externally published: Yes

ASJC Scopus subject areas

  • Social Sciences (miscellaneous)
  • Human Factors and Ergonomics
