Abstract
Understanding three simultaneous speeches is proposed as a challenge problem to foster research in artificial intelligence, speech and sound understanding and recognition, and computational auditory scene analysis. Automatic speech recognition in noisy environments is usually attacked with speech enhancement techniques such as noise reduction and speaker adaptation. However, the signal-to-noise ratio of one utterance against another in a mixture of two simultaneous speeches is too poor for these techniques to apply, so novel techniques need to be developed. One candidate is to use speech stream segregation as a front end for automatic speech recognition systems. Preliminary experiments on understanding two simultaneous speeches show that the proposed challenge problem is feasible with speech stream segregation. A detailed research plan and benchmark sounds for the proposed challenge problem are also presented.
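The key premise is that a competing speaker is roughly as loud as the target speaker, so the effective signal-to-noise ratio sits near 0 dB rather than the 10 dB or more that enhancement-based front ends typically assume. The sketch below (not from the paper; the signals, sampling rate, and helper function are hypothetical stand-ins) illustrates this with two equal-power signals.

```python
# Minimal sketch, assuming two utterances of comparable power mixed together.
# Random noise stands in for real speech; 16 kHz and 1 s duration are assumed.
import numpy as np

def snr_db(target, interference):
    """SNR in dB of `target` when `interference` is treated as noise."""
    return 10.0 * np.log10(np.sum(target**2) / np.sum(interference**2))

rng = np.random.default_rng(0)
speech_a = rng.standard_normal(16000)  # stand-in for speaker A
speech_b = rng.standard_normal(16000)  # stand-in for speaker B

# With comparable power, the SNR hovers around 0 dB, far below what
# noise-reduction or speaker-adaptation front ends are designed for.
print(f"SNR of speaker A against speaker B: {snr_db(speech_a, speech_b):.1f} dB")
```

This is why the abstract argues for segregating the speech streams first, rather than trying to enhance one speaker's signal within the mixture.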
| Original language | English |
| --- | --- |
| Title of host publication | IJCAI International Joint Conference on Artificial Intelligence |
| Pages | 30-35 |
| Number of pages | 6 |
| Volume | 1 |
| Publication status | Published - 1997 |
| Externally published | Yes |
| Event | 15th International Joint Conference on Artificial Intelligence, IJCAI 1997 - Nagoya, Aichi, Japan. Duration: 1997 Aug 23 → 1997 Aug 29 |
Other

| Other | 15th International Joint Conference on Artificial Intelligence, IJCAI 1997 |
| --- | --- |
| Country | Japan |
| City | Nagoya, Aichi |
| Period | 97/8/23 → 97/8/29 |
ASJC Scopus subject areas
- Artificial Intelligence