Abstract
We present a method of improving sound source separation using vision. Sound source separation is an essential function for auditory scene understanding: it separates a stream of sounds generated by multiple sound sources. Once streams are separated, a recognition process, such as speech recognition, can operate on a single stream rather than on the mixed sound of several speakers. Performance is known to improve with stereo/binaural microphones or a microphone array, which provide spatial information for separation. However, these methods still leave more than 20 degrees of positional ambiguity. In this paper, we further add visual information to provide more specific and accurate position information. As a result, separation capability is drastically improved. In addition, we found that the use of approximate direction information drastically improves the object tracking accuracy of a simple vision system, which in turn improves the performance of the auditory system. We claim that the integration of visual and auditory inputs improves the performance of tasks in each perceptual modality, such as sound source separation and object tracking, by bootstrapping.
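As an illustration of the fusion idea described above (this is a hedged sketch, not the paper's actual algorithm), one can narrow a coarse auditory direction estimate, ambiguous to roughly 20 degrees, using visual detections: keep only visual candidates inside the auditory ambiguity window and pick the closest one. The function name, parameters, and degree-based representation are all hypothetical.

```python
# Hypothetical sketch of audio-visual direction fusion (not from the paper).
# Directions are azimuth angles in degrees; all names are illustrative.

def fuse_direction(auditory_deg, visual_candidates_deg, ambiguity_deg=20.0):
    """Return the visual candidate that falls within the auditory
    ambiguity window, or the coarse auditory estimate if none does."""
    in_window = [v for v in visual_candidates_deg
                 if abs(v - auditory_deg) <= ambiguity_deg]
    if not in_window:
        # No visual confirmation: fall back to the auditory estimate.
        return auditory_deg
    # Prefer the visual candidate closest to the auditory estimate.
    return min(in_window, key=lambda v: abs(v - auditory_deg))
```

Under this sketch, a visually tracked speaker refines the separation direction, while the auditory estimate restricts which visual detections are plausible, the bootstrapping relation the abstract claims.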
Original language | English |
---|---|
Title of host publication | Proceedings of the National Conference on Artificial Intelligence |
Place of Publication | Menlo Park, CA, United States |
Publisher | AAAI |
Pages | 768-775 |
Number of pages | 8 |
ISBN (Print) | 0262511061 |
Publication status | Published - 1999 |
Externally published | Yes |
Event | Proceedings of the 1999 16th National Conference on Artificial Intelligence (AAAI-99), 11th Innovative Applications of Artificial Intelligence Conference (IAAI-99) - Orlando, FL, USA |
Duration | 1999 Jul 18 → 1999 Jul 22 |
ASJC Scopus subject areas
- Software