RWC multimodal database for interactions by integration of spoken language and visual information

S. Hayamizu*, O. Hasegawa, K. Itou, K. Sakaue, K. Tanaka, S. Nagaya, M. Nakazawa, T. Endoh, F. Togawa, K. Sakamoto, K. Yamamoto

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

7 Citations (Scopus)

Abstract

This paper describes our design policy and prototype data collection for the RWC (Real World Computing Program) multimodal database. The database is intended for research and development on the integration of spoken language and visual information for human-computer interaction. The interactions are assumed to use image recognition, image synthesis, speech recognition, and speech synthesis. Visual information also covers non-verbal communication, such as interactions using hand gestures and facial expressions between a human and a human-like CG (Computer Graphics) agent with a face and hands. Based on experiments with interactions in these modes, the specifications of the database are discussed from the viewpoint of controlling variability and the cost of collection.

Original language: English
Pages: 2171-2174
Number of pages: 4
Publication status: Published - 1996
Externally published: Yes
Event: Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4) - Philadelphia, PA, USA
Duration: 1996 Oct 3 - 1996 Oct 6


ASJC Scopus subject areas

  • Computer Science (all)

