Abstract
A multimodal interface provides multiple modalities for input and output, such as speech, eye gaze and facial expression. With recent progress in multimodal interfaces, various approaches to multimodal input fusion and output generation have been proposed. However, less attention has been paid to integrating them in a single multimodal input and output system. This paper proposes an approach, termed THE HINGE, for providing agent-based multimodal presentations in accordance with multimodal input fusion results. Analysis of the experimental results shows that the proposed approach enhances the flexibility of the system while maintaining its stability.
Original language | English
---|---
Title of host publication | Conference on Human Factors in Computing Systems - Proceedings
Pages | 3483-3488
Number of pages | 6
DOIs |
Publication status | Published - 2008
Externally published | Yes
Event | 28th Annual CHI Conference on Human Factors in Computing Systems - Florence; Duration: 2008 Apr 5 → 2008 Apr 10
Other

Other | 28th Annual CHI Conference on Human Factors in Computing Systems
---|---
City | Florence
Period | 08/4/5 → 08/4/10
Keywords
- Discourse representation
- Input understanding
- Multimodal input fusion
- Multimodal interfaces
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
- Software