Automatic generation of multi-modal dialogue from text based on discourse structure analysis

Helmut Prendinger*, Paul Piwek, Mitsuru Ishizuka

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

In this paper, we propose a novel method for automatically generating engaging multi-modal content from text. Rhetorical Structure Theory (RST) is used to decompose text into discourse units and to identify the rhetorical discourse relations between them. Rhetorical relations are then mapped to question-answer pairs in an information-preserving way, i.e., the original text and the resulting dialogue convey essentially the same meaning. Finally, the dialogue is "acted out" by two virtual agents. The network of dialogue structures automatically built up during this process, called DialogueNet, can be reused for other purposes, such as personalization or question-answering.
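To make the relation-to-dialogue step concrete, the sketch below shows one plausible way to turn an RST relation between a nucleus and a satellite into a short question-answer exchange between two agents. This is a minimal illustration assuming hypothetical relation names, question templates, and agent roles; it is not the authors' actual mapping rules or implementation.

```python
# Hypothetical sketch: map one RST relation (nucleus + satellite) to a
# question-answer exchange spoken by two virtual agents. Relation names,
# templates, and agent labels are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class DiscourseUnit:
    text: str


@dataclass
class RSTRelation:
    name: str                 # e.g. "Cause", "Elaboration", "Condition"
    nucleus: DiscourseUnit
    satellite: DiscourseUnit


# Assumed question templates keyed by relation name; the paper's mapping is
# designed so that the dialogue preserves the information in the source text.
QUESTION_TEMPLATES = {
    "Cause": "Why is that?",
    "Elaboration": "Can you tell me more about that?",
    "Condition": "Under what condition?",
}


def relation_to_dialogue(rel: RSTRelation):
    """Turn one rhetorical relation into a list of (speaker, utterance) turns."""
    question = QUESTION_TEMPLATES.get(rel.name, "What else can you say about that?")
    return [
        ("AgentA", rel.nucleus.text),    # presenter states the nucleus
        ("AgentB", question),            # interlocutor asks the mapped question
        ("AgentA", rel.satellite.text),  # presenter answers with the satellite
    ]


if __name__ == "__main__":
    rel = RSTRelation(
        name="Cause",
        nucleus=DiscourseUnit("The show was cancelled."),
        satellite=DiscourseUnit("Too few tickets had been sold."),
    )
    for speaker, line in relation_to_dialogue(rel):
        print(f"{speaker}: {line}")
```

Chaining such exchanges over all relations identified in a text would yield the kind of dialogue network ("DialogueNet") the abstract describes.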

Original language: English
Title of host publication: ICSC 2007 International Conference on Semantic Computing
Pages: 27-34
Number of pages: 8
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: ICSC 2007 International Conference on Semantic Computing - Irvine, CA
Duration: 2007 Sept 17 → 2007 Sept 19

Other

Other: ICSC 2007 International Conference on Semantic Computing
City: Irvine, CA
Period: 07/9/17 → 07/9/19

ASJC Scopus subject areas

  • Computer Science(all)
  • Computer Science Applications
