Facial motion synthesis for intelligent man-machine interface

Shigeo Morishima, Shin'ichi Okada, Hiroshi Harashima

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

A facial motion image synthesis method for an intelligent man-machine interface is examined. Here, the intelligent man-machine interface is a friendly interface using voice and pictures, in which a human face appears on the screen and answers questions, in contrast to existing user interfaces that rely primarily on text. Since what appears on the screen is a human face, interaction with the machine comes to resemble interaction with an actual human being when the speech mannerisms and facial expressions are natural. Implementing such an interface therefore requires synthesizing natural facial expressions on the screen. This paper investigates a method for synthesizing facial motion images from given text and emotion information. The proposed method is based on the analysis-synthesis image coding approach: it constructs facial images by assigning intensity data to a three-dimensional (3-D) model matched to the person in question, and it synthesizes facial expressions by deforming the 3-D model according to a predetermined set of rules driven by the input phonemes and emotion, yielding reasonably natural facial images.
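
As a rough, non-authoritative illustration of the rule-driven idea summarized above (phoneme- and emotion-dependent deformation of a person-specific 3-D model), the following Python sketch deforms a small set of control vertices using made-up rule tables. Every name, vertex coordinate, and rule value here is a hypothetical assumption for illustration only and is not taken from the paper.

# Minimal sketch (not the authors' implementation): a neutral 3-D control-point
# model is deformed by combining a mouth shape chosen per input phoneme with an
# offset chosen per input emotion, then interpolated over time.
import numpy as np

# Neutral 3-D model: a handful of control vertices (x, y, z), stand-ins for the
# feature points of a person-specific face model.
NEUTRAL = np.array([
    [0.0, -1.0, 0.0],    # chin
    [-0.5, -0.6, 0.1],   # left mouth corner
    [0.5, -0.6, 0.1],    # right mouth corner
    [0.0, -0.5, 0.15],   # upper lip
    [0.0, -0.7, 0.15],   # lower lip
])

# Rule table: per-phoneme displacement of the control vertices (mouth shape).
PHONEME_RULES = {
    "a": np.array([[0, -0.15, 0], [0, -0.05, 0], [0, -0.05, 0],
                   [0, 0.02, 0], [0, -0.12, 0]]),
    "i": np.array([[0, -0.02, 0], [-0.08, 0, 0], [0.08, 0, 0],
                   [0, 0, 0], [0, -0.02, 0]]),
    "u": np.array([[0, -0.05, 0], [0.06, 0, 0.05], [-0.06, 0, 0.05],
                   [0, 0, 0.05], [0, -0.04, 0.05]]),
    "sil": np.zeros((5, 3)),  # silence: no mouth deformation
}

# Rule table: per-emotion displacement added on top of the mouth shape.
EMOTION_RULES = {
    "neutral": np.zeros((5, 3)),
    "joy": np.array([[0, 0, 0], [-0.05, 0.08, 0], [0.05, 0.08, 0],
                     [0, 0.02, 0], [0, 0, 0]]),
    "anger": np.array([[0, -0.03, 0], [0, -0.05, 0], [0, -0.05, 0],
                       [0, -0.02, 0], [0, -0.02, 0]]),
}

def synthesize_frames(phonemes, emotion, frames_per_phoneme=4):
    """Return a sequence of deformed vertex arrays for a phoneme string,
    linearly interpolating between successive mouth shapes so the motion
    changes continuously rather than switching abruptly."""
    emo = EMOTION_RULES[emotion]
    targets = [NEUTRAL + PHONEME_RULES[p] + emo for p in phonemes]
    frames, current = [], NEUTRAL + emo
    for target in targets:
        for k in range(1, frames_per_phoneme + 1):
            t = k / frames_per_phoneme
            frames.append((1 - t) * current + t * target)
        current = target
    return frames

if __name__ == "__main__":
    seq = synthesize_frames(["sil", "a", "i", "u", "sil"], emotion="joy")
    print(f"{len(seq)} frames; first frame vertices:\n{seq[0]}")

In this sketch, linear interpolation between successive mouth shapes stands in for the temporal smoothing a real system would need; the paper's actual deformation rules, model parameters, and intensity-data mapping are not reproduced here.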

Original language: English
Pages (from-to): 50-59
Number of pages: 10
Journal: Systems and Computers in Japan
Volume: 22
Issue number: 5
Publication status: Published - 1991
Externally published: Yes

Fingerprint

  • Man-machine Interface
  • Facial Expression
  • Synthesis
  • Motion
  • Image Coding
  • 3-D Model
  • User Interface
  • Face
  • Model Matching
  • Person
  • Interaction
  • Human
  • Emotion

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Hardware and Architecture
  • Information Systems
  • Theoretical Computer Science

Cite this

Facial motion synthesis for intelligent man-machine interface. / Morishima, Shigeo; Okada, Shin'ichi; Harashima, Hiroshi.

In: Systems and Computers in Japan, Vol. 22, No. 5, 1991, p. 50-59.

@article{4e3a50c5ec4d45e7b23de63c25a6e360,
    title = "Facial motion synthesis for intelligent man-machine interface",
    author = "Shigeo Morishima and Shin'ichi Okada and Hiroshi Harashima",
    year = "1991",
    language = "English",
    journal = "Systems and Computers in Japan",
    volume = "22",
    number = "5",
    pages = "50--59",
    issn = "0882-1666",
    publisher = "John Wiley and Sons Inc.",
}

TY - JOUR
T1 - Facial motion synthesis for intelligent man-machine interface
AU - Morishima, Shigeo
AU - Okada, Shin'ichi
AU - Harashima, Hiroshi
PY - 1991
Y1 - 1991
UR - http://www.scopus.com/inward/record.url?scp=0025745916&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0025745916&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:0025745916
VL - 22
SP - 50
EP - 59
JO - Systems and Computers in Japan
JF - Systems and Computers in Japan
SN - 0882-1666
IS - 5
ER -