Emotion space for analysis and synthesis of facial expression

Shigeo Morishima, H. Harashima

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Citations (Scopus)

Abstract

This paper presents a new emotion model that provides a criterion for estimating a person's emotional state from a facial image. Our final goal is to realize a natural, user-friendly human-machine communication environment by giving a face to a computer terminal or communication system that can also understand the user's emotional state. The emotion model must therefore express quantitatively the emotional meaning of a parameterized facial expression and its motion. Our emotion model is based on a five-layer neural network, which offers generalization and nonlinear mapping capability. The input and output layers have the same number of units, so the network can be trained as an identity mapping, and an emotion space is constructed in the middle (third) layer. The mapping from the input layer to the middle layer corresponds to emotion recognition, and the mapping from the middle layer to the output layer corresponds to expression synthesis from an emotion-space value. Training is performed with 13 typical emotion patterns, each expressed by expression parameters. A subjective test of this emotion space confirms the validity of the model. The facial action coding system is selected as an efficient scheme for describing subtle facial expressions and motion.
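The network described in the abstract is, in modern terms, a bottleneck autoencoder: an identity mapping whose narrow third layer becomes the emotion space. The sketch below illustrates that idea only; the hidden-layer widths, the expression-parameter dimension, and the random training patterns are illustrative stand-ins, not the paper's actual FACS parameterization or reported configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 13 training patterns of FACS-like expression parameters. The vector
# length (17) and the values are illustrative, not the paper's data.
n_params = 17
patterns = rng.uniform(0.0, 1.0, size=(13, n_params))

# Five layers: input -> hidden -> emotion space (3rd layer) -> hidden -> output.
# Input and output have the same width, so the net is trained as an
# identity mapping; the 3-unit middle layer becomes the emotion space.
sizes = [n_params, 10, 3, 10, n_params]
W = [rng.normal(0.0, 0.5, size=(sizes[i], sizes[i + 1])) for i in range(4)]
b = [np.zeros(sizes[i + 1]) for i in range(4)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Return the activations of every layer, input included."""
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))
    return acts

def train(epochs=3000, lr=1.0):
    """Plain full-batch backpropagation on the identity-mapping task."""
    for _ in range(epochs):
        acts = forward(patterns)
        # Output-layer error for squared loss with sigmoid units.
        delta = (acts[-1] - patterns) * acts[-1] * (1.0 - acts[-1])
        for i in range(3, -1, -1):
            grad_W = acts[i].T @ delta / len(patterns)
            grad_b = delta.mean(axis=0)
            if i > 0:  # propagate error before this layer's weights change
                delta = (delta @ W[i].T) * acts[i] * (1.0 - acts[i])
            W[i] -= lr * grad_W
            b[i] -= lr * grad_b

def analyze(x):
    """Recognition half: expression parameters -> emotion-space point."""
    return forward(x)[2]

def synthesize(e):
    """Synthesis half: emotion-space point -> expression parameters."""
    h = sigmoid(e @ W[2] + b[2])
    return sigmoid(h @ W[3] + b[3])

err_before = float(np.mean(np.abs(forward(patterns)[-1] - patterns)))
train()
err_after = float(np.mean(np.abs(forward(patterns)[-1] - patterns)))
```

Here `analyze` plays the role of emotion recognition and `synthesize` the role of expression synthesis from an emotion-space value; `err_before` and `err_after` simply confirm that training the identity mapping reduces reconstruction error on the 13 patterns.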

Original language: English
Title of host publication: Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 188-193
Number of pages: 6
ISBN (Electronic): 0780314077, 9780780314078
DOIs: 10.1109/ROMAN.1993.367724
Publication status: Published - 1993 Jan 1
Externally published: Yes
Event: 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993 - Tokyo, Japan
Duration: 1993 Nov 3 - 1993 Nov 5

Publication series

Name: Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993

Conference

Conference: 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993
Country: Japan
City: Tokyo
Period: 93/11/3 - 93/11/5


ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Communication

Cite this

Morishima, S., & Harashima, H. (1993). Emotion space for analysis and synthesis of facial expression. In Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993 (pp. 188-193). [367724] (Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ROMAN.1993.367724
