Study of real time facial expression detection for virtual space teleconferencing

Kazuyuki Ebihara, Jun Ohya, Fumio Kishino

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

A new method for real-time detection of facial expressions from time-sequential images is proposed. The proposed method does not need the tape marks that are pasted onto the face for real-time expression detection in the current implementation of the Virtual Space Teleconferencing system. In the proposed method, four windows are applied to four areas of the face image: the left eye, right eye, mouth, and forehead. Each window is divided into blocks of 8 by 8 pixels. The Discrete Cosine Transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. A conversion table relates the feature vectors to real 3D movements of the face. Experiments show promising results for accurate expression detection and for a real-time hardware implementation of the proposed method.
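
The abstract describes the feature-extraction pipeline concretely enough to sketch. Below is a minimal Python sketch (not the authors' implementation) of the per-window DCT energy features; the exact grouping of coefficients into horizontal, vertical, and diagonal energies, the function names, and the final conversion-table step are assumptions for illustration.

    # Minimal sketch of the DCT-based window features the abstract describes.
    # Assumptions (not specified in the paper): first DCT row = horizontal
    # frequencies, first column = vertical, remaining AC terms = diagonal.
    import numpy as np
    from scipy.fft import dctn

    BLOCK = 8  # 8x8-pixel blocks, as in the abstract

    def directional_energies(block):
        """Return (horizontal, vertical, diagonal) DCT energy sums for one block."""
        c = dctn(block.astype(float), norm="ortho")  # 2D DCT of the block
        e = c ** 2                                   # coefficient energies
        horiz = e[0, 1:].sum()   # assumed: first row (excl. DC) = horizontal
        vert = e[1:, 0].sum()    # assumed: first column (excl. DC) = vertical
        diag = e[1:, 1:].sum()   # assumed: remaining AC terms = diagonal
        return np.array([horiz, vert, diag])

    def window_feature(window):
        """Feature vector of one window: directional energies summed over all 8x8 blocks."""
        h, w = window.shape
        feat = np.zeros(3)
        for y in range(0, h - h % BLOCK, BLOCK):
            for x in range(0, w - w % BLOCK, BLOCK):
                feat += directional_energies(window[y:y + BLOCK, x:x + BLOCK])
        return feat

    # Each of the four windows (left eye, right eye, mouth, forehead) yields
    # such a vector; per the abstract, a conversion (lookup) table then maps
    # the feature vectors to real 3D movements of the face.

For example, applying window_feature to a 32-by-32 mouth region yields a 3-element vector whose balance of horizontal, vertical, and diagonal energy shifts as the mouth opens or stretches, which is the kind of change a conversion table can map to 3D motion.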

Original language: English
Title of host publication: Robot and Human Communication - Proceedings of the IEEE International Workshop
Pages: 247-251
Number of pages: 5
Publication status: Published - 1995
Externally published: Yes
Event: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN - Tokyo, Japan
Duration: 1995 Jul 5 - 1995 Jul 7

Other

Other: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN
City: Tokyo, Japan
Period: 95/7/5 - 95/7/7

Fingerprint

Teleconferencing
Discrete cosine transforms
Tapes
Pixels
Hardware
Experiments

ASJC Scopus subject areas

  • Hardware and Architecture
  • Software

Cite this

Ebihara, K., Ohya, J., & Kishino, F. (1995). Study of real time facial expression detection for virtual space teleconferencing. In Robot and Human Communication - Proceedings of the IEEE International Workshop (pp. 247-251).
