Study of real time facial expression detection for virtual space teleconferencing

Kazuyuki Ebihara*, Jun Ohya, Fumio Kishino

*Corresponding author for this work

Research output: Paper › peer-reviewed

5 Citations (Scopus)

Abstract

A new method for real-time detection of facial expressions from time-sequential images is proposed. Unlike the current implementation for the Virtual Space Teleconferencing, the proposed method does not require the tape marks that were pasted onto the face for real-time expression detection. In the proposed method, four windows are applied to four areas of the face image: the left and right eyes, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The Discrete Cosine Transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. Using a conversion table, the feature vectors are related to real 3D movements of the face. Experiments show promising results for accurate expression detection and for a real-time hardware implementation of the proposed method.
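The feature-extraction step described in the abstract (8×8 blocks, DCT, directional energy sums) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper does not specify which DCT coefficients are grouped into the horizontal, vertical, and diagonal sums, so the grouping below (first row, first column, and the remaining AC block) is an assumption, as are all function names.

```python
import math


def dct2_8x8(block):
    """Naive orthonormal 2-D DCT-II of an 8x8 block (list of lists)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out


def directional_energies(coeffs):
    """Sum squared DCT energies by direction, excluding the DC term.

    Assumed grouping: row 0 = horizontal frequency content,
    column 0 = vertical frequency content, remaining AC
    coefficients = diagonal/mixed content.
    """
    horiz = sum(coeffs[0][v] ** 2 for v in range(1, 8))
    vert = sum(coeffs[u][0] ** 2 for u in range(1, 8))
    diag = sum(coeffs[u][v] ** 2 for u in range(1, 8) for v in range(1, 8))
    return horiz, vert, diag
```

A window's feature vector would then be the concatenation of these three sums over all of its blocks; a flat block yields near-zero energy in every direction, while a block varying along only one axis concentrates its energy in the corresponding sum.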

Original language: English
Pages: 247-251
Number of pages: 5
Publication status: Published - Dec 1 1995
Externally published: Yes
Event: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN - Tokyo, Japan
Duration: Jul 5 1995 - Jul 7 1995

Other

Other: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN
City: Tokyo, Japan
Period: 95/7/5 - 95/7/7

ASJC Scopus subject areas

  • Hardware and Architecture
  • Software
