Real-time facial expression detection based on frequency domain transform

Kazuyuki Ebihara, Jun Ohya, Fumio Kishino

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

A new method for the real-time detection of facial expressions from time-sequential images is proposed. Unlike the current implementation for virtual space teleconferencing, the proposed method does not need the tape marks that are pasted on the face to detect expressions in real time. In the proposed method, four windows are applied to four areas of the face image: the left eye, right eye, mouth, and forehead. Each window is divided into blocks of 8 × 8 pixels. The discrete cosine transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. To convert the DCT features into virtual tape mark movements, we represent the displacement of a virtual tape mark by a polynomial of the DCT features for the three directions. We apply a genetic algorithm to training facial expression image sequences to find the optimal set of coefficients that minimizes the difference between the real and converted displacements of the virtual tape marks. Experimental results show the effectiveness of the proposed method.
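The feature-extraction step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the abstract does not specify how DCT coefficients are grouped into the three directions, so the partition used here (coefficients above the main diagonal as horizontal-dominant, below as vertical-dominant, on the diagonal as diagonal) is an assumption, as is the `directional_energies` function name.

```python
import numpy as np

def dct2(block):
    # 2-D orthonormal DCT-II computed as C @ block @ C.T,
    # where C is the DCT-II basis matrix.
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)
    return C @ block @ C.T

def directional_energies(window):
    """Sum squared DCT coefficients of each 8x8 block in the window
    along assumed horizontal, vertical, and diagonal groupings,
    yielding a 3-element feature vector for the window."""
    h = v = d = 0.0
    rows, cols = window.shape
    i, j = np.meshgrid(range(8), range(8), indexing="ij")
    ac = (i + j) > 0  # mask that excludes the DC term at (0, 0)
    for r in range(0, rows - rows % 8, 8):
        for c in range(0, cols - cols % 8, 8):
            E = dct2(window[r:r + 8, c:c + 8]) ** 2
            h += E[(j > i) & ac].sum()   # horizontal-frequency dominant
            v += E[(i > j) & ac].sum()   # vertical-frequency dominant
            d += E[(i == j) & ac].sum()  # diagonal
    return np.array([h, v, d])
```

Because the DCT matrix is orthonormal, the three sums together account for exactly the AC energy of each block, so the features change with spatial structure (e.g., wrinkles on the forehead, an opening mouth) rather than with overall brightness.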

Original language: English
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Pages: 916-926
Number of pages: 11
Volume: 2727
Edition: 2/-
Publication status: Published - 1996
Externally published: Yes
Event: Visual Communications and Image Processing '96, Part 2 (of 3) - Orlando, FL, USA
Duration: 1996 Mar 17 - 1996 Mar 20



ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Condensed Matter Physics

Cite this

Ebihara, K., Ohya, J., & Kishino, F. (1996). Real-time facial expression detection based on frequency domain transform. In Proceedings of SPIE - The International Society for Optical Engineering (2/- ed., Vol. 2727, pp. 916-926).

@inproceedings{8e66bbb618564732afc0cc40adc82dfd,
title = "Real-time facial expression detection based on frequency domain transform",
author = "Kazuyuki Ebihara and Jun Ohya and Fumio Kishino",
year = "1996",
language = "English",
isbn = "0819421030",
volume = "2727",
pages = "916--926",
booktitle = "Proceedings of SPIE - The International Society for Optical Engineering",
edition = "2/-",

}

TY - GEN

T1 - Real-time facial expression detection based on frequency domain transform

AU - Ebihara, Kazuyuki

AU - Ohya, Jun

AU - Kishino, Fumio

PY - 1996

Y1 - 1996

UR - http://www.scopus.com/inward/record.url?scp=0030392153&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0030392153&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0030392153

SN - 0819421030

SN - 9780819421036

VL - 2727

SP - 916

EP - 926

BT - Proceedings of SPIE - The International Society for Optical Engineering

ER -