Object recognition with luminance, rotation and location invariance

Takami Satonaka, Takaaki Baba, Tatsuo Otsuki, Takao Chikamura, Teresa H. Meng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

In this paper we propose a neural network based on image synthesis, histogram-adaptive quantization and the discrete cosine transform (DCT) for object recognition with luminance, rotation and location invariance. An efficient representation of the invariant features is constructed using a three-dimensional memory structure. The luminance and rotation invariance is demonstrated by reduced error rates in face recognition: with the aid of the proposed image synthesis procedure, the error rate of the two-dimensional DCT is reduced from 13.6% to 2.4%. This 2.4% error rate is better than all previously reported results using Karhunen-Loeve transform convolution networks and eigenface models. By using the DCT, our approach also enjoys the additional advantage of greatly reduced computational complexity.
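The paper's full pipeline (image synthesis, histogram-adaptive quantization, the three-dimensional memory structure) is not reproduced in this record, but the core 2-D DCT feature-extraction step it builds on can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the 16×16 toy image, and the choice to keep only the low-frequency corner of the coefficient matrix are assumptions for demonstration.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n (rows = frequencies)."""
    k = np.arange(n)
    # C[k, m] = cos(pi * (2m + 1) * k / (2n))
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(n)       # DC row scaling for orthonormality
    C[1:, :] *= np.sqrt(2.0 / n)      # AC row scaling
    return C

def dct2_features(img, keep=8):
    """2-D DCT of an image block; keep the low-frequency keep x keep
    corner as a compact feature vector (discarding the DC term would
    additionally remove overall-luminance dependence)."""
    C = dct_matrix(img.shape[0])
    D = dct_matrix(img.shape[1])
    coeffs = C @ img @ D.T            # separable 2-D transform
    return coeffs[:keep, :keep].ravel()

# Toy 16x16 "image": a smooth separable window stands in for a face patch.
img = np.outer(np.hanning(16), np.hanning(16))
feat = dct2_features(img, keep=4)
print(feat.shape)  # (16,)
```

Because the 2-D DCT is separable, it costs two small matrix products per block, which is the source of the reduced computational complexity the abstract mentions relative to learned transforms such as the Karhunen-Loeve transform.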

Original language: English
Title of host publication: IEEE International Conference on Image Processing
Place of publication: Los Alamitos, CA, United States
Publisher: IEEE Comp Soc
Pages: 336-339
Number of pages: 4
Volume: 3
Publication status: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 International Conference on Image Processing. Part 2 (of 3) - Santa Barbara, CA, USA
Duration: 1997 Oct 26 - 1997 Oct 29



ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Electrical and Electronic Engineering

Cite this

Satonaka, T., Baba, T., Otsuki, T., Chikamura, T., & Meng, T. H. (1997). Object recognition with luminance, rotation and location invariance. In IEEE International Conference on Image Processing (Vol. 3, pp. 336-339). Los Alamitos, CA, United States: IEEE Comp Soc.

