Gender classification based on integration of multiple classifiers using various features of facial and neck images

Kazuya Ueki, Tetsunori Kobayashi

    Research output: Contribution to journal › Article

    Abstract

    To reduce the error rate in gender classification, we propose an integration framework that uses neck images in addition to conventional facial images. First, each image is separated into facial and neck regions, and features are extracted from the monochrome, color, and edge images of both regions. Second, we train a Support Vector Machine (SVM) to classify gender from each individual feature. Finally, we reclassify gender by treating the six signed distances from the optimal separating hyperplanes as a 6-dimensional vector. Experimental results show a 28.4% relative reduction in error over the baseline monochrome facial image approach, which had previously been considered the most accurate.
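    The sketch below illustrates the two-stage integration scheme described in the abstract: one SVM per feature type, followed by a second SVM over the six hyperplane distances. It is a minimal illustration using scikit-learn, not the authors' implementation; the feature names, kernels, and the use of training-set distances for the second stage are assumptions, and feature extraction is assumed to happen elsewhere.

    ```python
    # Hypothetical sketch of the two-stage integration framework.
    # `features_train[name]` is an (n_samples, n_dims) array for one of the
    # six feature types; all names and kernel choices are illustrative.
    import numpy as np
    from sklearn.svm import SVC

    FEATURE_TYPES = [
        "face_mono", "face_color", "face_edge",
        "neck_mono", "neck_color", "neck_edge",
    ]

    def train_integrated_classifier(features_train, y_train):
        """Train one SVM per feature type, then a second SVM on the
        6-dimensional vector of signed distances to the hyperplanes."""
        first_stage = {}
        distances = []
        for name in FEATURE_TYPES:
            clf = SVC(kernel="rbf")  # kernel choice is an assumption
            clf.fit(features_train[name], y_train)
            first_stage[name] = clf
            # Signed distance of each sample from the separating hyperplane.
            distances.append(clf.decision_function(features_train[name]))
        stacked = np.column_stack(distances)  # shape: (n_samples, 6)
        # Second-stage classifier over the 6-dimensional distance vector.
        # (The paper's actual training protocol for this stage may differ,
        # e.g. it may use held-out data rather than the training set.)
        second_stage = SVC(kernel="linear")
        second_stage.fit(stacked, y_train)
        return first_stage, second_stage

    def predict_gender(first_stage, second_stage, features_test):
        distances = [first_stage[name].decision_function(features_test[name])
                     for name in FEATURE_TYPES]
        return second_stage.predict(np.column_stack(distances))
    ```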

    Original language: English
    Pages (from-to): 1803-1809
    Number of pages: 7
    Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
    Volume: 61
    Issue number: 12
    Publication status: Published - 2007 Dec

    Keywords

    • Feature extraction
    • Gender classification
    • Image processing
    • Pattern recognition
    • SVM

    ASJC Scopus subject areas

    • Electronic, Optical and Magnetic Materials
    • Computer Vision and Pattern Recognition