Automated feature extraction of face image and its applications

Seiji Kobayashi, Shuji Hashimoto

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    16 Citations (Scopus)

    Abstract

    In this paper, we describe an algorithm for automated face area segmentation and facial feature extraction from input images with unconstrained backgrounds. The extracted feature points around the eyes, mouth, nose and facial contours are used to modify the facial images. The modified images are stored in frame memory, and a human speaking scene is generated by continually switching frames according to the input text or speech sound. When a speaking voice is input, its vowels are recognised and the corresponding frames are recalled. This system can be applied not only to media conversion but also to human-machine interfaces.
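    The abstract describes a frame-memory approach: pre-modified face frames are stored and then recalled per recognised vowel to animate a speaking scene. The sketch below illustrates only that recall step. It is a minimal Python illustration under assumed names (FRAME_MEMORY, recognise_vowel, speaking_scene and the vowel-to-frame mapping are hypothetical, not the authors' implementation); the paper's actual segmentation, feature extraction and vowel recognition are not reproduced here.

    from typing import Dict, List

    # Hypothetical frame memory: each recognised vowel maps to a pre-modified
    # mouth-shape frame generated from the extracted facial feature points.
    FRAME_MEMORY: Dict[str, int] = {
        "a": 0,  # open mouth
        "i": 1,  # spread lips
        "u": 2,  # rounded lips
        "e": 3,
        "o": 4,
        "-": 5,  # neutral / closed mouth between vowels
    }

    def recognise_vowel(sound_chunk: bytes) -> str:
        """Placeholder for the vowel recogniser; returns a vowel label."""
        # No real speech recognition here: the sketch always returns the
        # neutral label so it stays self-contained and runnable.
        return "-"

    def speaking_scene(sound_chunks: List[bytes]) -> List[int]:
        """Recall one stored frame index per recognised vowel."""
        return [FRAME_MEMORY[recognise_vowel(chunk)] for chunk in sound_chunks]

    if __name__ == "__main__":
        # With no real audio, every chunk falls back to the neutral frame.
        print(speaking_scene([b"", b"", b""]))  # -> [5, 5, 5]

    A real system would replace recognise_vowel with a classifier over short audio windows and play the recalled frames back at the input speech rate.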

    Original language: English
    Title of host publication: Robot and Human Communication - Proceedings of the IEEE International Workshop
    Pages: 164-169
    Number of pages: 6
    Publication status: Published - 1995
    Event: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN - Tokyo, Japan
    Duration: 1995 Jul 5 → 1995 Jul 7

    Other

    Other: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN
    City: Tokyo, Japan
    Period: 95/7/5 → 95/7/7

    Fingerprint

    Speech recognition
    Feature extraction
    Acoustic waves
    Data storage equipment

    ASJC Scopus subject areas

    • Hardware and Architecture
    • Software

    Cite this

    Kobayashi, S., & Hashimoto, S. (1995). Automated feature extraction of face image and its applications. In Robot and Human Communication - Proceedings of the IEEE International Workshop (pp. 164-169).

    Automated feature extraction of face image and its applications. / Kobayashi, Seiji; Hashimoto, Shuji.

    Robot and Human Communication - Proceedings of the IEEE International Workshop. 1995. p. 164-169.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Kobayashi, S & Hashimoto, S 1995, Automated feature extraction of face image and its applications. in Robot and Human Communication - Proceedings of the IEEE International Workshop. pp. 164-169, Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN, Tokyo, Japan, 95/7/5.
    Kobayashi S, Hashimoto S. Automated feature extraction of face image and its applications. In Robot and Human Communication - Proceedings of the IEEE International Workshop. 1995. p. 164-169
    Kobayashi, Seiji ; Hashimoto, Shuji. / Automated feature extraction of face image and its applications. Robot and Human Communication - Proceedings of the IEEE International Workshop. 1995. pp. 164-169
    @inproceedings{9863ec01348f41768d6d0ab2cf5531b6,
    title = "Automated feature extraction of face image and its applications",
    abstract = "In this paper, we describe the algorithm of automated face area segmentation and facial feature extraction from input images with free backgrounds. The extracted feature points around the eyes, mouth, nose and facial contours are used for modifying facial images. Modified images are stored in the frame memory, and the human speaking scene is generated by continually changing the frames according to input text or speech sound. When speaking voice is input, the vowels are recognised and the corresponding frames are recalled out. This system can be applied not only to the media conversion but also to human-machine interface.",
    author = "Seiji Kobayashi and Shuji Hashimoto",
    year = "1995",
    language = "English",
    pages = "164--169",
    booktitle = "Robot and Human Communication - Proceedings of the IEEE International Workshop",

    }

    TY - GEN

    T1 - Automated feature extraction of face image and its applications

    AU - Kobayashi, Seiji

    AU - Hashimoto, Shuji

    PY - 1995

    Y1 - 1995

    N2 - In this paper, we describe an algorithm for automated face area segmentation and facial feature extraction from input images with unconstrained backgrounds. The extracted feature points around the eyes, mouth, nose and facial contours are used to modify the facial images. The modified images are stored in frame memory, and a human speaking scene is generated by continually switching frames according to the input text or speech sound. When a speaking voice is input, its vowels are recognised and the corresponding frames are recalled. This system can be applied not only to media conversion but also to human-machine interfaces.

    AB - In this paper, we describe an algorithm for automated face area segmentation and facial feature extraction from input images with unconstrained backgrounds. The extracted feature points around the eyes, mouth, nose and facial contours are used to modify the facial images. The modified images are stored in frame memory, and a human speaking scene is generated by continually switching frames according to the input text or speech sound. When a speaking voice is input, its vowels are recognised and the corresponding frames are recalled. This system can be applied not only to media conversion but also to human-machine interfaces.

    UR - http://www.scopus.com/inward/record.url?scp=0029497808&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=0029497808&partnerID=8YFLogxK

    M3 - Conference contribution

    AN - SCOPUS:0029497808

    SP - 164

    EP - 169

    BT - Robot and Human Communication - Proceedings of the IEEE International Workshop

    ER -