RSGAN: Face swapping and editing using face and hair representation in latent spaces

Ryota Natsume, Tatsuya Yatagawa, Shigeo Morishima

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    This abstract introduces a generative neural network for face swapping and editing face images. We refer to this network as the "region-separative generative adversarial network (RSGAN)". In existing deep generative models such as the variational autoencoder (VAE) and the generative adversarial network (GAN), the training data must represent what the generative models synthesize. For example, image inpainting is achieved by training on images with and without holes. However, it is difficult or even impossible to prepare a dataset which includes face images both before and after face swapping, because the faces of real people cannot be swapped without surgical operations. We tackle this problem by training the network so that it synthesizes a natural face image from an arbitrary pair of face and hair appearances. In addition to face swapping, the proposed network can be applied to other editing applications, such as visual attribute editing and random face-part synthesis.
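    The latent-swapping mechanism described above can be sketched in plain NumPy. Here random linear maps stand in for RSGAN's trained convolutional encoders and generator; every name and dimension below is illustrative, not taken from the authors' code. The point is only the structure: two region-specific encoders produce separate face and hair latents, and the generator consumes any pairing of them.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions (not from the paper): a flattened "image"
    # of size D_IMG and per-region latent codes of size D_LAT.
    D_IMG, D_LAT = 64, 8

    # Random linear maps as stand-ins for the trained networks.
    W_face = rng.standard_normal((D_LAT, D_IMG))   # face-region encoder
    W_hair = rng.standard_normal((D_LAT, D_IMG))   # hair-region encoder
    W_gen = rng.standard_normal((D_IMG, 2 * D_LAT))  # generator

    def encode_face(x):
        # Map an image to its face-region latent code.
        return W_face @ x

    def encode_hair(x):
        # Map an image to its hair-region latent code.
        return W_hair @ x

    def generate(z_face, z_hair):
        # Synthesize an image from any pair of face and hair latents.
        return W_gen @ np.concatenate([z_face, z_hair])

    img_a = rng.standard_normal(D_IMG)  # provides the face
    img_b = rng.standard_normal(D_IMG)  # provides the hair

    # Face swapping: combine A's face latent with B's hair latent.
    swapped = generate(encode_face(img_a), encode_hair(img_b))

    # Random face-part synthesis: sample one latent instead of encoding it.
    random_hair = generate(encode_face(img_a), rng.standard_normal(D_LAT))

    print(swapped.shape)  # (64,)
    ```

    Because the two regions live in separate latent spaces, face swapping needs no before/after training pairs: the generator only ever has to learn to produce a natural image from an arbitrary (face, hair) latent pair, which sidesteps the impossible-dataset problem the abstract describes.
    
    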

    Original language: English
    Title of host publication: ACM SIGGRAPH 2018 Posters, SIGGRAPH 2018
    Publisher: Association for Computing Machinery, Inc
    ISBN (Print): 9781450358170
    DOIs: https://doi.org/10.1145/3230744.3230818
    Publication status: Published - 2018 Aug 12
    Event: ACM SIGGRAPH 2018 Posters - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2018 - Vancouver, Canada
    Duration: 2018 Aug 12 - 2018 Aug 16



    Keywords

    • Face
    • Face swapping
    • Image editing
    • Portrait

    ASJC Scopus subject areas

    • Software
    • Computer Graphics and Computer-Aided Design

    Cite this

    Natsume, R., Yatagawa, T., & Morishima, S. (2018). RSGAN: Face swapping and editing using face and hair representation in latent spaces. In ACM SIGGRAPH 2018 Posters, SIGGRAPH 2018 (a69). Association for Computing Machinery, Inc. https://doi.org/10.1145/3230744.3230818
