Abstract
This paper presents FSNet, a deep generative model for image-based face swapping. Traditional face-swapping methods are based on three-dimensional morphable models (3DMMs): facial textures are exchanged between the three-dimensional (3D) geometries estimated from two images of different individuals. However, estimating 3D geometries and lighting conditions with 3DMMs remains a difficult task. Instead of facial textures, we represent the face region with a latent variable computed by the proposed deep neural network (DNN). The proposed DNN synthesizes a face-swapped image from the latent variable of the face region in one image and the non-face region of the other image. The proposed method requires no 3DMM fitting; it performs face swapping simply by feeding two face images to the proposed network. Consequently, our DNN-based face swapping performs better than previous approaches on challenging inputs with different face orientations and lighting conditions. Through several experiments, we demonstrated that the proposed method performs face swapping more stably than the state-of-the-art method, and that its results are comparable to those of that method.
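To make the data flow described in the abstract concrete, the following is a minimal sketch in PyTorch (an assumed framework; the paper does not specify one). The module names (`FaceEncoder`, `SwapGenerator`), layer sizes, and the pre-cropped/pre-masked inputs are hypothetical placeholders that illustrate only the described pipeline (encode the face region of one image into a latent variable, then synthesize a swapped result from that latent and the non-face region of another image); this is not the authors' actual FSNet architecture.

```python
# Illustrative sketch only: hypothetical modules, not the published FSNet model.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Maps the face region of an image to a latent variable z_face."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, face_region):
        return self.net(face_region)

class SwapGenerator(nn.Module):
    """Synthesizes a face-swapped image from z_face and a non-face image."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.enc = nn.Conv2d(3, 64, 3, padding=1)
        self.fuse = nn.Conv2d(64 + latent_dim, 64, 3, padding=1)
        self.dec = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, z_face, non_face_image):
        feat = torch.relu(self.enc(non_face_image))
        b, _, h, w = feat.shape
        # Broadcast the face latent over the spatial grid and fuse it with
        # features of the non-face (background/hair) image.
        z_map = z_face.view(b, -1, 1, 1).expand(b, z_face.size(1), h, w)
        fused = torch.relu(self.fuse(torch.cat([feat, z_map], dim=1)))
        return torch.sigmoid(self.dec(fused))

# Usage: swap the face of image A into the non-face context of image B.
encoder, generator = FaceEncoder(), SwapGenerator()
face_a = torch.rand(1, 3, 128, 128)      # face region cropped from image A
non_face_b = torch.rand(1, 3, 128, 128)  # image B with its face region masked out
swapped = generator(encoder(face_a), non_face_b)
print(swapped.shape)  # torch.Size([1, 3, 128, 128])
```

Training objectives, face-region masking, and identity-preservation losses are omitted; the sketch only traces how a face latent and a non-face image combine into a single output image.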
Original language | English |
---|---|
Title of host publication | Computer Vision – ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers |
Editors | Greg Mori, Hongdong Li, C.V. Jawahar, Konrad Schindler |
Publisher | Springer-Verlag |
Pages | 117-132 |
Number of pages | 16 |
ISBN (Print) | 9783030208752 |
DOIs | 10.1007/978-3-030-20876-9_8 |
Publication status | Published - 2019 Jan 1 |
Event | 14th Asian Conference on Computer Vision, ACCV 2018 - Perth, Australia; Duration: 2018 Dec 2 → 2018 Dec 6 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 11366 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 14th Asian Conference on Computer Vision, ACCV 2018 |
---|---|
Country | Australia |
City | Perth |
Period | 2018 Dec 2 → 2018 Dec 6 |
Keywords
- Convolutional neural networks
- Deep generative models
- Face swapping
ASJC Scopus subject areas
- Theoretical Computer Science
- Computer Science(all)
Cite this
FSNet: An Identity-Aware Generative Model for Image-Based Face Swapping. / Natsume, Ryota; Yatagawa, Tatsuya; Morishima, Shigeo.
Computer Vision – ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers. ed. / Greg Mori; Hongdong Li; C.V. Jawahar; Konrad Schindler. Springer-Verlag, 2019. p. 117-132 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11366 LNCS).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - FSNet
T2 - An Identity-Aware Generative Model for Image-Based Face Swapping
AU - Natsume, Ryota
AU - Yatagawa, Tatsuya
AU - Morishima, Shigeo
PY - 2019/1/1
Y1 - 2019/1/1
AB - This paper presents FSNet, a deep generative model for image-based face swapping. Traditional face-swapping methods are based on three-dimensional morphable models (3DMMs): facial textures are exchanged between the three-dimensional (3D) geometries estimated from two images of different individuals. However, estimating 3D geometries and lighting conditions with 3DMMs remains a difficult task. Instead of facial textures, we represent the face region with a latent variable computed by the proposed deep neural network (DNN). The proposed DNN synthesizes a face-swapped image from the latent variable of the face region in one image and the non-face region of the other image. The proposed method requires no 3DMM fitting; it performs face swapping simply by feeding two face images to the proposed network. Consequently, our DNN-based face swapping performs better than previous approaches on challenging inputs with different face orientations and lighting conditions. Through several experiments, we demonstrated that the proposed method performs face swapping more stably than the state-of-the-art method, and that its results are comparable to those of that method.
KW - Convolutional neural networks
KW - Deep generative models
KW - Face swapping
UR - http://www.scopus.com/inward/record.url?scp=85066959271&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85066959271&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-20876-9_8
DO - 10.1007/978-3-030-20876-9_8
M3 - Conference contribution
AN - SCOPUS:85066959271
SN - 9783030208752
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 117
EP - 132
BT - Computer Vision – ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Mori, Greg
A2 - Li, Hongdong
A2 - Jawahar, C.V.
A2 - Schindler, Konrad
PB - Springer-Verlag
ER -