Facial Attribute Editing Using Generative Adversarial Network

Authors

  • Mukunda Upadhyay, Department of Electronics and Computer Engineering, IOE, Pashchimanchal Campus, Tribhuvan University, Nepal
  • Badri Raj Lamichhane, Department of Electronics and Computer Engineering, IOE, Pashchimanchal Campus, Tribhuvan University, Nepal
  • Bal Krishna Nyaupane, Department of Electronics and Computer Engineering, IOE, Pashchimanchal Campus, Tribhuvan University, Nepal

DOI:

https://doi.org/10.3126/jes2.v2i1.60394

Keywords:

Adversarial Learning, CGAN, Difference Attribute Vector, Facial Attribute Editing, Feature Transfer Unit

Abstract

Facial attribute editing has immense applications in today’s digital world, including virtual makeup, face generation for the animation and gaming industries, face image enhancement for social media, and improving face recognition systems. The task can be performed manually or automatically. Manual facial attribute editing, done with software such as Adobe Photoshop, is a tedious and time-consuming process that requires an expert. In contrast, automatic facial attribute editing, which can complete an edit within a few seconds, is achievable using encoder-decoder structures and deep learning-based generative models such as conditional Generative Adversarial Networks. In our work, we use difference attribute vectors as conditional information to generate the desired target images, and the encoder-decoder structure incorporates feature transfer units to select and modify encoder features. These encoder features are then concatenated with the decoder features to strengthen the attribute editing ability of the model. We apply a reconstruction loss to preserve all details of a face image other than the target attributes, an adversarial loss for visually realistic editing, and an attribute manipulation loss to ensure that the generated image possesses the correct attributes. Furthermore, we adopt the WGAN-GP loss formulation to improve training stability and reduce the mode collapse problem that often occurs in GANs. Experiments on the CelebA dataset show that this method produces visually realistic attribute-edited face images with a PSNR/SSIM of 31.7/0.95 and an average attribute editing accuracy of 89.23% across 13 facial attributes: Bangs, Mustache, Bald, Bushy Eyebrows, Blond Hair, Eyeglasses, Black Hair, Brown Hair, Mouth Slightly Open, Male, No Beard, Pale Skin, and Young.
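The combination of objectives described in the abstract (difference attribute vector as conditional input, reconstruction loss, adversarial loss with WGAN-GP, and attribute manipulation loss) can be summarized in a minimal PyTorch-style sketch. The generator and discriminator interfaces, the two-headed discriminator, and the loss weights (lambda_rec, lambda_cls) below are illustrative assumptions, not the exact implementation or values reported in the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical interfaces: generator(image, diff_attrs) -> edited image;
# discriminator(image) -> (adversarial score, attribute logits).
# Loss weights are placeholders, not values from the paper.

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP penalty on samples interpolated between real and generated images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out, _ = discriminator(mixed)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=mixed,
                                create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

def generator_losses(generator, discriminator, x, src_attrs, tgt_attrs,
                     lambda_rec=100.0, lambda_cls=10.0):
    # Difference attribute vector: non-zero only for attributes to be changed.
    diff_attrs = tgt_attrs - src_attrs

    # Editing pass: adversarial realism + correct target attributes.
    x_edit = generator(x, diff_attrs)
    d_adv, d_cls = discriminator(x_edit)
    loss_adv = -d_adv.mean()                                          # WGAN generator term
    loss_cls = F.binary_cross_entropy_with_logits(d_cls, tgt_attrs)   # attribute manipulation loss

    # Reconstruction pass: a zero difference vector should return the input image,
    # preserving all non-target details.
    x_rec = generator(x, torch.zeros_like(diff_attrs))
    loss_rec = F.l1_loss(x_rec, x)

    return loss_adv + lambda_cls * loss_cls + lambda_rec * loss_rec
```

The discriminator is trained with the corresponding WGAN critic loss plus `gradient_penalty` and an attribute classification loss on real images; the sketch above only shows the generator side.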

Published

2023-12-06

How to Cite

Upadhyay, M., Lamichhane, B. R., & Nyaupane, B. K. (2023). Facial Attribute Editing Using Generative Adversarial Network. Journal of Engineering and Sciences, 2(1), 57–63. https://doi.org/10.3126/jes2.v2i1.60394

Issue

Section

Articles