Stabilizing and Improving Training of Generative Adversarial Networks Through Identity Blocks and Modified Loss Function

Fathallah, Mohamed ; Sakr, Mohamed ; Eletriby, Sherif

Access, IEEE, 2023, Vol.11, p.43276-43285

IEEE

No full text available

  • Title:
    Stabilizing and Improving Training of Generative Adversarial Networks Through Identity Blocks and Modified Loss Function
  • Author: Fathallah, Mohamed ; Sakr, Mohamed ; Eletriby, Sherif
  • Subjects: Data models ; deep learning ; Generative adversarial network ; Generative adversarial networks ; Generators ; identity block ; label smoothing ; mode collapse ; Optimization ; Smoothing methods ; Training
  • Is part of: Access, IEEE, 2023, Vol.11, p.43276-43285
  • Description: Generative adversarial networks (GANs) are a powerful tool for synthesizing realistic images, but they can be difficult to train and are prone to instability and mode collapse. This paper proposes a new model, the Identity Generative Adversarial Network (IGAN), that addresses these issues through three modifications to the baseline deep convolutional generative adversarial network (DCGAN). The first change adds a non-linear identity block to the architecture, making it easier for the model to fit complex data and reducing training time. The second change smooths the standard GAN objective by combining a modified loss function with label smoothing. The third change uses minibatch training to let the model draw on other examples from the same minibatch as side information, improving the quality and variety of generated images. Together, these changes stabilize the training process and improve the model's performance. The GAN models are compared using the inception score (IS) and the Fréchet inception distance (FID), which are widely used metrics for evaluating the quality and diversity of generated images. The effectiveness of the approach was tested by comparing an IGAN model with other GAN models on the CelebA and stacked MNIST datasets. Results show that IGAN outperforms all the other models, achieving an IS of 13.95 and an FID of 43.71 after training for 200 epochs. In addition to demonstrating IGAN's performance gains, the stability, diversity, and fidelity of the models were investigated. The results showed that IGAN converged to the distribution of the real data more quickly and produced more stable, higher-quality images. This suggests that IGAN is a promising approach for improving the training and performance of GANs, with a range of potential applications in image synthesis and other areas. (Illustrative code sketches of the three modifications and of the IS/FID evaluation follow this record.)
  • Publisher: IEEE
  • Language: English
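
The abstract does not spell out the layer composition of the non-linear identity block, but the name suggests a residual-style skip connection. A minimal PyTorch sketch, assuming a standard conv-norm-activation residual pattern; the class name, channel handling, and kernel sizes are illustrative choices, not values from the paper:

```python
import torch
import torch.nn as nn

class IdentityBlock(nn.Module):
    """Residual-style non-linear identity block (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # Two conv-norm stages form the residual branch F(x).
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # output = activation(F(x) + x): the skip path lets the block
        # collapse to the identity mapping when F(x) is near zero.
        return self.act(self.body(x) + x)
```

A block like this could sit between the upsampling stages of the DCGAN generator; because the skip path preserves its input, the block can default to a no-op, which generally eases the optimization of deeper backbones and is consistent with the abstract's claim of faster training.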
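The second modification pairs a modified loss with label smoothing. The abstract gives no formula, so the sketch below shows only the widely used one-sided label smoothing applied on top of the non-saturating BCE objective; the smoothing value 0.9 is a conventional choice, not a figure from the paper:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(real_logits, fake_logits, smooth: float = 0.9):
    # One-sided label smoothing: real targets become 0.9 instead of 1.0,
    # so the discriminator is never pushed to full confidence on real data.
    real_targets = torch.full_like(real_logits, smooth)
    fake_targets = torch.zeros_like(fake_logits)
    return bce(real_logits, real_targets) + bce(fake_logits, fake_targets)

def generator_loss(fake_logits):
    # Non-saturating generator objective: drive D(G(z)) toward "real".
    return bce(fake_logits, torch.ones_like(fake_logits))
```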
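Using other examples from the same minibatch as side information is the idea behind minibatch discrimination (Salimans et al., 2016). Whether IGAN uses this exact layer is not stated in the abstract; the sketch below follows the standard formulation, with the feature dimensions left as illustrative parameters:

```python
import torch
import torch.nn as nn

class MinibatchDiscrimination(nn.Module):
    """Minibatch discrimination layer (after Salimans et al., 2016)."""

    def __init__(self, in_features: int, out_features: int, kernel_dims: int):
        super().__init__()
        # Learned tensor T maps each sample to out_features kernels of
        # size kernel_dims, which are compared across the minibatch.
        self.T = nn.Parameter(torch.randn(in_features, out_features, kernel_dims) * 0.1)
        self.out_features = out_features
        self.kernel_dims = kernel_dims

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, in_features) -> m: (N, out_features, kernel_dims)
        m = (x @ self.T.reshape(x.size(1), -1)).view(
            -1, self.out_features, self.kernel_dims
        )
        # Pairwise L1 distances between all samples in the batch.
        l1 = (m.unsqueeze(0) - m.unsqueeze(1)).abs().sum(dim=3)  # (N, N, A)
        # Similarity to the rest of the batch; subtract the self term exp(0)=1.
        o = torch.exp(-l1).sum(dim=1) - 1.0                      # (N, A)
        # Appended to the features as side information for the classifier head.
        return torch.cat([x, o], dim=1)                          # (N, in+A)
```

Because the appended statistics differ when the batch is full of near-duplicates versus diverse samples, the discriminator can penalize a collapsed generator, which matches the abstract's stated goal of improving the variety of generated images.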
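IS and FID are both computed from Inception-v3 activations. The paper does not name its evaluation tooling; a hypothetical sketch using the torchmetrics implementations of both metrics:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Both metrics expect uint8 images shaped (N, 3, H, W) in [0, 255]
# with the default normalize=False; random stand-in tensors shown here.
real_images = torch.randint(0, 256, (64, 3, 64, 64), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 64, 64), dtype=torch.uint8)

# feature=64 keeps the covariance estimate stable for a tiny sample;
# published FID numbers use feature=2048 over thousands of images.
fid = FrechetInceptionDistance(feature=64)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print("FID:", fid.compute().item())

inception = InceptionScore()
inception.update(fake_images)
is_mean, is_std = inception.compute()
print("IS:", is_mean.item(), "+/-", is_std.item())
```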
