Semantic Diversity Image Translation Based on Deep Feature Difference and Attention Mechanism

Pimenidis, Elias ; Angelov, Plamen ; Jayne, Chrisina ; Papaleonidas, Antonios ; Aydin, Mehmet

Artificial Neural Networks and Machine Learning - ICANN 2022, 2022, Vol.13531, p.261-272 [Peer-reviewed journal]

Switzerland: Springer

  • Subjects: Deep feature difference ; Image-to-image translation ; Semantic diversity
  • Description: To improve the semantic diversity and visual authenticity of image translation, this paper proposes a Generative Adversarial Network based on deep feature differences and an attention mechanism. The model employs a pre-trained image-classification network to extract features of the generated images at different levels and then computes perceptual similarities from these features. The perceptual similarities are introduced into adversarial learning to control differences in high-level semantic features. Furthermore, to ensure the quality of the translated image, the model uses a residual network structure with a channel-attention mechanism, strengthening attention on useful features according to the importance of each feature channel. In addition, the discriminator is applied not only to the generated images, to judge whether they are real or fake, but also to the reconstructed images. The main contributions are as follows: (1) Previous studies on the diversity of image translation mainly focus on low-level image features, such as hair color and skin color; this method instead targets semantic differences through deep features, with the aim of improving the semantic diversity of the translated styles. (2) The model adopts a channel-attention mechanism and a reconstructed-image discriminator, which not only highlight the feature details of the generated image but also keep the translated images as realistic as possible. Results on the CelebA-HQ dataset verify the superiority of the model in both visual quality and semantic diversity.
  • Related titles: Lecture Notes in Computer Science
  • Language: English
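The channel-attention mechanism described above reweights feature channels by their importance, in the spirit of squeeze-and-excitation. The following is a minimal illustrative sketch, not the authors' actual architecture: it uses NumPy with random placeholder weights (`w1`, `w2` and the `reduction` ratio are assumptions for illustration; in the paper's model these would be trained parameters inside a residual block).

```python
import numpy as np

def channel_attention(features, reduction=4):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    features: array of shape (channels, height, width).
    Weights are random placeholders; in a real model they are learned.
    """
    c, h, w = features.shape
    # Squeeze: global average pooling, one scalar per channel -> shape (c,)
    squeezed = features.mean(axis=(1, 2))
    # Excitation: a small two-layer bottleneck producing per-channel scores
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    hidden = np.maximum(squeezed @ w1, 0.0)        # ReLU
    scores = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid -> scores in (0, 1)
    # Scale: multiply each channel map by its attention score
    return features * scores[:, None, None]

feats = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feats)
print(out.shape)  # (8, 16, 16)
```

Because the sigmoid scores lie strictly between 0 and 1, each output channel is a damped copy of its input channel; channels deemed important are attenuated less, which is how useful features are emphasized relative to the rest.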
