Li B, Zhu Y, Wang Y, et al. AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation[J]. arXiv preprint arXiv:2102.12593, 2021.
Abstract: In this paper, we propose a novel framework to translate a portrait photo-face into an anime appearance. Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face. However, unlike typical translation tasks, such anime-face translation is challenging due to complex variations of appearances among anime-faces. Existing methods often fail to transfer the styles of reference anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated faces. We propose AniGAN, a novel GAN-based translator that synthesizes high-quality anime-faces. Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face, while preserving the global structure of the source photo-face. We propose a double-branch discriminator to learn both domain-specific distributions and domain-shared distributions, helping generate visually pleasing anime-faces and effectively mitigate artifacts. Extensive experiments qualitatively and quantitatively demonstrate the superiority of our method over state-of-the-art methods.
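The double-branch discriminator is the component most easily pictured in code: a shared convolutional trunk feeds one head per domain (learning the domain-specific distributions, photo vs. anime) plus a single head shared across domains (learning the domain-shared distribution). Below is a minimal PyTorch sketch of that idea; the class name, layer sizes, and domain-index convention are illustrative assumptions on my part, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DoubleBranchDiscriminator(nn.Module):
    """Illustrative sketch of a double-branch discriminator: a shared
    trunk feeds (a) per-domain heads that judge domain-specific realism
    and (b) one head shared by both domains. Layer sizes are made up."""
    def __init__(self, in_channels=3, base=64, num_domains=2):
        super().__init__()
        # Shared trunk: downsampling conv layers common to both branches.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Branch 1: one patch-level head per domain (domain-specific distributions).
        self.domain_specific = nn.ModuleList(
            [nn.Conv2d(base * 2, 1, 4, padding=1) for _ in range(num_domains)]
        )
        # Branch 2: a single head used for both domains (domain-shared distribution).
        self.domain_shared = nn.Conv2d(base * 2, 1, 4, padding=1)

    def forward(self, x, domain):
        h = self.trunk(x)
        # Return patch-level real/fake logits from both branches.
        return self.domain_specific[domain](h), self.domain_shared(h)

# Usage: score a generated anime-face (domain index 1 is a hypothetical convention).
if __name__ == "__main__":
    D = DoubleBranchDiscriminator()
    fake = torch.randn(1, 3, 64, 64)
    spec_logits, shared_logits = D(fake, domain=1)
    print(spec_logits.shape, shared_logits.shape)
```

The intuition behind the split: the domain-specific heads keep generated faces inside the anime distribution, while the shared head constrains properties common to photo-faces and anime-faces, which is what the abstract credits with mitigating artifacts.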