Magia Record Tieba — Followers: 28,245 · Posts: 1,209,486
  • 14 replies, 1 page
Li B, Zhu Y, Wang Y, et al. AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation. arXiv preprint arXiv:2102.12593, 2021.


via mobile Tieba · Floor 1 · 2021-12-02 22:15


    IP: Guangdong · via Android app · Floor 2 · 2021-12-02 22:50
      环子哥?
      ---- I don't think I need fleeting coldness, or warmth that expires and waits for no one.


      IP: Jiangsu · via Android app · Floor 3 · 2021-12-02 23:12
        Oh my mom (literally)


        IP: Guangdong · via Android app · Floor 4 · 2021-12-02 23:51
          Unsupervised anime ("frozen eel") face generation based on generative adversarial networks


          IP: Shanghai · via Android app · Floor 5 · 2021-12-03 16:08
            Your game's about to blow up 🔥()


            IP: Guangdong · via Android app · Floor 6 · 2021-12-04 00:50
              Which journal was it published in? Is there an abstract?


              via Android app · Floor 7 · 2021-12-04 08:50
                So it's actually a paper


                IP: Jiangsu · via Android app · Floor 8 · 2021-12-04 11:01
                  Looks like a preprint that hasn't been formally published yet


                  IP: Fujian · via Android app · Floor 9 · 2021-12-04 14:08
                    Your game's gonna blow up


                    IP: Shanghai · via Android app · Floor 10 · 2021-12-04 14:55
                      Post the abstract


                      IP: Jiangsu · via Android app · Floor 11 · 2021-12-04 20:33
                        Senpai, my juicer


                        IP: Shanxi · via Android app · Floor 12 · 2021-12-04 21:44
                          Here's the abstract:
                          In this paper, we propose a novel framework to translate a portrait photo-face into an anime appearance. Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face. However, unlike typical translation tasks, such anime-face translation is challenging due to complex variations of appearances among anime-faces. Existing methods often fail to transfer the styles of reference anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated faces. We propose AniGAN, a novel GAN-based translator that synthesizes high-quality anime-faces. Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face, while preserving the global structure of the source photo-face. We propose a double-branch discriminator to learn both domain-specific distributions and domain-shared distributions, helping generate visually pleasing anime-faces and effectively mitigate artifacts. Extensive experiments qualitatively and quantitatively demonstrate the superiority of our method over state-of-the-art methods.
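The abstract's "double-branch discriminator" (one branch for domain-specific distributions, one for domain-shared distributions) can be sketched roughly as below. This is NOT the authors' code: it is a toy NumPy stand-in with dense layers instead of convolutions, and all names (`DoubleBranchDiscriminator`, `w_photo`, `w_anime`, etc.) are made up for illustration. The point is the topology: a shared feature extractor feeding two scoring heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_linear(x, w, b):
    # Dense layer with ReLU, standing in for the shared conv backbone.
    return np.maximum(x @ w + b, 0.0)

class DoubleBranchDiscriminator:
    """Toy sketch: shared features -> (domain-specific head, domain-shared head)."""

    def __init__(self, in_dim=64, hid=32):
        # Shared feature extractor (hypothetical sizes).
        self.w_shared = rng.standard_normal((in_dim, hid)) * 0.1
        self.b_shared = np.zeros(hid)
        # Branch 1: one realness head per domain (photo vs. anime).
        self.w_photo = rng.standard_normal((hid, 1)) * 0.1
        self.w_anime = rng.standard_normal((hid, 1)) * 0.1
        # Branch 2: a single head shared across both domains.
        self.w_common = rng.standard_normal((hid, 1)) * 0.1

    def forward(self, x, domain):
        h = relu_linear(x, self.w_shared, self.b_shared)
        w_dom = self.w_photo if domain == "photo" else self.w_anime
        specific = h @ w_dom        # domain-specific realness score
        shared = h @ self.w_common  # domain-shared realness score
        return specific, shared

d = DoubleBranchDiscriminator()
batch = rng.standard_normal((4, 64))  # 4 fake "images" as flat vectors
specific, shared = d.forward(batch, "anime")
print(specific.shape, shared.shape)  # (4, 1) (4, 1)
```

Per the abstract, the domain-specific branch pushes generated faces toward each domain's own distribution while the shared branch regularizes across domains, which is what helps mitigate artifacts; the real loss functions and conv architecture are in the paper, not here.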


                          IP: Fujian · via Android app · Floor 13 · 2021-12-05 16:01
                            "Novel framework", "GAN"


                            via iPhone app · Floor 14 · 2021-12-07 09:01