Subject: Registration Notice for the 25th Academic Lunch Seminar
Speaker: Xiao Taihong, master's student (class of 2015), Peking University
Time: 2017-11-01, 12:00-13:30
Venue: Room 1560, Science Building No. 1
Title: GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data
Abstract: Object transfiguration generates diverse novel images by replacing an object in a given image with a particular object from an exemplar image. It offers fine-grained control over image generation and can perform tasks like "put exactly those eyeglasses from image A onto the nose of the person in image B". However, object transfiguration often requires disentangling objects from backgrounds in feature space, which is challenging and previously required learning from paired training data: two images sharing the same background but containing different objects. In this work, we propose a deterministic generative model that learns disentangled feature subspaces by adversarial training. The training data are two unpaired sets of images: a positive set containing images that have some kind of object, and a negative set of images that do not. The model encodes an image into two complementary features: one for the object and the other for the background. The object and background features from a "positive" parent and a "negative" parent can be recombined to produce four children, of which two are exact reproductions and the other two are crossbreeds. Minimizing the adversarial loss between crossbreeds and parents ensures that the crossbreeds inherit the specific objects of the parents. On the other hand, minimizing the reconstruction loss between reproductions and parents ensures the completeness of the features. Overall, the object and background features form a complete and disentangled representation of images. Moreover, the object features are found to constitute a multidimensional attribute subspace. Experiments on the CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real-world data, generating images with specified eyeglasses, smiles, hair styles, and lighting conditions.
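The parent/children recombination described in the abstract can be sketched in a few lines. The following is a toy NumPy illustration, not the actual GeneGAN implementation: the encoder/decoder are stand-in linear maps, the split point between object and background dimensions is arbitrary, and all names (`crossbreed`, `OBJ`, etc.) are hypothetical. It only shows how swapping the object part of two parents' features yields two reproductions and two crossbreeds, and how a reproduction supports a reconstruction loss against its parent.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, OBJ = 8, 3  # total feature dims; first OBJ dims play the role of the object feature

def encode(x, W):
    """Toy linear 'encoder': image vector -> feature vector."""
    return W @ x

def decode(f, W):
    """Toy linear 'decoder': exact inverse of the encoder in this sketch."""
    return np.linalg.solve(W, f)

def crossbreed(fa, fb):
    """Recombine object/background parts of two parents into four children."""
    oa, ba = fa[:OBJ], fa[OBJ:]   # positive parent: object + background
    ob, bb = fb[:OBJ], fb[OBJ:]   # negative parent: null object + background
    return (np.concatenate([oa, ba]),   # reproduction of parent A
            np.concatenate([ob, bb]),   # reproduction of parent B
            np.concatenate([ob, ba]),   # crossbreed: A's background, B's (null) object
            np.concatenate([oa, bb]))   # crossbreed: B's background, A's object

W = rng.standard_normal((DIM, DIM))
x_pos = rng.standard_normal(DIM)        # a "positive" image (has the object)
x_neg = rng.standard_normal(DIM)        # a "negative" image

fa, fb = encode(x_pos, W), encode(x_neg, W)
fb[:OBJ] = 0.0                          # negative parent carries a null object code

rep_a, rep_b, cross_null, cross_obj = crossbreed(fa, fb)

# Reconstruction loss between a reproduction child and its parent;
# in the real model this is minimized jointly with an adversarial loss
# that pushes the crossbreeds toward the positive/negative image sets.
recon_loss = np.abs(decode(rep_a, W) - x_pos).mean()
```

In the full model the crossbreed with the object feature would be judged by a discriminator against the positive set, and the one with the null object against the negative set, which is what forces the two feature subspaces to disentangle.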