Related Researcher

Kim, Kwang In (김광인)
Machine Learning and Vision Lab.

Detailed Information


Improving shape deformation in unsupervised image-to-image translation

Author(s)
Gokaslan, Aaron; Ramanujan, Vivek; Ritchie, Daniel; Kim, Kwang In; Tompkin, James
Issued Date
2018-09-08
DOI
10.1007/978-3-030-01258-8_40
URI
https://scholarworks.unist.ac.kr/handle/201301/80948
Fulltext
https://link.springer.com/chapter/10.1007%2F978-3-030-01258-8_40
Citation
European Conference on Computer Vision, pp. 662-678
Abstract
Unsupervised image-to-image translation techniques are able to map local texture between two domains, but they are typically unsuccessful when the domains require larger shape change. Inspired by semantic segmentation, we introduce a discriminator with dilated convolutions that is able to use information from across the entire image to train a more context-aware generator. This is coupled with a multi-scale perceptual loss that is better able to represent error in the underlying shape of objects. We demonstrate that this design is more capable of representing shape deformation in a challenging toy dataset, plus in complex mappings with significant dataset variation between humans, dolls, and anime faces, and between cats and dogs. © Springer Nature Switzerland AG 2018.
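The key idea in the abstract is that dilated convolutions let the discriminator aggregate context from across the whole image without a deep stack of layers. As an illustration only (not the paper's model, which is a 2-D CNN discriminator), the pure-Python sketch below shows a 1-D dilated convolution and how stacking layers with growing dilation rates expands the receptive field; the function names `dilated_conv1d` and `receptive_field` are hypothetical helpers for this example.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution whose kernel taps are spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # input span covered by one kernel application
    return [
        sum(kernel[j] * signal[start + j * dilation] for j in range(k))
        for start in range(len(signal) - span + 1)
    ]

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated conv layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# With dilation 1 the kernel sees adjacent samples; with dilation 2 it
# skips every other sample, covering a wider span per layer.
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1, 1], dilation=1))  # [6, 9, 12]
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1, 1], dilation=2))  # [9]

# Exponentially growing dilations (1, 2, 4) with 3-tap kernels yield a
# receptive field of 15 inputs after only three layers.
print(receptive_field(3, [1, 2, 4]))  # 15
```

This exponential growth in context per layer is what allows a discriminator to judge global shape, not just local texture, which is the motivation the abstract gives for the design.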
Publisher
ECCV 2018
ISSN
0302-9743


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.