Esther: Extremely simple image translation through self-regularization

Author(s)
Yang, Chao; Kim, Taehwan; Wang, Ruizhe; Peng, Hao; Kuo, C.-C. Jay
Issued Date
2019-09
URI
https://scholarworks.unist.ac.kr/handle/201301/79312
Citation
British Machine Vision Conference
Abstract
Image translation between two domains is a class of problems where the goal is to learn the mapping from an input image in the source domain to an output image in the target domain. It has important applications such as data augmentation, domain adaptation, and unsupervised training. When paired training data are not accessible, the mapping between the two domains is highly under-constrained and we are faced with an ill-posed task. Existing approaches tackling this challenge usually make assumptions and introduce prior constraints. For example, CycleGAN [59] assumes cycle-consistency while UNIT [31] assumes shared latent-space between the two domains. We argue that none of these assumptions explicitly guarantee that the learned mapping is the desired one. We, taking a step back, observe that most image translations are based on the intuitive requirement that the translated image needs to be perceptually similar to the original image and also appear to come from the new domain. On the basis of such observation, we propose an extremely simple yet effective image translation approach, which consists of a single generator and is trained with a self-regularization term and an adversarial term. We further propose an adaptive method to search for the best weight between the two terms. Extensive experiments and evaluations show that our model is significantly more cost-effective and can be trained under budget, yet easily achieves better performance than other methods on a broad range of tasks and applications.
Publisher
BMVA Press
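
The abstract describes a single generator trained with a self-regularization term (the translated image should stay perceptually close to the input) plus an adversarial term (the translated image should look like the target domain), with an adaptively chosen weight between the two. The following is a minimal illustrative sketch of such an objective, not the authors' code: the generator `G`, discriminator `D`, fixed feature extractor `feat`, and the weight `lam` are all assumed placeholders, and the adaptive weight search from the paper is omitted.

```python
# Hedged sketch of a self-regularized translation objective (PyTorch).
# G, D, and feat are assumed nn.Module instances; lam is a scalar weight.
import torch
import torch.nn.functional as F

def generator_loss(G, D, feat, x_src, lam):
    """x_src: batch of source-domain images."""
    y_fake = G(x_src)

    # Self-regularization: keep the translation perceptually similar to the
    # input, here an L1 distance in a fixed feature space `feat`.
    self_reg = F.l1_loss(feat(y_fake), feat(x_src))

    # Adversarial term: the translated image should be judged as coming
    # from the target domain by the discriminator D.
    logits = D(y_fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # `lam` balances the two terms; the paper proposes an adaptive search
    # for this weight, which is not reproduced here.
    return self_reg + lam * adv
```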
