Related Researcher

Yoo, Jaejun (유재준)
Lab. of Advanced Imaging Technology

Detailed Information


Rethinking the Truly Unsupervised Image-to-Image Translation

Author(s)
Baek, Kyungjune; Choi, Yunjey; Uh, Youngjung; Yoo, Jaejun; Shim, Hyunjung
Issued Date
2021-10-17
DOI
10.1109/ICCV48922.2021.01389
URI
https://scholarworks.unist.ac.kr/handle/201301/76877
Fulltext
https://ieeexplore.ieee.org/document/9709944
Citation
IEEE International Conference on Computer Vision, pp.14134 - 14143
Abstract
Every recent image-to-image translation model inherently requires either image-level (i.e. input-output pairs) or set-level (i.e. domain labels) supervision. However, even set-level supervision can be a severe bottleneck for data collection in practice. In this paper, we tackle image-to-image translation in a fully unsupervised setting, i.e., with neither paired images nor domain labels. To this end, we propose a truly unsupervised image-to-image translation model (TUNIT) that simultaneously learns to separate image domains and to translate input images into the estimated domains. Experimental results show that our model achieves performance comparable to or even better than that of the set-level supervised model trained with full labels, generalizes well across various datasets, and is robust to the choice of hyperparameters (e.g. the preset number of pseudo domains). Furthermore, TUNIT can be easily extended to semi-supervised learning with only a few labeled samples.
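The core idea described in the abstract, jointly estimating pseudo-domains and translating images into them, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes PyTorch, and the network architectures, the style dimension, and the preset number of pseudo-domains K are placeholder choices for illustration only.

```python
# Illustrative sketch (not the TUNIT reference code): a guiding network that
# estimates pseudo-domain labels and style codes without supervision, and a
# generator that translates a source image conditioned on a reference style.
import torch
import torch.nn as nn

K = 10  # preset number of pseudo-domains (a hyperparameter in the paper)

class GuidingNetwork(nn.Module):
    """Encodes an image into (a) pseudo-domain logits and (b) a style code."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.domain_head = nn.Linear(64, K)          # unsupervised domain estimation
        self.style_head = nn.Linear(64, style_dim)   # style code for the generator

    def forward(self, x):
        h = self.backbone(x)
        return self.domain_head(h), self.style_head(h)

class Generator(nn.Module):
    """Translates an input image conditioned on a reference style code."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, 1, 1), nn.ReLU())
        self.fuse = nn.Linear(style_dim, 64)
        self.dec = nn.Conv2d(64, 3, 3, 1, 1)

    def forward(self, x, style):
        h = self.enc(x)
        h = h + self.fuse(style)[:, :, None, None]   # inject the style code
        return torch.tanh(self.dec(h))

# One illustrative step: estimate the target pseudo-domain from a reference
# image, then translate the source image toward that estimated domain.
E, G = GuidingNetwork(), Generator()
x_src = torch.randn(4, 3, 128, 128)   # source images
x_ref = torch.randn(4, 3, 128, 128)   # reference images defining the target look

domain_logits, style = E(x_ref)
pseudo_domain = domain_logits.argmax(dim=1)   # estimated domain labels, no supervision
x_fake = G(x_src, style)

print(pseudo_domain.shape, x_fake.shape)  # torch.Size([4]) torch.Size([4, 3, 128, 128])
```

In the full method, the guiding network would additionally be trained with clustering and contrastive objectives, and the generator with adversarial and style-consistency losses; those are omitted here to keep the sketch self-contained.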
Publisher
Institute of Electrical and Electronics Engineers Inc.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.