Related Researcher

Yoo, Jaejun (유재준), Lab. of Advanced Imaging Technology

Detailed Information

Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

Author(s)
Lee, Dongyeun; Lee, Jae Young; Kim, Doyeon; Choi, Jaehyun; Yoo, Jaejun; Kim, Junmo
Issued Date
2023-06-20
DOI
10.1109/CVPR52729.2023.01367
URI
https://scholarworks.unist.ac.kr/handle/201301/67774
Citation
IEEE Conference on Computer Vision and Pattern Recognition, pp. 14224-14234
Abstract
Recent studies show strong generative performance in domain translation, especially when transfer learning techniques are applied to an unconditional generator. However, controlling features from different domains with a single model remains challenging. Existing methods often require additional models, which is computationally demanding and leads to unsatisfactory visual quality. In addition, they offer only a restricted number of control steps, which prevents smooth transitions. In this paper, we propose a new approach for high-quality domain translation with better controllability. The key idea is to preserve source features within a disentangled subspace of the target feature space. This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain, using only a single model. Our extensive experiments show that the proposed method can produce more consistent and realistic images than previous works and maintain precise controllability over different levels of transformation. The code is available at LeeDongYeun/FixNoise.
Publisher
IEEE Computer Society
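
As a rough illustration of the controllability described in the abstract (not the repository's actual interface), the sketch below assumes a StyleGAN-style generator fine-tuned on the target domain with its per-layer noise held fixed at an anchor; linearly interpolating between that anchor noise and freshly sampled noise then sweeps the output from source-preserving toward the full target domain. All names here (FineTunedGenerator, noise_shapes, the latent size) are hypothetical placeholders, not the published code's API.

# Minimal sketch, assuming an anchored-noise, noise-conditioned generator.
import torch

class FineTunedGenerator(torch.nn.Module):
    """Placeholder for a noise-conditioned generator fine-tuned on the target domain."""
    noise_shapes = [(1, 1, 4, 4), (1, 1, 8, 8), (1, 1, 16, 16)]  # assumed per-layer noise sizes

    def forward(self, latent, noise_maps):
        # A real generator would synthesize an image from the latent and noise maps;
        # this stand-in just returns a dummy tensor of the right batch size.
        return torch.zeros(latent.shape[0], 3, 16, 16)

generator = FineTunedGenerator().eval()

# Anchor noise: assumed to have been kept fixed during fine-tuning, so that
# source features stay disentangled in the subspace it defines.
anchor_noise = [torch.randn(s) for s in generator.noise_shapes]

latent = torch.randn(1, 512)  # assumed latent dimensionality

with torch.no_grad():
    # alpha = 0.0 -> maximal source preservation, alpha = 1.0 -> full target domain.
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        fresh_noise = [torch.randn(s) for s in generator.noise_shapes]
        mixed_noise = [(1 - alpha) * a + alpha * f
                       for a, f in zip(anchor_noise, fresh_noise)]
        image = generator(latent, mixed_noise)

Because the interpolation weight alpha is continuous, a single fine-tuned model can produce arbitrarily fine control steps, which is the smooth-transition property the abstract emphasizes.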
