Related Researcher

Yoo, Jaejun (유재준)
Lab. of Advanced Imaging Technology

Detailed Information

StarGAN v2: Diverse Image Synthesis for Multiple Domains

Author(s)
Choi, Yunjey; Uh, Youngjung; Yoo, Jaejun; Ha, Jung-Woo
Issued Date
2020-08
DOI
10.1109/CVPR42600.2020.00821
URI
https://scholarworks.unist.ac.kr/handle/201301/78348
Fulltext
https://ieeexplore.ieee.org/document/9157662
Citation
IEEE Conference on Computer Vision and Pattern Recognition, pp. 8185-8194
Abstract
A good image-to-image translation model should learn a mapping between different visual domains while satisfying two properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, offering either limited diversity or requiring multiple models to cover all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate the superiority of our method in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, a dataset of high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset are available at https://github.com/clovaai/stargan-v2.
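The abstract's two goals, diverse outputs and a single model scaling over many domains, can be illustrated with a toy sketch: a mapping network with one output head per domain turns random latent codes into domain-specific style codes, and a single style-conditioned generator translates the input. This is a minimal numpy illustration of the interface only; the layer sizes, the toy linear "networks", and the style-injection step are assumptions for illustration, not the paper's actual architecture (for that, see the released code at https://github.com/clovaai/stargan-v2).

```python
# Toy sketch (NOT the paper's architecture): one mapping network with
# per-domain heads + one style-conditioned generator, as described in
# the abstract. All sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, STYLE_DIM, NUM_DOMAINS, IMG_DIM = 16, 64, 3, 256


class MappingNetwork:
    """Maps a random latent code z to a style code s, with one output
    head per target domain (diversity AND scalability in one model)."""

    def __init__(self):
        self.shared = rng.normal(size=(LATENT_DIM, 128)) * 0.1
        # One head per domain, instead of one model per domain pair.
        self.heads = [rng.normal(size=(128, STYLE_DIM)) * 0.1
                      for _ in range(NUM_DOMAINS)]

    def __call__(self, z, domain):
        h = np.tanh(z @ self.shared)
        return h @ self.heads[domain]


class Generator:
    """Single generator: translates input x, guided by style code s."""

    def __init__(self):
        self.w_x = rng.normal(size=(IMG_DIM, IMG_DIM)) * 0.1
        self.w_s = rng.normal(size=(STYLE_DIM, IMG_DIM)) * 0.1

    def __call__(self, x, s):
        # Style modulates the translation (a stand-in for the
        # AdaIN-style injection used in real style-based generators).
        return np.tanh(x @ self.w_x + s @ self.w_s)


mapping, generator = MappingNetwork(), Generator()
x = rng.normal(size=(1, IMG_DIM))             # toy flattened "image"
z1, z2 = rng.normal(size=(2, 1, LATENT_DIM))  # two random latent codes

# Same input, same target domain, different latents -> different styles,
# hence diverse translations from a single model.
out1 = generator(x, mapping(z1, domain=1))
out2 = generator(x, mapping(z2, domain=1))
```

Sampling several latent codes per target domain is what yields the "diverse image synthesis" of the title, while the per-domain heads are what let one framework cover multiple domains.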
Publisher
IEEE Computer Society
ISSN
1063-6919

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.