Related Researcher

Baek, Seungryul (백승렬)
UNIST VISION AND LEARNING LAB.

Full metadata record

DC Field Value Language
dc.citation.conferencePlace Barcelona, Spain -
dc.citation.title IEEE International Conference on Acoustics, Speech and Signal Processing -
dc.contributor.author Bhattarai, Binod -
dc.contributor.author Baek, Seungryul -
dc.contributor.author Bodur, Rumeysa -
dc.contributor.author Kim, Tae-Kyun -
dc.date.accessioned 2024-01-31T23:07:14Z -
dc.date.available 2024-01-31T23:07:14Z -
dc.date.created 2020-04-21 -
dc.date.issued 2020-05-04 -
dc.description.abstract Generative Adversarial Networks (GANs) have been widely used to generate large volumes of synthetic data, which is then combined with real examples to train deep Convolutional Neural Networks (CNNs). Studies have shown that the generated examples lack sufficient realism and diversity to train deep CNNs. Unlike previous studies that randomly mix synthetic data with real data, we present simple, effective, and easy-to-implement sampling methods for synthetic data that train deep CNNs more efficiently and accurately. To this end, we propose to maximally utilize the parameters learned during training of the GAN itself: the discriminator's realism confidence score and its confidence on the target label of the synthetic data. In addition, we explore reinforcement learning (RL) to automatically search for a subset of meaningful synthetic examples from a large pool of GAN-generated data. We evaluate our methods on two challenging face attribute classification data sets, AffectNet and CelebA. Our extensive experiments clearly demonstrate the need to sample synthetic data before augmentation, and show that doing so improves the performance of a state-of-the-art deep CNN. -
dc.identifier.bibliographicCitation IEEE International Conference on Acoustics, Speech and Signal Processing -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/78543 -
dc.identifier.url https://arxiv.org/abs/1909.04689 -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title Sampling Strategies for GAN Synthetic Data -
dc.type Conference Paper -
dc.date.conferenceDate 2020-05-04 -
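
The confidence-based sampling idea described in the abstract can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the paper's implementation: it assumes a trained PyTorch discriminator that maps a batch of images to realism scores in [0, 1], and it keeps only the highest-scoring fraction of a synthetic pool before mixing it with real training data. All names (sample_by_realism, synthetic_pool, keep_ratio) are illustrative.

import torch

def sample_by_realism(discriminator, synthetic_pool, keep_ratio=0.5):
    # Score every synthetic image with the trained discriminator
    # (assumed to output one realism score in [0, 1] per image).
    with torch.no_grad():
        scores = discriminator(synthetic_pool).squeeze()
    # Keep the top keep_ratio fraction by realism confidence.
    k = max(1, int(keep_ratio * len(synthetic_pool)))
    top_idx = torch.topk(scores, k).indices
    return synthetic_pool[top_idx]

# Hypothetical usage: augment the real training set with the filtered pool.
# filtered = sample_by_realism(D, fake_images, keep_ratio=0.3)
# train_set = torch.cat([real_images, filtered], dim=0)

The abstract's RL-based variant replaces this fixed keep-ratio filter with a learned selection policy; the sketch covers only the discriminator-confidence idea.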

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.