<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholarworks.unist.ac.kr/handle/201301/81">
    <title>Repository Collection</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/81</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91136" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91135" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91133" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91124" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-08T21:48:48Z</dc:date>
  </channel>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91136">
    <title>Task-Aware Quantization Network for JPEG Image Compression</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91136</link>
    <description>Title: Task-Aware Quantization Network for JPEG Image Compression
Author(s): Choi, Jinyoung; Han, Bohyung
Abstract: We propose to learn a deep neural network for JPEG image compression, which predicts image-specific optimized quantization tables fully compatible with the standard JPEG encoder and decoder. Moreover, our approach provides the capability to learn task-specific quantization tables in a principled way by adjusting the objective function of the network. The main challenge to realize this idea is that there exist non-differentiable components in the encoder such as run-length encoding and Huffman coding and it is not straightforward to predict the probability distribution of the quantized image representations. We address these issues by learning a differentiable loss function that approximates bitrates using simple network blocks—two MLPs and an LSTM. We evaluate the proposed algorithm using multiple task-specific losses—two for semantic image understanding and another two for conventional image compression—and demonstrate the effectiveness of our approach to the individual tasks.</description>
    <dc:date>2020-08-22T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91135">
    <title>Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91135</link>
    <description>Title: Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform
Author(s): Song, Myungseo; Choi, Jinyoung; Han, Bohyung
Abstract: We propose a versatile deep image compression network based on Spatial Feature Transform (SFT) [45], which takes a source image and a corresponding quality map as inputs and produce a compressed image with variable rates. Ourmodel covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps. In addition, the proposed framework allows us to perform task-aware image compressions for various tasks, e.g., classification, by efficiently estimating optimized quality maps specific to target tasks for our encoding network. This is even possible with a pretrained network without learning separate models for individual tasks. Our algorithm achieves outstanding rate-distortion trade-off compared to the approaches based on multiple models that are optimized separately for several different target rates. At the same level of compression, the proposed approach successfully improves performance on image classification and text region quality preservation via task-aware quality map estimation without additional model training. The code is available at the project website</description>
    <dc:date>2021-10-10T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91133">
    <title>MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91133</link>
    <description>Title: MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators
Author(s): Choi, Jinyoung; Han, Bohyung
Abstract: We propose a framework of generative adversarial networks with multiple discriminators, which collaborate to represent a real dataset more effectively. Our approach facilitates learning a generator consistent with the underlying data distribution based on real images and thus mitigates the chronic mode collapse problem. From the inspiration of multiple choice learning, we guide each discriminator to have expertise in a subset of the entire data and allow the generator to find reasonable correspondences between the latent and real data spaces automatically without extra supervision for training examples. Despite the use of multiple discriminators, the backbone networks are shared across the discriminators and the increase in training cost is marginal. We demonstrate the effectiveness of our algorithm using multiple evaluation metrics in the standard datasets for diverse tasks.</description>
    <dc:date>2022-11-27T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91124">
    <title>Observation-Guided Diffusion Probabilistic Models</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91124</link>
    <description>Title: Observation-Guided Diffusion Probabilistic Models
Author(s): Kang, Junoh; Choi, Jinyoung; Choi, Sungik; Han, Bohyung
Abstract: We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM), which effectively addresses the trade-off between quality control and fast sampling. Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain in a principled way. This is achieved by introducing an additional loss term derived from the observation based on a conditional discriminator on noise level, which employs a Bernoulli distribution indicating whether its input lies on the (noisy) real manifold or not. This strategy allows us to optimize the more accurate negative log-likelihood induced in the inference stage, especially when the number of function evaluations is limited. The proposed training scheme is also advantageous even when incorporated only into the fine-tuning process, and it is compatible with various fast inference strategies since our method yields better denoising networks using exactly the same inference procedure without incurring extra computational cost. We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines. Our implementation is available at https://github.com/Junoh-Kang/OGDM_edm.</description>
    <dc:date>2024-06-16T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

