File Download

There are no files associated with this item.

Related Researcher

Yoo, Jaejun (유재준)
Lab. of Advanced Imaging Technology


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.endPage 8588 -
dc.citation.number 9 -
dc.citation.startPage 8576 -
dc.citation.title IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY -
dc.citation.volume 34 -
dc.contributor.author Park, Eunpil -
dc.contributor.author Yoo, Jaejun -
dc.contributor.author Sim, Jae-Young -
dc.date.accessioned 2024-09-23T09:35:06Z -
dc.date.available 2024-09-23T09:35:06Z -
dc.date.created 2024-09-22 -
dc.date.issued 2024-09 -
dc.description.abstract Single image dehazing has been actively studied to overcome the quality degradation of hazy images. Most existing methods take model-based approaches, and the existing learning-based methods usually target specific haze styles only, e.g., daytime, varicolored, and nighttime haze. They therefore suffer from limited performance on arbitrary hazy images with diverse characteristics due to the lack of a universal training dataset. In this paper, we first propose a fully data-driven learning-based framework for universal dehazing based on haze style transfer (HST). We define multiple domains of haze styles by applying K-means clustering to the background light of diverse real hazy images. We design a haze style modulator to extract the scene radiance features and the haze-related features. We employ an unpaired image-to-image translation methodology to transfer a source hazy image into different hazy images with diverse styles while preserving the scene radiance. The generated diverse hazy images are used to train the universal dehazing network in a semi-supervised manner, where we implement dehazing as a special instance of HST into the no-haze style. Experimental results show that the proposed framework reliably generates realistic and diverse hazy images, and achieves better universal dehazing performance, regardless of haze style, than the existing state-of-the-art dehazing methods. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, v.34, no.9, pp.8576 - 8588 -
dc.identifier.doi 10.1109/TCSVT.2024.3386738 -
dc.identifier.issn 1051-8215 -
dc.identifier.scopusid 2-s2.0-85190172241 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/83852 -
dc.identifier.wosid 001409508700054 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Universal Dehazing via Haze Style Transfer -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Engineering, Electrical & Electronic -
dc.relation.journalResearchArea Engineering -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Attenuation -
dc.subject.keywordAuthor DH-HEMTs -
dc.subject.keywordAuthor Image color analysis -
dc.subject.keywordAuthor Feature extraction -
dc.subject.keywordAuthor Learning systems -
dc.subject.keywordAuthor Light sources -
dc.subject.keywordAuthor Training -
dc.subject.keywordPlus Image dehazing -
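The abstract above describes defining haze-style domains by applying K-means clustering to the background light of real hazy images. As an illustration of that clustering step only, here is a minimal sketch; the background-light values, the number of clusters, and the `kmeans` helper below are hypothetical stand-ins, not taken from the paper.

```python
def dist2(p, q):
    """Squared Euclidean distance between two RGB vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain K-means with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        # Next center: the point farthest from all centers chosen so far.
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Move each center to the mean of its cluster (keep it if the cluster is empty).
        centers = [
            [sum(ch) / len(cl) for ch in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    labels = [min(range(k), key=lambda i: dist2(p, centers[i])) for p in points]
    return centers, labels

# Hypothetical background-light estimates (RGB in [0, 1]) from hazy images:
# three bright, neutral "daytime" values and three dim, yellowish "nighttime" ones.
bg_lights = [
    [0.90, 0.90, 0.90], [0.88, 0.90, 0.92], [0.85, 0.87, 0.90],
    [0.30, 0.28, 0.15], [0.32, 0.30, 0.18], [0.28, 0.26, 0.12],
]
centers, labels = kmeans(bg_lights, k=2)
print(labels)  # -> [0, 0, 0, 1, 1, 1]: the two haze styles fall into separate clusters
```

Each resulting cluster would then play the role of one haze-style domain for the style-transfer training described in the abstract.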


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.