Related Researcher

Yoon, Sung Whan (윤성환)
Machine Intelligence and Information Learning Lab.


Full metadata record

DC Field Value Language
dc.citation.endPage 4197 -
dc.citation.number 12 -
dc.citation.startPage 4182 -
dc.citation.title IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS -
dc.citation.volume 43 -
dc.contributor.author Park, Jeonghun -
dc.contributor.author Yoon, Sung Whan -
dc.date.accessioned 2026-01-05T14:31:59Z -
dc.date.available 2026-01-05T14:31:59Z -
dc.date.created 2026-01-02 -
dc.date.issued 2025-12 -
dc.description.abstract Recently, semantic communications have drawn great attention as a groundbreaking concept that surpasses the limited capacity of Shannon’s theory. Specifically, semantic communications are likely to become crucial in realizing visual tasks that demand massive network traffic. Although highly distinctive forms of visual semantics exist for computer vision tasks, a thorough investigation of which visual semantics can be transmitted in time and which are required for completing different visual tasks has not yet been reported. To this end, we first scrutinize the achievable throughput in transmitting existing visual semantics through the limited wireless communication bandwidth. In addition, we further demonstrate the resulting performance of various visual tasks for each visual semantic. Based on the empirical testing, we suggest that a task-adaptive selection of visual semantics is crucial for real-time semantic communications for visual tasks, where we transmit basic semantics (e.g., objects in the given image) for simple visual tasks, such as classification, and richer semantics (e.g., scene graphs) for complex tasks, such as image regeneration. To further improve transmission efficiency, we suggest a filtering method for scene graphs, which drops redundant information in the scene graph, thus allowing the sending of essential semantics for completing the given task. We confirm the efficacy of our task-adaptive semantic communication approach through extensive simulations in wireless channels, showing more than 45 times higher throughput than a naive transmission of the original data. Our work can be reproduced with the following source code: https://github.com/jhpark2024/jhpark.github.io. -
dc.identifier.bibliographicCitation IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, v.43, no.12, pp.4182 - 4197 -
dc.identifier.doi 10.1109/JSAC.2025.3623159 -
dc.identifier.issn 0733-8716 -
dc.identifier.scopusid 2-s2.0-105019952975 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/89782 -
dc.identifier.url https://ieeexplore.ieee.org/document/11207653 -
dc.identifier.wosid 001687337100014 -
dc.language English -
dc.publisher IEEE -
dc.title Transmit What You Need: Task-Adaptive Semantic Communications for Visual Information -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Engineering, Electrical & Electronic, Telecommunications -
dc.relation.journalResearchArea Engineering, Telecommunications -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Visualization -
dc.subject.keywordAuthor Semantic communication -
dc.subject.keywordAuthor Feature extraction -
dc.subject.keywordAuthor Semantic segmentation -
dc.subject.keywordAuthor Image coding -
dc.subject.keywordAuthor Decoding -
dc.subject.keywordAuthor Computer vision -
dc.subject.keywordAuthor Image reconstruction -
dc.subject.keywordAuthor Throughput -
dc.subject.keywordAuthor Generative AI -
dc.subject.keywordAuthor Semantic communications -
dc.subject.keywordAuthor communications for computer vision -
dc.subject.keywordAuthor scene graphs -
dc.subject.keywordAuthor generative models -
dc.subject.keywordPlus QUALITY -
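
The abstract describes two mechanisms: selecting a visual semantic type adapted to the downstream task, and filtering redundant triplets out of a scene graph before transmission. The sketch below is purely illustrative and is not taken from the authors' repository; the mapping, function names, salience scores, and threshold are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Hypothetical sketch of the task-adaptive idea from the abstract.
# All names (TASK_TO_SEMANTIC, filter_scene_graph, salience threshold)
# are assumptions, not the paper's actual implementation.

# Simple tasks get compact semantics; complex tasks get richer ones.
TASK_TO_SEMANTIC = {
    "classification": "objects",          # basic semantics: object labels
    "image_regeneration": "scene_graph",  # richer semantics: full scene graph
}

def select_semantics(task: str) -> str:
    """Pick which visual semantic to transmit for a given task,
    defaulting to the richest representation when the task is unknown."""
    return TASK_TO_SEMANTIC.get(task, "scene_graph")

def filter_scene_graph(triplets, salience, threshold=0.5):
    """Drop (subject, predicate, object) triplets whose salience score
    falls below the threshold, mirroring the abstract's scene-graph
    filtering idea of sending only essential semantics."""
    return [t for t, s in zip(triplets, salience) if s >= threshold]

# Usage: a two-triplet graph where only the salient triplet is kept.
graph = [("man", "rides", "horse"), ("sky", "is", "blue")]
kept = filter_scene_graph(graph, salience=[0.9, 0.2])
```

Under this toy setup, `select_semantics("classification")` returns `"objects"`, and the filter keeps only the high-salience triplet, reducing the payload before channel encoding.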


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.