Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.citation.title | IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS | - |
| dc.contributor.author | Park, Jeonghun | - |
| dc.contributor.author | Yoon, Sung Whan | - |
| dc.date.accessioned | 2026-01-05T14:31:59Z | - |
| dc.date.available | 2026-01-05T14:31:59Z | - |
| dc.date.created | 2026-01-02 | - |
| dc.date.issued | 2025-12 | - |
| dc.description.abstract | Recently, semantic communications have drawn great attention as a groundbreaking concept that surpasses the limited capacity of Shannon’s theory. Specifically, semantic communications are likely to become crucial in realizing visual tasks that demand massive network traffic. Although highly distinctive forms of visual semantics exist for computer vision tasks, a thorough investigation of which visual semantics can be transmitted in time and which are required for completing different visual tasks has not yet been reported. To this end, we first scrutinize the achievable throughput in transmitting existing visual semantics through the limited wireless communication bandwidth. In addition, we further demonstrate the resulting performance of various visual tasks for each visual semantic. Based on this empirical testing, we suggest that a task-adaptive selection of visual semantics is crucial for real-time semantic communications for visual tasks, where we transmit basic semantics (e.g., objects in the given image) for simple visual tasks, such as classification, and richer semantics (e.g., scene graphs) for complex tasks, such as image regeneration. To further improve transmission efficiency, we suggest a filtering method for scene graphs, which drops redundant information in the scene graph, thus allowing the sending of only the essential semantics for completing the given task. We confirm the efficacy of our task-adaptive semantic communication approach through extensive simulations in wireless channels, showing more than 45 times larger throughput over a naive transmission of the original data. Our work can be reproduced from the following source code: https://github.com/jhpark2024/jhpark.github.io. | - |
| dc.identifier.bibliographicCitation | IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS | - |
| dc.identifier.doi | 10.1109/JSAC.2025.3623159 | - |
| dc.identifier.issn | 0733-8716 | - |
| dc.identifier.scopusid | 2-s2.0-105019952975 | - |
| dc.identifier.uri | https://scholarworks.unist.ac.kr/handle/201301/89782 | - |
| dc.identifier.url | https://ieeexplore.ieee.org/document/11207653 | - |
| dc.language | English | - |
| dc.publisher | IEEE | - |
| dc.title | Transmit What You Need: Task-Adaptive Semantic Communications for Visual Information | - |
| dc.type | Article | - |
| dc.description.isOpenAccess | FALSE | - |
| dc.type.docType | Article | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.