Related Researcher

Lee, Hoon (이훈)


Full metadata record

DC Field Value Language
dc.citation.endPage 14497 -
dc.citation.number 10 -
dc.citation.startPage 14484 -
dc.citation.title IEEE INTERNET OF THINGS JOURNAL -
dc.citation.volume 12 -
dc.contributor.author Hwang, Sangwon -
dc.contributor.author Lee, Hoon -
dc.contributor.author Kim, Mintae -
dc.contributor.author Lee, Inkyu -
dc.date.accessioned 2025-06-02T10:00:04Z -
dc.date.available 2025-06-02T10:00:04Z -
dc.date.created 2025-05-30 -
dc.date.issued 2025-05 -
dc.description.abstract This article studies a new multiagent deep reinforcement learning (MADRL) approach for autonomous aerial vehicle (AAV)-assisted mobile edge computing (MEC) networks, where AAV-mounted servers provide offloading services to mobile users (MUs). We aim to minimize the total energy consumption of the MUs by jointly optimizing AAV mobility, AAV-MU association, resource allocation, and task offloading ratios. In the multi-AAV scenario, we model the MEC network as a multiagent partially observable Markov decision process (POMDP), where each AAV agent makes decentralized decisions with limited information. Conventional MADRL methods rely on manually designed inter-AAV interaction messages, which incurs performance degradation. To address this issue, we propose a new neural network (NN)-based AAV interaction mechanism that autonomously generates task-oriented messages to minimize energy consumption. These message-generating NNs are developed under the MADRL framework, which allows joint optimization of AAV interactions and decentralized decisions in an end-to-end manner. Numerical results demonstrate that our approach outperforms traditional MADRL methods and achieves performance close to that of ideal centralized schemes while remaining scalable as the number of AAVs varies. -
dc.identifier.bibliographicCitation IEEE INTERNET OF THINGS JOURNAL, v.12, no.10, pp.14484 - 14497 -
dc.identifier.doi 10.1109/JIOT.2025.3527016 -
dc.identifier.issn 2372-2541 -
dc.identifier.scopusid 2-s2.0-85214520237 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/87158 -
dc.identifier.wosid 001484707200045 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Multiagent Deep Reinforcement Learning for Decentralized Multi-AAV Mobile Edge Computing Networks -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications -
dc.relation.journalResearchArea Computer Science; Engineering; Telecommunications -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Artificial neural networks -
dc.subject.keywordAuthor Autonomous aerial vehicles -
dc.subject.keywordAuthor Servers -
dc.subject.keywordAuthor Training -
dc.subject.keywordAuthor Scalability -
dc.subject.keywordAuthor Optimization -
dc.subject.keywordAuthor Decision making -
dc.subject.keywordAuthor Trajectory -
dc.subject.keywordAuthor Internet of Things -
dc.subject.keywordAuthor Vehicle dynamics -
dc.subject.keywordAuthor Mobile edge computing (MEC) -
dc.subject.keywordAuthor multiagent deep reinforcement learning (MADRL) -
dc.subject.keywordAuthor unmanned aerial vehicle (UAV) -
dc.subject.keywordPlus RESOURCE-ALLOCATION -
dc.subject.keywordPlus FRAMEWORK -
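The abstract's core mechanism — each AAV agent encoding its local observation into a learned, task-oriented message that the other agents consume for decentralized decisions — can be illustrated with a minimal sketch. This is not the paper's implementation: all dimensions, layer shapes, and function names below are hypothetical, and the untrained linear layers stand in for the message-generating and policy NNs trained end-to-end in the MADRL framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual NN architecture is not specified here.
OBS_DIM, MSG_DIM, ACT_DIM, N_AGENTS = 8, 4, 3, 2

def init_params(in_dim, out_dim):
    # One small random linear layer standing in for a trained NN.
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

# Message encoder: local observation -> task-oriented message vector.
msg_W, msg_b = init_params(OBS_DIM, MSG_DIM)
# Policy: local observation + messages received from other agents -> action logits.
pol_W, pol_b = init_params(OBS_DIM + (N_AGENTS - 1) * MSG_DIM, ACT_DIM)

def generate_message(obs):
    # Learned message generation (replaces hand-designed interaction messages).
    return np.tanh(obs @ msg_W + msg_b)

def act(obs, incoming_messages):
    # Decentralized decision: each agent sees only its own observation
    # plus the messages generated by the other agents.
    x = np.concatenate([obs] + incoming_messages)
    logits = x @ pol_W + pol_b
    return int(np.argmax(logits))

# One decentralized decision step for two AAV agents.
observations = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
messages = [generate_message(o) for o in observations]
actions = [
    act(observations[i], [messages[j] for j in range(N_AGENTS) if j != i])
    for i in range(N_AGENTS)
]
```

In the actual approach, both networks would be optimized jointly against the energy-consumption objective, so the message contents are shaped by the task rather than designed by hand.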

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.