Related Researcher

Lee, Hoon (이훈)


Full metadata record

DC Field Value Language
dc.citation.endPage 38053 -
dc.citation.number 23 -
dc.citation.startPage 38040 -
dc.citation.title IEEE INTERNET OF THINGS JOURNAL -
dc.citation.volume 11 -
dc.contributor.author Kim, Mintae -
dc.contributor.author Lee, Hoon -
dc.contributor.author Hwang, Sangwon -
dc.contributor.author Debbah, Merouane -
dc.contributor.author Lee, Inkyu -
dc.date.accessioned 2024-10-10T15:35:05Z -
dc.date.available 2024-10-10T15:35:05Z -
dc.date.created 2024-10-10 -
dc.date.issued 2024-12 -
dc.description.abstract This paper presents a cooperative multi-agent deep reinforcement learning (MADRL) approach for unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) networks. A UAV with computing capability can provide task offloading services to ground Internet-of-Things devices (IDs). With partial observation of the entire network state, the UAV and the IDs individually determine their MEC strategies, i.e., UAV trajectory, resource allocation, and task offloading policy. This requires joint optimization of the decision-making process and coordination strategies among the UAV and the IDs. To address this difficulty, the proposed cooperative MADRL approach computes two types of action variables, namely message action and solution action, each of which is generated by dedicated actor neural networks (NNs). As a result, each agent can automatically encapsulate its coordination messages to enhance the MEC performance in a decentralized manner. The proposed actor structure is designed based on graph attention networks such that operations are possible regardless of the number of IDs. A scalable training algorithm is also proposed to train a group of NNs for arbitrary network configurations. Numerical results demonstrate the superiority of the proposed cooperative MADRL approach over conventional methods. IEEE -
dc.identifier.bibliographicCitation IEEE INTERNET OF THINGS JOURNAL, v.11, no.23, pp.38040 - 38053 -
dc.identifier.doi 10.1109/JIOT.2024.3447090 -
dc.identifier.issn 2327-4662 -
dc.identifier.scopusid 2-s2.0-85201772219 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/84045 -
dc.identifier.wosid 001360506300016 -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title Cooperative Multi-Agent Deep Reinforcement Learning Methods for UAV-Aided Mobile Edge Computing Networks -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science -
dc.relation.journalResearchArea Computer Science;Engineering;Telecommunications -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Task analysis -
dc.subject.keywordAuthor Training -
dc.subject.keywordAuthor Trajectory -
dc.subject.keywordAuthor UAV mobile edge computing -
dc.subject.keywordAuthor Artificial neural networks -
dc.subject.keywordAuthor Autonomous aerial vehicles -
dc.subject.keywordAuthor Optimization -
dc.subject.keywordAuthor Graph attention network -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Servers -
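The abstract describes actor networks built on graph attention, so the same weights apply regardless of the number of IDs. As a rough, hypothetical sketch (not the authors' implementation; all class names, dimensions, and activation choices below are invented for illustration), a single-head graph-attention actor that emits both a message action and a solution action could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class GraphAttentionActor:
    """Toy single-head graph-attention actor (illustrative only).

    Weight shapes depend only on feature dimensions, never on the
    number of agents N, so one set of weights serves any N.
    """
    def __init__(self, feat_dim, msg_dim, act_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(feat_dim, msg_dim))  # shared node projection
        self.a = rng.normal(scale=0.1, size=(2 * msg_dim,))       # attention scoring vector
        self.V = rng.normal(scale=0.1, size=(msg_dim, act_dim))   # solution-action head

    def forward(self, obs):
        # obs: (N, feat_dim) local observations of N agents (UAV + IDs)
        h = obs @ self.W                                          # (N, msg_dim)
        N = h.shape[0]
        # all ordered pairs (i, j) of projected features
        pairs = np.concatenate(
            [np.repeat(h, N, axis=0), np.tile(h, (N, 1))], axis=1)  # (N*N, 2*msg_dim)
        s = pairs @ self.a
        s = np.maximum(s, 0.2 * s)                                # LeakyReLU
        alpha = softmax(s.reshape(N, N), axis=1)                  # attention weights per agent
        msg = alpha @ h                                           # message action (aggregated)
        act = np.tanh(msg @ self.V)                               # solution action in [-1, 1]
        return msg, act
```

Because the projection and attention weights are shared across agents, the same `GraphAttentionActor` instance can be evaluated on 3 IDs or 7 IDs without retraining, which mirrors the scalability property the abstract claims for arbitrary network configurations.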


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.