
LLM-Guided Communication for Cooperative Multi-Agent Reinforcement Learning

Author(s)
Bae, Sang Jun
Advisor
Han, Seungyul
Issued Date
2026-02
URI
https://scholarworks.unist.ac.kr/handle/201301/91058
http://unist.dcollection.net/common/orgView/200000964795
Abstract
Communication can be essential in cooperative multi-agent reinforcement learning (MARL), where agents may need to overcome partial observability by exchanging information to accomplish tasks. However, prior methods often rely on messages that are uninterpretable or contain irrelevant information. To overcome this issue, we propose LLM-driven Multi-Agent Communication (LMAC), a novel MARL framework that combines LLM-based communication protocol design with a meta-cognitive latent representation module. LMAC employs iterative refinement with phase-specific feedback to produce interpretable protocols that enhance state recovery and shared understanding, while its latent module incorporates reliability signals with cycle consistency to ensure compact and trustworthy representations. Experiments across diverse MARL benchmarks demonstrate that LMAC consistently improves performance over other communication baselines.
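The abstract's cycle-consistency idea can be illustrated with a generic sketch: an agent encodes its observation into a compact latent message, a decoder reconstructs the observation, and the reconstruction error penalizes information lost in the round trip. This is not the thesis's actual implementation; the dimensions, the linear encoder/decoder, and all function names below are hypothetical placeholders for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an 8-dimensional observation compressed
# into a 3-dimensional latent message (assumed, not from the thesis).
OBS_DIM, MSG_DIM = 8, 3

# Random linear maps standing in for trained encoder/decoder networks.
W_enc = rng.normal(size=(MSG_DIM, OBS_DIM))
W_dec = rng.normal(size=(OBS_DIM, MSG_DIM))

def encode(obs: np.ndarray) -> np.ndarray:
    """Compress an observation into a compact latent message."""
    return W_enc @ obs

def decode(msg: np.ndarray) -> np.ndarray:
    """Reconstruct an observation estimate from a latent message."""
    return W_dec @ msg

def cycle_consistency_loss(obs: np.ndarray) -> float:
    """Mean squared error of the obs -> message -> obs cycle;
    minimizing this keeps the latent message informative."""
    return float(np.mean((obs - decode(encode(obs))) ** 2))

obs = rng.normal(size=OBS_DIM)
print(cycle_consistency_loss(obs))
```

In a full MARL setup, a term like this would typically be added to the policy's training objective so that messages stay both compact and recoverable; how LMAC weights it against the reliability signals is described in the thesis itself.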
Publisher
Ulsan National Institute of Science and Technology
Degree
Master
Major
Graduate School of Artificial Intelligence (Artificial Intelligence)
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.