Communication can be essential in cooperative multi-agent reinforcement learning (MARL), where agents may need to overcome partial observability by exchanging information to accomplish tasks. However, prior methods often rely on messages that are uninterpretable or contain irrelevant information. To overcome this issue, we propose LLM-driven Multi-Agent Communication (LMAC), a novel MARL framework that combines LLM-based communication protocol design with a meta-cognitive latent representation module. LMAC employs iterative refinement with phase-specific feedback to produce interpretable protocols that enhance state recovery and shared understanding, while its latent module incorporates reliability signals with cycle consistency to ensure compact and trustworthy representations. Experiments across diverse MARL benchmarks demonstrate that LMAC consistently improves performance over other communication baselines.
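To make the latent-module idea concrete, the following is a minimal, hypothetical sketch (not the thesis implementation) of a loss that combines a cycle-consistency term with a per-sample reliability signal. The linear encoder/decoder, the `log_r` parameterization, and the heteroscedastic-style weighting are all illustrative assumptions.

```python
# Illustrative sketch only: reliability-weighted reconstruction +
# cycle-consistency objective for a latent representation module.
# All names and the linear encoder/decoder are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W_e):
    # Linear encoder: observation -> compact latent code.
    return obs @ W_e

def decode(z, W_d):
    # Linear decoder: latent code -> reconstructed observation.
    return z @ W_d

def latent_loss(obs, W_e, W_d, log_r):
    """Reliability-weighted reconstruction plus cycle-consistency loss."""
    z = encode(obs, W_e)
    obs_hat = decode(z, W_d)
    z_cycle = encode(obs_hat, W_e)            # re-encode the reconstruction
    r = np.exp(log_r)                          # per-sample reliability > 0
    recon = ((obs_hat - obs) ** 2).mean(axis=-1)
    cycle = ((z_cycle - z) ** 2).mean(axis=-1)
    # Heteroscedastic-style weighting (an assumption): unreliable samples
    # contribute less, while the -log_r term keeps reliability from
    # collapsing to zero.
    return float((r * (recon + cycle) - log_r).mean())

obs = rng.normal(size=(5, 8))                  # 5 observations, dim 8
W_e = rng.normal(size=(8, 3)) * 0.1            # latent dim 3
W_d = rng.normal(size=(3, 8)) * 0.1
log_r = np.zeros(5)                            # reliability r = 1 initially
loss = latent_loss(obs, W_e, W_d, log_r)
```

In training, `W_e`, `W_d`, and `log_r` would be optimized jointly, so the model can down-weight latents it cannot reconstruct consistently rather than forcing every message through an unreliable code.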
Publisher
Ulsan National Institute of Science and Technology
Degree
Master
Major
Graduate School of Artificial Intelligence (Artificial Intelligence)