| DC Field | Value | Language |
| --- | --- | --- |
| dc.citation.conferencePlace | AU | - |
| dc.citation.title | International Conference on Machine Learning | - |
| dc.contributor.author | Na, Hyungho | - |
| dc.contributor.author | Moon, Il-Chul | - |
| dc.date.accessioned | 2026-04-09T15:00:08Z | - |
| dc.date.available | 2026-04-09T15:00:08Z | - |
| dc.date.created | 2026-04-09 | - |
| dc.date.issued | 2024-07-22 | - |
| dc.description.abstract | In cooperative multi-agent reinforcement learning (MARL), agents collaborate to achieve common goals, such as defeating enemies and scoring a goal. However, learning goal-reaching paths toward such a semantic goal takes a considerable amount of time in complex tasks, and the trained model often fails to find such paths. To address this, we present LAtent Goal-guided Multi-Agent reinforcement learning (LAGMA), which generates a goal-reaching trajectory in latent space and provides a latent goal-guided incentive to transitions toward this reference trajectory. LAGMA consists of three major components: (a) a quantized latent space constructed via a modified VQ-VAE for efficient sample utilization, (b) goal-reaching trajectory generation via an extended VQ codebook, and (c) latent goal-guided intrinsic reward generation to encourage transitions toward the sampled goal-reaching path. The proposed method is evaluated on StarCraft II, under both dense and sparse reward settings, and on Google Research Football. Empirical results show further performance improvement over state-of-the-art baselines. | - |
| dc.identifier.bibliographicCitation | International Conference on Machine Learning | - |
| dc.identifier.uri | https://scholarworks.unist.ac.kr/handle/201301/91319 | - |
| dc.language | English | - |
| dc.publisher | International Conference on Machine Learning | - |
| dc.title | LAGMA: LAtent Goal-guided Multi-agent Reinforcement Learning | - |
| dc.type | Conference Paper | - |
| dc.date.conferenceDate | 2024-07-21 | - |
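
The abstract above describes a latent goal-guided intrinsic reward: states are embedded and quantized against a VQ-VAE codebook, and transitions whose quantized code lies on a sampled goal-reaching reference trajectory receive a bonus. The sketch below is only a minimal illustration of that idea, not the paper's implementation; the codebook, dimensions, function names, and reward scale are all assumptions made for the example.

```python
import numpy as np

# Minimal, hypothetical sketch of a latent goal-guided intrinsic reward.
# All sizes and names below are assumptions, not values from the paper.

rng = np.random.default_rng(0)

LATENT_DIM = 8       # dimensionality of the latent embedding (assumed)
CODEBOOK_SIZE = 16   # number of VQ codebook vectors (assumed)

# Stand-in for a trained VQ-VAE codebook.
codebook = rng.normal(size=(CODEBOOK_SIZE, LATENT_DIM))


def quantize(z: np.ndarray) -> int:
    """Return the index of the nearest codebook vector (standard VQ assignment)."""
    dists = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(dists))


def intrinsic_reward(z_next: np.ndarray,
                     reference_path: list[int],
                     t: int,
                     scale: float = 0.1) -> float:
    """Give a bonus when the next latent state quantizes onto the sampled
    goal-reaching reference path at or beyond the current step.

    Illustrative only: the actual LAGMA reward uses quantities not reproduced here.
    """
    code = quantize(z_next)
    return scale if code in reference_path[t:] else 0.0


# Usage example with a randomly generated "goal-reaching" path of codebook indices.
reference_path = [quantize(rng.normal(size=LATENT_DIM)) for _ in range(5)]
z_next = rng.normal(size=LATENT_DIM)
print("intrinsic bonus:", intrinsic_reward(z_next, reference_path, t=2))
```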