File Download

There are no files associated with this item.

Related Researcher

나형호

Na, Hyungho


Full metadata record

DC Field Value Language
dc.citation.conferencePlace AU -
dc.citation.title International Conference on Learning Representations -
dc.contributor.author Na, Hyungho -
dc.contributor.author Seo, Yunkyeong -
dc.contributor.author Moon, Il-Chul -
dc.date.accessioned 2026-04-09T15:00:11Z -
dc.date.available 2026-04-09T15:00:11Z -
dc.date.created 2026-04-09 -
dc.date.issued 2024-05-08 -
dc.description.abstract In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still require significant learning time and, in complex tasks, often get trapped in local optima, failing to discover a goal-reaching policy. To address this, we introduce Efficient episodic Memory Utilization (EMU) for MARL, with two primary objectives: (a) accelerating reinforcement learning by leveraging semantically coherent memory from an episodic buffer and (b) selectively promoting desirable transitions to prevent local convergence. To achieve (a), EMU incorporates a trainable encoder/decoder structure alongside MARL, creating coherent memory embeddings that facilitate exploratory memory recall. To achieve (b), EMU introduces a novel reward structure called episodic incentive based on the desirability of states. This reward improves the TD target in Q-learning and acts as an additional incentive for desirable transitions. We provide theoretical support for the proposed incentive and demonstrate the effectiveness of EMU compared to conventional episodic control. The proposed method is evaluated in StarCraft II and Google Research Football, and empirical results indicate further performance improvement over state-of-the-art methods. Our code is available at: https://github.com/HyunghoNa/EMU. -
dc.identifier.bibliographicCitation International Conference on Learning Representations -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/91320 -
dc.language English -
dc.publisher International Conference on Learning Representations -
dc.title Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning -
dc.type Conference Paper -
dc.date.conferenceDate 2024-05-07 -
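
The abstract describes an episodic incentive that augments the standard Q-learning TD target for transitions into desirable states. The sketch below illustrates that idea only at the level the abstract states it; the incentive's exact form (here a fixed bonus gated by a desirability flag from an assumed episodic buffer) and the names `td_target`, `desirable`, and `incentive` are illustrative assumptions, not the paper's formulation — see the linked repository for the actual implementation.

```python
def td_target(reward, next_q_max, done, desirable,
              gamma=0.99, incentive=0.1):
    """One-step TD target r + r_e + gamma * max_a' Q(s', a').

    desirable: flag (assumed to come from an episodic buffer) marking
    whether the transition leads toward a known goal-reaching state.
    incentive: illustrative scalar bonus standing in for the paper's
    episodic incentive r_e.
    """
    r_e = incentive if desirable else 0.0      # episodic incentive term
    bootstrap = 0.0 if done else gamma * next_q_max
    return reward + r_e + bootstrap

# A desirable transition receives a slightly larger learning target,
# nudging Q-values toward goal-reaching behavior.
print(td_target(1.0, 5.0, done=False, desirable=True))
print(td_target(1.0, 5.0, done=False, desirable=False))
```

In the paper's setting this target would replace the standard one inside the MARL value-learning update; the sketch deliberately omits the multi-agent mixing and the learned desirability estimate.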


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.