This thesis explores methods for learning to achieve goals through curriculum and hierarchical reinforcement learning (RL). As intelligent agents increasingly engage with complex, real-world environments, their ability to autonomously learn and adapt skills without extensive human intervention becomes crucial. Traditional RL approaches, however, often depend on hand-engineered curricula and reward functions, limiting their efficiency and adaptability. This thesis addresses these limitations by introducing novel approaches that leverage curriculum learning and hierarchical structures to improve skill acquisition and deployment in goal-conditioned RL.

These problems are both important and challenging. Without the ability to learn and adapt autonomously, the scalability and real-world applicability of RL agents are severely restricted. Efficient goal achievement is further hindered by the complexity of environmental interactions and the diverse requirements of different tasks. It is therefore essential to develop methods that allow agents to autonomously acquire a diverse set of skills and adapt them dynamically to various situations.

This thesis aims to enhance goal-conditioned RL by integrating curriculum and hierarchical learning methods, enabling agents to autonomously and efficiently achieve goals in complex, real-world environments. The contributions include the development of Variational Curriculum Reinforcement Learning (VCRL) and Value Uncertainty Variational Curriculum (VUVC) for effective unsupervised goal achievement and curriculum learning, as well as the creation of a hierarchical learning framework for adaptive and explainable skill deployment.
Publisher: Ulsan National Institute of Science and Technology