Our NLP lab at UNIST aims to develop a System 2-based language AI that simulates how humans expand and acquire knowledge, ultimately striving to build an Artificial General Language Intelligence (AGLI) equipped with progressive knowledge learning and manipulation capabilities. Given this aim, our research goes beyond System 1 abilities focused on short-term factual recall and seeks to endow large language models (LLMs) with System 2-level cognitive skills, such as long-term learning, conceptual understanding, and creative knowledge composition.

In particular, noting that current LLMs, while remarkable, remain inefficient at knowledge injection and manipulation and fall qualitatively short of human-level capabilities, our current interests include:

- Editing and leveraging knowledge in unstructured text
- Efficient reasoning based on knowledge learning
- Integrating external memory with parameter-efficient LLMs
- Progressive knowledge expansion via Mixture-of-Experts (MoE)

In the mid-term, the lab aims to develop parametric equivalents of in-context knowledge editing. In the long term, we seek mechanisms for long-term conceptual learning, ultimately enabling LLM agents to master knowledge at a human level. Overall, we are dedicated to establishing foundational technologies that will drive next-generation language intelligence.