<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholarworks.unist.ac.kr/handle/201301/43">
    <title>Repository Community:</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/43</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91691" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91620" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91321" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91320" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-13T08:11:47Z</dc:date>
  </channel>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91691">
    <title>SAIGE-GPU: accelerating genome- and phenome-wide association studies using GPUs</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91691</link>
    <description>Title: SAIGE-GPU: accelerating genome- and phenome-wide association studies using GPUs
Author(s): Rodriguez, Alex; Kim, Youngdae; Nandi, Tarak Nath; Keat, Karl; Kumar, Rachit; Conery, Mitchell; Bhukar, Rohan; Liu, Molei; Hessington, John; Maheshwari, Ketan; Begoli, Edmon; Tourassi, Georgia; Natarajan, Pradeep; Voight, Benjamin F.; Gaziano, John Michael; Damrauer, Scott M.; Liao, Katherine P.; Zhou, Wei; Huffman, Jennifer E.; Verma, Anurag; Madduri, Ravi K.
Abstract: Motivation: Genome-wide association studies (GWAS) at biobank scale are computationally intensive, especially for admixed populations requiring robust statistical models. SAIGE is a widely used method for generalized linear mixed-model GWAS but is limited by its CPU-based implementation, making phenome-wide association studies impractical for many research groups. Results: We developed SAIGE-GPU, a GPU-accelerated version of SAIGE that replaces CPU-intensive matrix operations with GPU-optimized kernels. The core innovation is distributing genetic relationship matrix calculations across GPUs and communication layers. Applied to 2068 phenotypes from 635 969 participants in the Million Veteran Program, including diverse and admixed populations, SAIGE-GPU achieved a 5-fold speedup in mixed model fitting on supercomputing infrastructure and cloud platforms. We further optimized the variant association testing step through multi-core and multi-trait parallelization. Deployed on Google Cloud Platform and Azure, the method provided substantial cost and time savings. Availability and implementation: Source code and binaries are available for download at https://github.com/saigegit/SAIGE/tree/SAIGE-GPU-1.3.3. A code snapshot is archived at Zenodo for reproducibility (DOI: 10.5281/zenodo.17642591). SAIGE-GPU is available in a containerized format for use across HPC and cloud environments, is implemented in R/C++, and runs on Linux systems.</description>
    <dc:date>2026-02-28T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91620">
    <title>Multi-Agent Reinforcement Learning Considering Agent Priority for Weapon-Target Assignment</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91620</link>
    <description>Title: Multi-Agent Reinforcement Learning Considering Agent Priority for Weapon-Target Assignment
Author(s): Na, Hyungho; Ahn, Jaemyung; Moon, Il-Chul
Abstract: This paper presents a novel multi-agent reinforcement learning (MARL) approach that incorporates agent priorities to address weapon-target assignment (WTA) with constraints, such as heterogeneous engagement time windows. The proposed approach begins by defining the decentralized Markov decision process (Dec-MDP) formulation for WTA involving heterogeneous, multiple agents. Our approach employs a hierarchical structure for MARL training, comprising an agent selector and a target selector, which sequentially determine the order of agents for assignment, i.e., preferred shooter selection and target selection. Through experimental designs, the proposed model demonstrates its ability to generate high-quality assignment plans within a short execution time. The model demonstrates superior performance across various scenarios, achieving the lowest threat survivability with a clear advantage over other baseline methods, especially in tightly constrained scenarios. Ablation studies and qualitative analyses are conducted to illustrate the influence of key components on performance, and these qualitative studies reveal the learning mechanism in agent and target selection. Additionally, transferability tests confirm the model's applicability to unseen problem cases, where training and testing environments are different, indicating its potential for real-world adaptation in various scenarios.</description>
    <dc:date>2026-03-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91321">
    <title>Optimal Coasting Time Determination of a Multi-stage Interceptor Considering Engagement Zone</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91321</link>
    <description>Title: Optimal Coasting Time Determination of a Multi-stage Interceptor Considering Engagement Zone
Author(s): Na, Hyungho; Sung, Taehyun; Ahn, Jaemyung
Abstract: This paper proposes a methodology to optimally determine the coasting time of a multi-stage interceptor, considering the engagement zone. Proper coasting time determination is critical for a multi-stage interceptor to extend its engagement boundaries and to engage with a target at a specified engagement point at the estimated impact time. Hence, we first define the optimization problem to determine multiple coasting times for a multi-stage interceptor, considering both radar detection range and potential homing performance. The analytic formulation for the generalized coasting time determination is derived by introducing the ratio of each coasting time over a reference coasting time. With this coasting ratio, the original optimal coasting time determination problem can be simplified to an alternative problem of finding the optimal ratio and the reference coasting time. In addition, by considering the practical implementation, we present the bi-level approach utilizing the solution of the dual problem to solve the alternative problem. Various case studies are carried out to evaluate the proposed method and show its effectiveness and validity.</description>
    <dc:date>2023-01-25T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91320">
    <title>Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91320</link>
    <description>Title: Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning
Author(s): Na, Hyungho; Seo, Yunkyeong; Moon, Il-Chul
Abstract: In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still require significant learning time and often get trapped in local optima by complex tasks, subsequently failing to discover a goal-reaching policy. To address this, we introduce Efficient episodic Memory Utilization (EMU) for MARL, with two primary objectives: (a) accelerating reinforcement learning by leveraging semantically coherent memory from an episodic buffer and (b) selectively promoting desirable transitions to prevent local convergence. To achieve (a), EMU incorporates a trainable encoder/decoder structure alongside MARL, creating coherent memory embeddings that facilitate exploratory memory recall. To achieve (b), EMU introduces a novel reward structure called episodic incentive based on the desirability of states. This reward improves the TD target in Q-learning and acts as an additional incentive for desirable transitions. We provide theoretical support for the proposed incentive and demonstrate the effectiveness of EMU compared to conventional episodic control. The proposed method is evaluated in StarCraft II and Google Research Football, and empirical results indicate further performance improvement over state-of-the-art methods. Our code is available at: https://github.com/HyunghoNa/EMU.</description>
    <dc:date>2024-05-07T15:00:00Z</dc:date>
  </item>
</rdf:RDF>