<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholarworks.unist.ac.kr/handle/201301/44">
    <title>Repository Collection:</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/44</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91691" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91620" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/90294" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/89489" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-13T10:46:03Z</dc:date>
  </channel>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91691">
    <title>SAIGE-GPU: accelerating genome- and phenome-wide association studies using GPUs</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91691</link>
    <description>Title: SAIGE-GPU: accelerating genome- and phenome-wide association studies using GPUs
Author(s): Rodriguez, Alex; Kim, Youngdae; Nandi, Tarak Nath; Keat, Karl; Kumar, Rachit; Conery, Mitchell; Bhukar, Rohan; Liu, Molei; Hessington, John; Maheshwari, Ketan; Begoli, Edmon; Tourassi, Georgia; Natarajan, Pradeep; Voight, Benjamin F.; Gaziano, John Michael; Damrauer, Scott M.; Liao, Katherine P.; Zhou, Wei; Huffman, Jennifer E.; Verma, Anurag; Madduri, Ravi K.
Abstract: Motivation: Genome-wide association studies (GWAS) at biobank scale are computationally intensive, especially for admixed populations requiring robust statistical models. SAIGE is a widely used method for generalized linear mixed-model GWAS but is limited by its CPU-based implementation, making phenome-wide association studies impractical for many research groups. Results: We developed SAIGE-GPU, a GPU-accelerated version of SAIGE that replaces CPU-intensive matrix operations with GPU-optimized kernels. The core innovation is distributing genetic relationship matrix calculations across GPUs and communication layers. Applied to 2068 phenotypes from 635 969 participants in the Million Veteran Program, including diverse and admixed populations, SAIGE-GPU achieved a 5-fold speedup in mixed-model fitting on supercomputing infrastructure and cloud platforms. We further optimized the variant association testing step through multi-core and multi-trait parallelization. Deployed on Google Cloud Platform and Azure, the method provided substantial cost and time savings. Availability and implementation: Source code and binaries are available for download at https://github.com/saigegit/SAIGE/tree/SAIGE-GPU-1.3.3. A code snapshot is archived at Zenodo for reproducibility (DOI: 10.5281/zenodo.17642591). SAIGE-GPU is available in a containerized format for use across HPC and cloud environments; it is implemented in R/C++ and runs on Linux systems.</description>
    <dc:date>2026-02-28T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91620">
    <title>Multi-Agent Reinforcement Learning Considering Agent Priority for Weapon-Target Assignment</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91620</link>
    <description>Title: Multi-Agent Reinforcement Learning Considering Agent Priority for Weapon-Target Assignment
Author(s): Na, Hyungho; Ahn, Jaemyung; Moon, Il-Chul
Abstract: This paper presents a novel multi-agent reinforcement learning (MARL) approach that incorporates agent priorities to address weapon-target assignment (WTA) with constraints, such as heterogeneous engagement time windows. The proposed approach begins by defining the decentralized Markov decision process (Dec-MDP) formulation for WTA involving heterogeneous, multiple agents. Our approach employs a hierarchical structure for MARL training, comprising an agent selector and a target selector, which sequentially determine the order of agents for assignment, i.e., preferred shooter selection and target selection. Through experimental designs, the proposed model demonstrates its ability to generate high-quality assignment plans within a short execution time. The model demonstrates superior performance across various scenarios, achieving the lowest threat survivability with a clear advantage over other baseline methods, especially in tightly constrained scenarios. Ablation studies and qualitative analyses are conducted to illustrate the influence of key components on performance, and these qualitative studies reveal the learning mechanism in agent and target selection. Additionally, transferability tests confirm the model's applicability to unseen problem cases, where training and testing environments are different, indicating its potential for real-world adaptation in various scenarios.</description>
    <dc:date>2026-03-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/90294">
    <title>Predicting non-recurrent congestion impact: A pattern-based approach for speed drop ratio prediction using weighted K-nearest neighbors</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/90294</link>
    <description>Title: Predicting non-recurrent congestion impact: A pattern-based approach for speed drop ratio prediction using weighted K-nearest neighbors
Author(s): Oh, YongKyung; Kwak, Jiin; Kim, Sungil
Abstract: Traffic congestion remains a major challenge in developed countries, disrupting mobility and affecting economic and social activities. Among its various types, non-recurrent congestion, caused by unexpected events such as accidents, maintenance, or debris, remains difficult to predict due to its irregular spatiotemporal dynamics. While existing models effectively forecast recurrent traffic, they are less applicable to non-recurrent events characterized by abrupt and anomalous patterns. This study presents a pattern-based framework that integrates the weighted K-nearest neighbor (WK-NN) algorithm with dynamic time warping (DTW) for similarity-based prediction of non-recurrent congestion impact. The framework estimates speed drop ratios (SDRs) and propagates the predicted effects to neighboring road segments, enabling a network-level assessment of disruption. By identifying historical patterns most similar to the current incident, the proposed approach enhances interpretability and traceability for operational use. We evaluate the method using 2780 real-world traffic incident records combining data from the Korean National Police Agency and NAVER Corporation. Experimental results demonstrate that the proposed framework achieves consistent and competitive performance compared with benchmark machine learning and deep learning models. These findings suggest the framework's potential for supporting practical decision-making in traffic control centers through timely and interpretable congestion impact forecasts.</description>
    <dc:date>2026-02-28T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/89489">
    <title>Point-ITR: Task-Oriented Importance Sampling for Large-Scale 3D Point Clouds in Manufacturing</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/89489</link>
    <description>Title: Point-ITR: Task-Oriented Importance Sampling for Large-Scale 3D Point Clouds in Manufacturing
Author(s): Ma, Yichen; Biehler, Michael; Lim, Chiehyeon; Shi, Jianjun
Abstract: The increasing adoption of advanced three-dimensional (3D) scanning technologies has made large-scale point clouds containing millions of 3D measurement points standard in applications like manufacturing. However, processing immense amounts of 3D data imposes significant computational loads, often resulting in discarded critical information and suboptimal outcomes for downstream tasks. This paper introduces Point-ITR, a task-oriented sampling method tailored for regression tasks, which selectively retains the most informative points within large-scale point clouds. Specifically, we propose a gradient-based importance sampling framework for intra-sample selection (selecting points within a 3D point cloud) and a feature-based weighting scheme for inter-sample selection (selecting among different 3D point cloud sub-samples). Additionally, we introduce an iterative random sampling (ItrRS) module for preprocessing and an Offset Residual Block that utilizes a reference design model to learn structural features and accelerate both training and testing, which allows a simple fully connected network to process large-scale point clouds. Our approach improves prediction accuracy across downstream tasks while ensuring that the rich details captured are fully utilized for interpretation, offering a more effective and efficient solution. We validate our methodology through simulation studies and real-world case applications in additive manufacturing, demonstrating its robustness and practical applicability.</description>
    <dc:date>2025-11-30T15:00:00Z</dc:date>
  </item>
</rdf:RDF>