<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholarworks.unist.ac.kr/handle/201301/44">
    <title>Repository Collection:</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/44</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/90294" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/89489" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/89433" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/89386" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-08T21:20:11Z</dc:date>
  </channel>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/90294">
    <title>Predicting non-recurrent congestion impact: A pattern-based approach for speed drop ratio prediction using weighted K-nearest neighbors</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/90294</link>
    <description>Title: Predicting non-recurrent congestion impact: A pattern-based approach for speed drop ratio prediction using weighted K-nearest neighbors
Author(s): Oh, YongKyung; Kwak, Jiin; Kim, Sungil
Abstract: Traffic congestion remains a major challenge in developed countries, disrupting mobility and affecting economic and social activities. Among its various types, non-recurrent congestion, caused by unexpected events such as accidents, maintenance, or debris, is difficult to predict due to its irregular spatiotemporal dynamics. While existing models effectively forecast recurrent traffic, they are less applicable to non-recurrent events characterized by abrupt and anomalous patterns. This study presents a pattern-based framework that integrates the weighted K-nearest neighbor (WK-NN) algorithm with dynamic time warping (DTW) for similarity-based prediction of non-recurrent congestion impact. The framework estimates speed drop ratios (SDRs) and propagates the predicted effects to neighboring road segments, enabling a network-level assessment of disruption. By identifying historical patterns most similar to the current incident, the proposed approach enhances interpretability and traceability for operational use. We evaluate the method using 2780 real-world traffic incident records combining data from the Korean National Police Agency and NAVER Corporation. Experimental results demonstrate that the proposed framework achieves consistent and competitive performance compared with benchmark machine learning and deep learning models. These findings suggest the framework's potential for supporting practical decision-making in traffic control centers through timely and interpretable congestion impact forecasts.</description>
    <dc:date>2026-02-28T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/89489">
    <title>Point-ITR: Task-Oriented Importance Sampling for Large-Scale 3D Point Clouds in Manufacturing</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/89489</link>
    <description>Title: Point-ITR: Task-Oriented Importance Sampling for Large-Scale 3D Point Clouds in Manufacturing
Author(s): Ma, Yichen; Biehler, Michael; Lim, Chiehyeon; Shi, Jianjun
Abstract: The increasing adoption of advanced three-dimensional (3D) scanning technologies has made large-scale point clouds containing millions of 3D measurement points standard in applications like manufacturing. However, processing immense amounts of 3D data imposes significant computational loads, often resulting in discarded critical information and suboptimal outcomes for downstream tasks. This paper introduces Point-ITR, a task-oriented sampling method tailored for regression tasks, which selectively retains the most informative points within large-scale point clouds. Specifically, we propose a gradient-based importance sampling framework for intra-sample selection (selecting points within a 3D point cloud) and a feature-based weighting scheme for inter-sample selection (selecting among different 3D point cloud sub-samples). Additionally, we introduce an iterative random sampling (ItrRS) module for preprocessing and an Offset Residual Block that utilizes a reference design model to learn structural features and accelerate both training and testing, which allows a simple fully connected network to process large-scale point clouds. Our approach improves prediction accuracy across downstream tasks while ensuring that the rich details captured are fully utilized for interpretation, offering a more effective and efficient solution. We validate our methodology through simulation studies and real-world case applications in additive manufacturing, demonstrating its robustness and practical applicability.</description>
    <dc:date>2025-11-30T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/89433">
    <title>Customer-centric service benchmarking using online reviews: a case study of Bangkok hotels</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/89433</link>
    <description>Title: Customer-centric service benchmarking using online reviews: a case study of Bangkok hotels
Author(s): Kim, Juram; Lim, Chiehyeon
Abstract: Benchmarking in service industries has received considerable attention; however, traditional approaches predominantly rely on customer surveys and financial or operational metrics. These methods are often time-consuming, resource-intensive, and limited in capturing unexpected yet impactful service attributes derived from actual customer experience. To address these limitations, this study proposes a novel customer-centric, data-driven framework that transforms the traditionally qualitative and manual process of competitor definition, performance diagnosis, and priority setting into an objective, systematic pipeline using large-scale online customer reviews. The framework consists of four key components: (1) topic modeling to identify service attributes from unstructured review texts, (2) index and sentiment analysis to assess the importance of each attribute and evaluate performance, (3) k-means clustering and TOPSIS to identify relevant competitors and best practices, and (4) importance-performance competitor analysis (IPCA) to develop targeted strategic actions. A case study using 26,934 reviews from 26 hotels in Bangkok demonstrates the practical utility and scalability of the proposed framework. This research contributes to marketing analytics by offering a systematic, customer-perception-driven alternative to traditional benchmarking, supporting continuous service improvement and competitive positioning in dynamic digital markets, as demonstrated through its application to Bangkok’s hotel industry. © The Author(s), under exclusive licence to Springer Nature Limited 2025.</description>
    <dc:date>2025-08-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/89386">
    <title>Clustering and Similarity Learning in Financial Markets: A Tutorial for the Practitioners</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/89386</link>
    <description>Title: Clustering and Similarity Learning in Financial Markets: A Tutorial for the Practitioners
Author(s): Mehta, Dhagash; Thompson, John R. J.; Lee, Hoyoung; Lee, Yongjae
Abstract: Clustering and similarity learning are increasingly indispensable for structuring heterogeneous financial data and supporting real-world decision-making. Traditional heuristics such as industry codes, static style boxes, or return correlations offer only coarse and rigid notions of peer groups. Recent advances in metric learning, graph methods, and large language models now make it possible to build adaptive neighborhoods of securities, funds, companies, and investors that align more closely with actual risk, liquidity, and thematic exposures. This tutorial synthesizes these methodological developments and demonstrates their use across major asset classes. Case studies show how supervised proximities improve bond substitution, how fund similarity systems reconcile category reproducibility with outlier detection, how multimodal pipelines refine company comparables for valuation and strategy, and how investor clustering enhances personalization and “know your client” (KYC) analytics. We emphasize modeling choices that make clustering and similarity auditable and robust under regime shifts. We also outline evaluation protocols, such as neighborhood stability, substitution fidelity, and segment utility, that align with investment, compliance, and fiduciary objectives. Overall, the central message for practitioners is pragmatic: Similarity systems have moved beyond experimental prototypes and now stand as deployable techniques within real investment workflows.</description>
    <dc:date>2025-10-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

