<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholarworks.unist.ac.kr/handle/201301/83">
    <title>Repository Collection:</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/83</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91065" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91064" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91063" />
        <rdf:li rdf:resource="https://scholarworks.unist.ac.kr/handle/201301/91062" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-08T00:28:56Z</dc:date>
  </channel>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91065">
    <title>Collective Critics for Creative Story Generation</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91065</link>
    <description>Title: Collective Critics for Creative Story Generation
Author(s): Bae, Minwook
Abstract: Generating long, coherent narratives with several thousand words remains a difficult challenge for Large Language Models (LLMs). Prior studies have attempted to alleviate this issue by introducing frameworks that first construct a story plan and then generate the final story based on that plan. However, most of these approaches concentrate primarily on preserving narrative coherence, often neglecting two crucial elements for engaging storytelling: the creativity embedded in the planning process and the expressiveness of the final narrative. In this work, we present CRITICS (Collective Critics for Creative Story Generation), a framework designed to address these limitations through a two-stage process: CRPLAN for plan refinement and CRTEXT for story generation. Our framework incorporates a collective critique mechanism in which a group of LLM-based critics and a designated leader collaboratively refine both the plan and the story across multiple iterative rounds. Through extensive human evaluation, we show that CRITICS markedly improves story creativity and reader engagement, while still preserving strong narrative coherence. Moreover, because the framework is structured around a collaborative critique workflow, human writers can seamlessly participate in any role (leader, critic, or author), enabling dynamic and interactive human–AI co-creation of long-form stories.
Major: Graduate School of Artificial Intelligence (Artificial Intelligence)</description>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91064">
    <title>A Cycle-Consistent Generative Model for Bidirectional Cross-Modal Translation between Industrial Tabular and Time-Series Data in an Ultraviolet Lamp Pinch-Sealing Process</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91064</link>
    <description>Title: A Cycle-Consistent Generative Model for Bidirectional Cross-Modal Translation between Industrial Tabular and Time-Series Data in an Ultraviolet Lamp Pinch-Sealing Process
Author(s): Choi, Jihyeok
Abstract: Quality inspection in the ultraviolet (UV)-lamp pinch-sealing process is challenging because defects can arise from subtle deviations in process conditions and transient equipment responses during sealing. Therefore, modern production lines collect tabular process variables together with time-series sensor signals, which enables multimodal learning for defect detection and predictive maintenance. However, in real production lines, missing modalities frequently occur due to sensor failures, logging errors, or misalignment between systems, which severely affects the applicability of multimodal learning. Existing cross-modal generative models mainly target semantically aligned pairs such as image-text or image-image pairs, and fail to capture the sequential structure and information asymmetry between tabular and time-series data in manufacturing. Therefore, this work proposes a practical cycle-consistent generative framework tailored to manufacturing environments that reconstructs missing tabular or time-series modalities and enables robust multimodal learning. First, a cycle-consistent generative adversarial network (CycleGAN)-based bidirectional translation model is designed between tabular data and time-series latent representations extracted by a convolutional autoencoder (CAE), enabling training with both paired and unpaired samples. Second, a set-guided mixture-of-experts (MoE) generator is introduced on the tabular side, clustering process conditions and assigning specialized experts to improve the fidelity of generated tabular variables. Third, a process-aware discriminator is proposed to incorporate inter-modal correlations and process labels during training, thereby enabling the generator to produce samples consistent with underlying manufacturing conditions even when labels are unavailable at inference.
Fourth, a cycle-consistent distillation (CyDi) module regularizes the tabular-to-time-series mapping using intermediate features from the time-series-to-tabular mapping, mitigating the information imbalance between modalities and enhancing time-series generation quality. The effectiveness of the proposed framework is validated on a real manufacturing dataset by training downstream defect classifiers on imputed multimodal data. Experimental results show that, when using the data generated by the proposed method, various multimodal learning models achieve average performance gains of 9.15 percentage points (pp) in the area under the precision–recall curve (AUPRC) and 9.83 pp in the F1 score. The proposed method can be flexibly applied even in settings with incomplete data and limited paired samples, and it is designed to remain applicable when labels are unavailable and some modalities are missing, indicating strong practical potential for deployment in real industrial environments.
Major: Graduate School of Artificial Intelligence (Artificial Intelligence)</description>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91063">
    <title>Robust Robot Task Planning Through Failure Detection from Multi-View Scene Graphs</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91063</link>
    <description>Title: Robust Robot Task Planning Through Failure Detection from Multi-View Scene Graphs
Author(s): Chong, Haechan
Abstract: The integration of Large Language Models (LLMs) and Vision-Language Models (VLMs) into robotic task planners for failure detection has shown considerable promise, primarily due to their advanced semantic reasoning. However, a significant limitation of these models is that they typically operate under the assumption of comprehensive environmental comprehension. This assumption proves problematic in complex scenarios where an explicit model of object relationships and scene structure is absent, often resulting in unreliable planning and execution. To address this deficiency, this research introduces a novel framework grounded in multi-view scene understanding. The proposed method starts by capturing comprehensive environmental data via multi-view images. From this visual input, local 2D scene graphs are generated, each encoding object identities and their spatial or semantic relations. Subsequently, a graph neural network model is employed to aggregate and merge these disparate local 2D scene graphs into a single cohesive unified scene graph. This graph serves as the central data structure for verifying the success of task execution and diagnosing the root causes of failures. The failure detection mechanism operates by comparing the generated unified scene graph against an expected scene graph produced by the LLM during the initial planning phase of each sub-task. Discrepancies between these two graphs are used to identify and reason about the failure. This diagnostic information is then transmitted to the LLM, which uses the feedback to generate an effective revised plan. This closed-loop process significantly enhances adaptability and mitigates the occurrence of repetitive execution errors. The efficacy and applicability of the proposed framework are validated through empirical evaluation on five real-world benchmark tasks. In addition, a comparative analysis of the failure detection and reasoning is conducted against current methods.
The results demonstrate the superior performance of our approach, highlighting the distinct advantages of integrating multi-view perception with explicit graph-based relational reasoning.
Major: Graduate School of Artificial Intelligence (Artificial Intelligence)</description>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholarworks.unist.ac.kr/handle/201301/91062">
    <title>Temporal-Aware Synthetic Data Generation in Multi-Relational Tabular Data</title>
    <link>https://scholarworks.unist.ac.kr/handle/201301/91062</link>
    <description>Title: Temporal-Aware Synthetic Data Generation in Multi-Relational Tabular Data
Author(s): Jung, Yeseong
Abstract: Multi-relational time-series tabular data is widely used in fields such as e-commerce, finance, and healthcare. Recent advances in synthetic tabular data generation techniques have improved statistical similarity and relational fidelity, but previous methods struggle to preserve temporal transition patterns within child sequences. This paper proposes a framework that improves the temporal fidelity of multi-relational time-series tabular data synthesis by integrating a Temporal Representation Encoder (TRE) and a diffusion-based generative model. TRE uses a transformer to learn embeddings that capture both within-row field interactions and temporal dependencies within a sequence through masked language modeling. By performing clustering in this temporal-aware embedding space, rather than in the raw feature space, we obtain cluster labels that meaningfully group sequences with similar temporal patterns. These cluster labels can be easily integrated into various diffusion models. Furthermore, we introduce new metrics for temporal evaluation, including comparisons of transition matrices using L1 distance and Jensen-Shannon divergence, and lag-k difference analysis of numerical features, an aspect overlooked in previous work. We evaluate our approach by integrating TRE with two state-of-the-art diffusion models, ClavaDDPM and TabDiT. We conduct comprehensive experiments on two toy examples and three real-world datasets (Rossmann, Airbnb, and Walmart). On the Rossmann dataset, ClavaDDPM + TRE captures both short-term differences and weekly patterns better than the vanilla model. Similar improvements are observed across the other datasets. These findings demonstrate that clustering within the learned temporal embedding space provides a more effective conditioning mechanism for preserving sequential dynamics in synthetic multi-relational time-series tabular data, suggesting new directions for generating time-aware synthetic data.
Major: Graduate School of Artificial Intelligence (Artificial Intelligence)</description>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

