<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection:</title>
  <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/35" />
  <subtitle />
  <id>https://scholarworks.unist.ac.kr/handle/201301/35</id>
  <updated>2026-05-13T07:06:58Z</updated>
  <dc:date>2026-05-13T07:06:58Z</dc:date>
  <entry>
    <title>Augmentiary: Exploring LLM-Based Interpretive Support for Meaning-Making in Reflective Journaling</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91518" />
    <author>
      <name>Hwang, Seoyeong</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91518</id>
    <updated>2026-04-23T08:48:43Z</updated>
    <published>2026-01-31T15:00:00Z</published>
    <summary type="text">Title: Augmentiary: Exploring LLM-Based Interpretive Support for Meaning-Making in Reflective Journaling
Author(s): Hwang, Seoyeong
Abstract: Journaling is a powerful tool for meaning-making, enabling individuals to interpret life experiences and construct personal narratives. However, this reflective process is cognitively demanding and often difficult to sustain without support. While Large Language Models (LLMs) show promise as writing partners, current human-AI co-writing paradigms primarily focus on productivity and fluency. Applying these productivity-oriented models to intimate, autobiographical writing creates a critical tension: how to provide interpretive support that deepens reflection without displacing the writer’s authentic voice or undermining their interpretive agency.

This thesis presents Augmentiary, an AI-augmented journaling system that embeds interpretive support directly into users’ diary entries. Drawing on a three-day formative probe with eight experienced diarists, the study derives four design goals for LLM-based meaning-making support: providing alternative interpretations for comparison, preserving the writer’s voice, supporting writer-led engagement, and attuning interpretations to personal values and autobiographical continuity. These goals are instantiated in a web-based system that offers on-demand, sentence-level, deliberately incomplete suggestions through two features—Perspective-Expanding and Dot-Connecting—and visual attribution cues whose prominence fades as users edit AI-generated text.

To examine how such interpretive support is experienced in practice, a four-week field deployment is conducted with 25 participants undergoing diverse life transitions. Drawing on semi-structured interviews, the study applies an inductive, constructionist thematic analysis to characterize how Augmentiary shapes journaling practices, reflection processes, and perceived agency.

Findings show that participants treated the AI's interpretive suggestions not as authoritative answers but as comparison points in an inner dialogue, using both agreement and resistance to clarify what their experiences meant. Interpretive support helped participants elaborate on their narratives, revisit past experiences, and act on emerging insights; however, repetitive or misaligned suggestions, as well as the effort required to elaborate on them, sometimes constrained reflection.

This thesis contributes: (1) a conceptual framing of LLMs in reflective journaling as inner dialogic collaborators rather than productivity-oriented co-authors; (2) the design and implementation of Augmentiary, which instantiates four design goals for interpretive, meaning-making support in reflective journaling; (3) empirical insights into how people accept, appropriate, and resist AI's interpretive suggestions in everyday journaling; and (4) design implications for AI-mediated meaning-making systems, emphasizing collaboration that catalyzes inner dialogue, attribution of AI-authored text in personal writing, interface design that leaves room for users' reflective engagement with AI-generated output, and careful allocation of interpretive agency between human and AI.
Major: Department of Design</summary>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Mellow: A Tangible Music Interface Exploring Interaction Elements for Engagement in Music Editing</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91517" />
    <author>
      <name>Kim, Minji</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91517</id>
    <updated>2026-04-23T08:48:43Z</updated>
    <published>2026-01-31T15:00:00Z</published>
    <summary type="text">Title: Mellow: A Tangible Music Interface Exploring Interaction Elements for Engagement in Music Editing
Author(s): Kim, Minji
Abstract: Music listening has increasingly become a passive experience, shaped by algorithm-driven recommendation systems and one-shot AI generation tools that deliver completed outcomes with limited user intervention. Although recent AI-based music systems have reduced technical barriers, they often rely on linguistic prompts, offer limited real-time controllability, and undermine users’ sense of authorship. As a result, many music listeners remain disengaged from the process of actively editing or shaping music. 
This study explores how engagement in music editing can be shaped at the interaction level through tangible user interface (TUI) elements, focusing on embodied participation rather than algorithmic personalization. I present Mellow, a tangible music interface designed to investigate how a set of interaction elements—including tactile input, material properties, and light-based visual feedback—work together to support active participation in music editing while listening. By integrating pressure-based interaction with deformable materials and ambient light feedback into a real-time music editing workflow, Mellow enables users to explore musical aspects such as rhythm, spatiality, and texture without relying on formal musical training or complex graphical interfaces.
To inform the design, I conducted a preliminary qualitative study with everyday music listeners to identify limitations in existing AI-based music systems and to understand factors influencing engagement during music interaction. Based on these insights, I iteratively prototyped Mellow and conducted a controlled user study evaluating four surface materials with distinct physical properties. 
The findings indicate that interaction elements embedded within the tangible interface play a central role in shaping engagement by mediating users’ sense of intervention and control during music editing. In particular, the combination of low-viscosity, moderately elastic materials and immediate visual feedback significantly improved perceived suitability, comfort, enjoyment, and consistency across users. Excessive physical resistance disrupted continuous engagement, whereas appropriately tuned tactile and visual responses supported transparent and sustained music editing experiences. 
This study contributes (1) an exploration of tactile and light-based interaction elements that support engagement in music editing, (2) a tangible music interface case study demonstrating how tactile and light-based feedback function as complementary TUI elements, and (3) empirical design guidelines for integrating materiality and visual feedback in deformable music interfaces.
Major: Department of Design</summary>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>MFCCs-Inspired Feature Extraction for the Design of Capacitive Touch Interfaces in Social Robots</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91516" />
    <author>
      <name>Kim, Ji Soo</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91516</id>
    <updated>2026-04-23T08:48:42Z</updated>
    <published>2026-01-31T15:00:00Z</published>
    <summary type="text">Title: MFCCs-Inspired Feature Extraction for the Design of Capacitive Touch Interfaces in Social Robots
Author(s): Kim, Ji Soo
Abstract: This study addresses the design of a touch-based interface for emotional interaction in social robots, and proposes an on-device AI gesture recognition framework capable of extracting stable pattern features from touch data and processing them in real time within a resource-constrained embedded environment. Touch interaction serves as an intuitive and powerful modality for conveying emotion and intention in human–robot interaction. However, existing touch recognition approaches often require expensive or complex hardware configurations, and the sensor signals tend to vary significantly depending on user characteristics and contact conditions, making consistent pattern recognition and generalization difficult. To overcome these limitations, this study introduces an MFCC-inspired feature extraction method that reinterprets and reconstructs the MFCC technique, widely established in speech recognition, to fit the spectral characteristics of capacitive touch signals. Parameters such as reference frequency, number of filter banks, and number of coefficients are optimized to reflect the low-frequency characteristics of signal data patterns based on a collected touch interaction dataset. This approach reduces variability due to individual differences, such as hand size, skin moisture, and habitual touch behavior, and enables more stable representation of semantic distinctions between touch gestures. Furthermore, this study presents a holistic design methodology integrating a low-cost, low-complexity hardware architecture based on a single-ended capacitive touch sensor, a data management and model training/validation pipeline, and an on-device MLP classifier deployed on an STM32 MCU. The proposed framework was applied to the social robot PO-ME and evaluated using data collected from 15 child participants in a real experimental environment.
Under conditions involving separation of training and unseen users, as well as leave-one-subject-out cross-validation (LOSO-CV), the system demonstrated superior classification performance and generalization compared to related prior work. Notably, the proposed approach achieved comparable or higher performance despite substantially lower hardware cost than studies employing complex modular sensor structures, confirming its practical design efficiency.

In conclusion, this study experimentally demonstrates that robust touch-sensing feature representation transcending user variability is achievable even with capacitive touch interfaces, and presents an integrated design direction encompassing sensor structure, feature extraction, and model deployment for implementing affective touch interaction in social robots. This work provides a foundation for the practical expansion of lightweight emotional interaction technologies in social robots.
Major: Department of Design</summary>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Exploring the Changes of Children’s Reading Activity in Repeated Non-verbal Interaction with a Dog-type Social Robot</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91515" />
    <author>
      <name>Sung, Minjae</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91515</id>
    <updated>2026-04-23T08:48:41Z</updated>
    <published>2026-01-31T15:00:00Z</published>
    <summary type="text">Title: Exploring the Changes of Children’s Reading Activity in Repeated Non-verbal Interaction with a Dog-type Social Robot
Author(s): Sung, Minjae
Abstract: The development of cognitive and emotional abilities in children is significantly influenced by reading activities. Designing effective reading experiences is a critical aspect of child development, and parents and instructors emphasize the importance of fostering autonomous reading habits. However, sustaining self-directed reading without adult intervention remains a pedagogical challenge. Conventional reading programs often lack engaging stimuli and rely heavily on adult scaffolding, which may inadvertently increase children’s performance pressure and reading anxiety. To address these limitations, recent approaches such as animal-assisted ‘Read-to-Dog’ programs and storytelling (content-based) robots have been proposed. Yet, these methods present practical barriers: animal-assisted programs face hygiene, safety, and management issues in public environments, while content-based robots depend on scripted media that require continuous updates and lack flexibility across ages and themes. This study proposes an alternative approach using a non-verbal, content-independent dog-type social robot, PO-ME, designed to support children’s self-directed reading through expressive and responsive non-verbal feedback. The robot communicates emotional expressions solely through gaze and head and tail movements, functioning as a non-judgmental listener that provides emotional comfort without linguistic or evaluative feedback. A six-week field study was conducted with children aged 6–8 in a public library, using a repeated-measures design. Each participant alternated between reading-with-robot and reading-alone sessions across six weekly visits. Quantitative measures included surveys on reading anxiety, interest, and autonomy, while qualitative data consisted of behavioral observations (gaze, touch, facial expressions) and interviews with children, instructors, and librarians.
Results showed that in the early sessions, children’s reported interest increased immediately after reading-with-robot sessions, and reading anxiety tended to decrease compared to reading alone, though longitudinal trends were not statistically significant. Individual differences were notable in sustained engagement and intrinsic motivation. Interviews revealed that most children perceived the robot as a comforting presence and expressed increased willingness to read voluntarily. In contrast, instructors highlighted the limitation of purely non-verbal feedback, emphasizing the necessity of verbal encouragement and adult mediation for long-term habit formation. Nevertheless, librarians and instructors generally agreed on the robot’s potential as a supplementary educational tool rather than a standalone facilitator. Overall, the findings demonstrate that non-verbal robot companionship can function as a low-pressure partner that stabilizes children’s reading focus and alleviates anxiety. The study provides empirical evidence and design guidelines for integrating non-verbal social robots into public reading environments, emphasizing the need for multimodal feedback and human–robot collaboration to sustain motivation and emotional safety in children’s autonomous reading.
Major: Department of Design</summary>
    <dc:date>2026-01-31T15:00:00Z</dc:date>
  </entry>
</feed>

