Emotion Engine with Dynamic Characteristic Changes for Multimodal Emotion Expression in Social Robots

Author(s)
Park, Haeun
Advisor
Lee, Hui Sung
Issued Date
2025-08
URI
https://scholarworks.unist.ac.kr/handle/201301/88313
http://unist.dcollection.net/common/orgView/200000903277
Abstract
This thesis proposes and validates a novel multimodal emotion engine that enables social robots to express emotions dynamically and adaptively through facial expressions, motion, and sound. Traditional emotion expression systems in social robots often rely on rule-based logic and discrete transitions, limiting their ability to reflect the fluid, context-sensitive nature of human emotional expression. To overcome these limitations, the proposed system integrates a continuous affective state model grounded in a Linear Dynamic Affect-Expression Model (LDAEM), allowing robots to respond to sensor stimuli with temporally sensitive expressions.

The engine incorporates multiple sensory inputs, including facial recognition and touch, and transforms them into a dynamic internal emotion vector within a defined affective space. This emotion vector drives multimodal outputs in real time through three channels: (1) facial expressions rendered with customizable control points (CPs) co-designed by users; (2) motion trajectories derived from user-demonstrated miniature manipulations to reflect expected emotional gestures; and (3) synthesized sound patterns mimicking emotional vocalizations, recorded and processed using Sonic Pi. These multimodal expressions are modulated by dynamic parameters such as the damping ratio, which governs the expressiveness and liveliness of the output.

A series of three structured user studies was conducted to investigate the key research questions (RQs). For RQ1, which concerns how the system resolves conflicting emotional stimuli across modalities (e.g., positive touch versus negative facial cues), the findings reveal that users tend to perceive robot emotions as more aligned with negative stimuli, suggesting that priority weighting is applied to negative stimuli in the emotion integration process. For RQ2, the system's dynamic characteristics were modulated using various damping ratios to assess perceived liveliness and naturalness. The results indicate that while increased dynamic expressiveness enhances liveliness, naturalness peaks at moderate dynamic levels for most emotions, with surprise being an exception; each emotion exhibited distinct optimal dynamic settings, underscoring the need for emotion-specific modulation strategies. RQ3 explored the smoothness and believability of emotion transitions: a direct transition model (Proposed Model), which connects the current emotional CPs to the next emotional state, was compared against a state-reset model (Baseline Model), which first returns the robot to a neutral expression before transitioning to the new emotion. Participants overwhelmingly rated the Proposed Model as more natural, continuous, and emotionally believable, supporting its effectiveness in real-time interactions involving abrupt emotional shifts.

The proposed emotion engine advances the state of the art in social robot design by enabling expressive, context-sensitive, and user-personalized emotion expression. Through multimodal fusion, dynamic control, and user-in-the-loop customization, this research provides a robust framework for designing emotionally responsive robots that can adaptively engage in natural human-robot interaction.
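As an illustration of the dynamics the abstract describes, the sketch below models one expression channel as a damped second-order system driven toward an emotion target, with the damping ratio controlling how lively or settled the motion appears. All names, equations, and parameter values here are assumptions chosen for illustration, not the thesis's actual LDAEM implementation; the comments also contrast the direct-transition (Proposed Model) and reset-to-neutral (Baseline Model) strategies described for RQ3.

```python
import numpy as np

def step_expression(x, v, target, zeta=0.7, omega_n=6.0, dt=0.02):
    """Advance one expression channel (e.g., a facial CP vector) one tick.

    x, v    : current expression vector and its velocity
    target  : emotion-driven target expression vector
    zeta    : damping ratio (hypothetical values); low values overshoot
              and oscillate (lively), high values settle smoothly (calm)
    omega_n : natural frequency in rad/s, sets response speed
    dt      : control-loop period in seconds
    """
    # Second-order tracking dynamics:
    #   x'' + 2*zeta*omega_n*x' + omega_n^2*x = omega_n^2*target
    a = omega_n**2 * (target - x) - 2.0 * zeta * omega_n * v
    v = v + a * dt           # semi-implicit Euler integration
    x = x + v * dt
    return x, v

# Direct transition (Proposed Model): when the emotion changes mid-motion,
# simply swap the target while keeping x and v, so the expression flows
# continuously from its current state.
# State-reset transition (Baseline Model): first drive x back to a neutral
# target (e.g., zeros), then retarget to the new emotion.
x = np.zeros(2)              # e.g., a 2-D point in the affective space
v = np.zeros(2)
for _ in range(100):
    x, v = step_expression(x, v, target=np.array([0.8, 0.4]))
```

Under these assumptions, sweeping `zeta` reproduces the qualitative trade-off the abstract reports: lower damping yields livelier but potentially less natural motion, while moderate damping balances the two.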
Publisher
Ulsan National Institute of Science and Technology
Degree
Doctor
Major
Graduate School of Creative Design Engineering
