
Full metadata record

DC Field Value Language
dc.contributor.advisor Lee, Hui Sung -
dc.contributor.author Kim, Ji Soo -
dc.date.accessioned 2026-04-23T17:48:42Z -
dc.date.available 2026-04-23T17:48:42Z -
dc.date.issued 2026-02 -
dc.description.abstract This study addresses the design of a touch-based interface for emotional interaction in social robots, and proposes an on-device AI gesture recognition framework capable of extracting stable pattern features from touch data and processing them in real time within a resource-constrained embedded environment. Touch interaction serves as an intuitive and powerful modality for conveying emotion and intention in human–robot interaction. However, existing touch recognition approaches often require expensive or complex hardware configurations, and the sensor signals tend to vary significantly depending on user characteristics and contact conditions, making consistent pattern recognition and generalization difficult. To overcome these limitations, this study introduces an MFCC-inspired feature extraction method that reinterprets and reconstructs the MFCC technique, widely established in speech recognition, to fit the spectral characteristics of capacitive touch signals. Parameters such as reference frequency, number of filter banks, and number of coefficients are optimized to reflect the low-frequency characteristics of signal data patterns based on a collected touch interaction dataset. This approach reduces variability due to individual differences, such as hand size, skin moisture, and habitual touch behavior, and enables more stable representation of semantic distinctions between touch gestures. Furthermore, this study presents a holistic design methodology integrating a low-cost, low-complexity hardware architecture based on a single-ended capacitive touch sensor, a data management and model training/validation pipeline, and an on-device MLP classifier deployed on an STM32 MCU. The proposed framework was applied to the social robot PO-ME and evaluated using data collected from 15 child participants in a real experimental environment.
Under conditions involving separation of training and unseen users, as well as LOSO-CV, the system demonstrated superior classification performance and generalization compared to related prior work. Notably, the proposed approach achieved comparable or higher performance despite substantially lower hardware cost compared to studies employing complex modular sensor structures, confirming its practical design efficiency. In conclusion, this study experimentally demonstrates that robust touch sensing feature representation transcending user variability is achievable even with capacitive touch interfaces, and presents an integrated design direction encompassing sensor structure, feature extraction, and model deployment for implementing affective touch interaction in social robots. This work provides a foundation for the practical expansion of lightweight emotional interaction technologies in social robots. -
dc.description.degree Master -
dc.description Department of Design -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/91516 -
dc.identifier.uri http://unist.dcollection.net/common/orgView/200000965703 -
dc.language ENG -
dc.publisher Ulsan National Institute of Science and Technology -
dc.subject Cementless, geopolymer, by-product -
dc.title MFCCs-Inspired Feature Extraction for the Design of Capacitive Touch Interfaces in Social Robots -
dc.type Thesis -
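The abstract describes an MFCC-inspired pipeline adapted to capacitive touch signals: frame the signal, take the power spectrum, apply a filter bank warped toward low frequencies, then take the log and a DCT to obtain a compact, decorrelated feature vector. The sketch below illustrates that general pipeline only; all parameter values (sampling rate, filter-bank size, coefficient count, frame length) and the `touch_mfcc` function name are illustrative assumptions, not the thesis's tuned configuration.

```python
import numpy as np

def touch_mfcc(signal, sr=200, n_filters=12, n_coeffs=6, frame_len=64, hop=32):
    """MFCC-style coefficients for a 1-D capacitive touch signal.

    All parameters here are hypothetical defaults; the thesis optimizes the
    reference frequency, filter-bank count, and coefficient count against a
    collected touch-interaction dataset.
    """
    # 1. Frame the signal with a Hamming window
    frames = np.asarray([
        signal[s:s + frame_len] * np.hamming(frame_len)
        for s in range(0, len(signal) - frame_len + 1, hop)
    ])

    # 2. Power spectrum per frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    n_bins = power.shape[1]

    # 3. Triangular filter bank on a compressed (mel-like) frequency scale,
    #    which packs filters densely at the low frequencies where touch
    #    signal energy concentrates
    mel_max = 2595.0 * np.log10(1.0 + (sr / 2) / 700.0)
    hz_pts = 700.0 * (10 ** (np.linspace(0, mel_max, n_filters + 2) / 2595.0) - 1)
    bin_pts = np.floor((n_bins - 1) * hz_pts / (sr / 2)).astype(int)
    fbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        lo, ce, hi = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        for b in range(lo, ce):                     # rising slope
            fbank[i, b] = (b - lo) / max(ce - lo, 1)
        for b in range(ce, hi):                     # falling slope
            fbank[i, b] = (hi - b) / max(hi - ce, 1)

    # 4. Log filter-bank energies, then DCT-II to decorrelate
    log_e = np.log(power @ fbank.T + 1e-10)
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * k + 1) / (2.0 * n_filters)))
    return log_e @ dct.T  # shape: (n_frames, n_coeffs)
```

The resulting per-frame coefficient vectors would then feed a lightweight classifier such as the MLP the abstract describes; a comparably small fixed-point version of this transform is what would run on the STM32 in an on-device deployment.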

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.