IEEE International Conference on Robotics and Automation
Abstract
Touch is a fundamental modality for conveying emotions and intentions in Human–Robot Interaction. However, conventional approaches to touch pattern recognition often lack robustness to inter-user variability, while alternative solutions are frequently bulky or costly. This study proposes a novel feature extraction framework for touch pattern recognition that adapts Mel-Frequency Cepstral Coefficients (MFCC) from speech processing to capacitive touch signals. The proposed method preserves the strengths of MFCC, namely dimensionality reduction and noise robustness, while addressing the physical differences between audio and touch signals by introducing a new frequency reference axis in place of the conventional Mel scale. To evaluate its effectiveness, a representative set of social touch patterns, including gestures that are traditionally difficult to classify, was defined and analyzed. The proposed framework achieves stable recognition across diverse users while reducing feature dimensionality for efficient operation in lightweight models. This efficiency highlights its suitability for real-time robotic interfaces.
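The pipeline the abstract describes, an MFCC-style cepstral feature extractor with the Mel scale swapped for a different frequency reference axis, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the proposed touch-specific axis, so a linearly spaced filterbank is used here as a placeholder, and the frame length, hop size, sampling rate, and coefficient count are assumed values.

```python
import numpy as np

def triangular_filterbank(n_filters, n_fft, fs, fmax):
    """Triangular filters whose centers are spaced linearly in frequency.

    The linear spacing is a placeholder for the paper's proposed
    touch-specific frequency reference axis (not detailed in the abstract),
    which replaces the Mel scale used for audio.
    """
    centers = np.linspace(0.0, fmax, n_filters + 2)
    bins = np.floor((n_fft + 1) * centers / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def dct_ii(x, n_coeffs):
    """DCT-II along the last axis; low-order coefficients form the
    compact feature vector, as in standard MFCC."""
    n = x.shape[-1]
    grid = np.arange(n)
    basis = np.cos(np.pi / n * (grid[None, :] + 0.5)
                   * np.arange(n_coeffs)[:, None])
    return x @ basis.T

def touch_cepstral_features(sig, fs=100.0, frame_len=32, hop=16,
                            n_filters=12, n_coeffs=6):
    """MFCC-style features for a 1-D capacitive touch signal.

    Steps: frame + window -> power spectrum -> filterbank energies
    on the replacement frequency axis -> log -> DCT-II.
    All defaults are illustrative assumptions.
    """
    n_frames = 1 + (len(sig) - frame_len) // hop
    window = np.hanning(frame_len)
    fb = triangular_filterbank(n_filters, frame_len, fs, fs / 2)
    feats = []
    for i in range(n_frames):
        frame = sig[i * hop:i * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        energies = fb @ power
        feats.append(dct_ii(np.log(energies + 1e-10), n_coeffs))
    return np.array(feats)  # shape: (n_frames, n_coeffs)
```

Keeping only the first few DCT coefficients per frame is what yields the low-dimensional feature vectors suited to lightweight, real-time classifiers; the actual axis warping and parameter choices would follow the full paper.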