The rapid advancement of smartwatches has transformed them from simple wearable devices into powerful tools capable of supporting a wide range of applications, including communication, health monitoring, and entertainment. However, their small screen size significantly limits user interaction, requiring innovative input techniques that can overcome these constraints. This thesis introduces novel input systems for smartwatches, including hybrid approaches combining tilt panning and offset sensing, and infrared tomography, which uniquely address usability issues in mobile and multi-session contexts. These contributions go beyond prior studies by focusing on real-world applicability. The research addresses key challenges in improving input expressiveness, usability, and consistency, particularly in mobile contexts and across multiple wearing sessions.

In Chapter III, the first study [1] evaluates the performance of tilt panning and offset sensing input techniques, designed to bypass the limitations of touchscreen interaction. Tilt panning allows users to control the smartwatch interface through wrist tilts, while offset sensing uses touch inputs on the watch’s edge to manipulate the interface. A mobility evaluation was conducted to assess these techniques under both standing and walking conditions. The findings reveal that while tilt panning offers faster input times, it suffers from high error rates (23.89% to 34.22%) during walking due to the instability of wrist movements. Conversely, offset sensing reduces error rates but results in prolonged target selection times (>1000 ms). To address these limitations, a novel hybrid input technique was proposed, combining edge touch input for stabilization with tilt panning for control and selection. This hybrid approach significantly improved target selection times to under 800 ms and lowered error rates to 10.2%, making it a more viable option for smartwatch interaction in dynamic, real-world conditions.
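The hybrid technique described above can be illustrated with a minimal sketch: an edge touch both anchors the wrist and acts as a clutch that gates tilt panning, and releasing the touch confirms the selection. The class name, gain, and deadzone values below are illustrative assumptions, not the thesis's actual implementation.

```python
class HybridInput:
    """Sketch of a hybrid edge-touch + tilt-panning controller.

    While the watch edge is touched, wrist tilt pans the cursor;
    releasing the edge touch confirms the current target.
    Gain and deadzone values are hypothetical.
    """

    def __init__(self, gain=4.0, deadzone_deg=2.0):
        self.gain = gain                  # cursor units per degree of tilt
        self.deadzone_deg = deadzone_deg  # ignore small wrist jitter
        self.cursor = [0.0, 0.0]          # [x, y]
        self.touching = False

    def on_edge_touch(self, down):
        """Edge contact stabilizes the wrist and clutches tilt input.

        Returns True when a release event confirms the selection.
        """
        was_touching = self.touching
        self.touching = down
        return was_touching and not down

    def on_tilt(self, pitch_deg, roll_deg):
        """Apply tilt panning only while the edge is being touched."""
        if not self.touching:
            return
        for axis, angle in enumerate((roll_deg, pitch_deg)):
            if abs(angle) > self.deadzone_deg:
                self.cursor[axis] += self.gain * angle
```

A typical interaction: touch the edge, tilt the wrist to pan onto a target, then lift the finger to select. The clutch also suppresses walking-induced tilt noise whenever the edge is not touched.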
In Chapter IV, the second study [2] investigates the application of infrared tomography for hand gesture recognition, leveraging the ability of IR sensors to detect subtle changes in the wrist’s surface structure. This technique enables single-handed gesture input without the need for a touchscreen, offering a more expressive interaction modality. While within-session gesture recognition accuracy was high (up to 92.1%), the performance dropped sharply (to 22.9%) when the device was removed and re-worn in different sessions, primarily due to variability in sensor placement. To overcome this challenge, an IMU-based calibration process was introduced, allowing for real-time alignment of sensor positions across multiple wearing sessions. This calibration process restored gesture recognition accuracy to 86.7%, highlighting the importance of maintaining consistent sensor placement for reliable performance in multi-session usage. The study demonstrates the potential of IR-based input systems to provide accurate, single-handed gesture recognition on smartwatches, especially when supported by calibration techniques that account for sensor variability.

Together, these studies present a comprehensive framework for improving smartwatch interaction through multi-modal input systems. By combining motion-based, touch-based, and infrared-based techniques, the research significantly advances the capabilities of smartwatches to support expressive, efficient, and consistent input in both mobile and static conditions. The findings contribute to the development of next-generation wearable devices that can better support single-handed interactions, improve usability across varying contexts, and maintain high performance even when subjected to real-world variability, such as sensor misalignment and mobility.
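The re-wearing calibration can be sketched under a simplifying assumption: if a re-worn band is mainly rotated around the wrist, the ring of IR channels can be realigned by a circular shift derived from the IMU's roll offset between sessions. The channel count, angles, and function names below are hypothetical, not the thesis's actual procedure.

```python
# Hypothetical layout: IR channels evenly spaced around the band.
N_CHANNELS = 12
DEG_PER_CHANNEL = 360.0 / N_CHANNELS


def channel_shift(ref_roll_deg, cur_roll_deg):
    """Convert the IMU roll offset between two wearing sessions
    into a whole-channel circular shift."""
    offset = (cur_roll_deg - ref_roll_deg) % 360.0
    return round(offset / DEG_PER_CHANNEL) % N_CHANNELS


def realign(ir_frame, shift):
    """Circularly shift an IR frame (list of channel readings) so its
    channels line up with the reference session's layout."""
    return ir_frame[shift:] + ir_frame[:shift]
```

In use, a reference roll angle would be captured once during enrollment; each later session measures the current roll, computes the shift, and remaps every incoming IR frame before classification, so the recognizer always sees channels in a session-independent order.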
This thesis provides valuable insights for designing future smartwatch interfaces that are more intuitive, accessible, and adaptable to the dynamic nature of everyday use. By addressing the limitations of current interaction techniques, the research opens new pathways for gesture recognition systems, multi-modal input frameworks, and context-aware wearable interfaces, ultimately enhancing the user experience and broadening the applicability of smartwatches across different domains.
Publisher: Ulsan National Institute of Science and Technology
Degree: Doctor
Major: Department of Biomedical Engineering (Human Factors Engineering)