Related Researcher

Oakley, Ian
Interactions Lab.


Full metadata record

DC Field Value Language
dc.citation.conferencePlace DK -
dc.citation.conferencePlace Copenhagen -
dc.citation.endPage 531 -
dc.citation.startPage 519 -
dc.citation.title International Conference on Human-Computer Interaction -
dc.contributor.author Kim, H.M. -
dc.contributor.author Oakley, Ian -
dc.date.accessioned 2024-01-31T18:37:14Z -
dc.date.available 2024-01-31T18:37:14Z -
dc.date.created 2023-12-01 -
dc.date.issued 2023-07-23 -
dc.description.abstract Smartwatches now enable a wide range of end-user applications. However, despite their increasingly sophisticated capabilities, the bandwidth of the user input they support remains strongly limited by the small size of their touch screens. Numerous techniques have been proposed to improve this situation by integrating novel sensing systems and input modalities. A prominent approach has been to detect gestures made by the hand wearing the watch with sensors capable of imaging either the associated distortions to the surface of the wrist or changes to the wrist’s internal structures. While the performance of such systems is promising (e.g., gesture accuracy of up to 93.3%), most work to date examines performance within a single session. As such, it fails to account for the variability in measurements of wrist shapes and/or structures that can result from minor changes in sensor placement each time a device is donned. To explore the impact of this type of natural variability, we conducted a study using a watch strap prototype implementing infrared tomography to image the surface of the wrist during hand gesture production. While recognition performance during a single session of wearing this device was high (92.1%), it dropped substantially (to 22.9%) when the device was removed and re-worn between training and testing. To alleviate this problem, we explore whether calibration processes that seek to maximize the consistency of sensor placements can yield improved performance. A second study achieves this via IMU-based measurement of sensor placement similarity between sessions and shows greatly improved inter-session performance (up to 86.7%). Based on this result, we suggest that IMU-based calibration of sensor placement can improve the real-world performance of gesture input systems based on wrist imaging techniques. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
dc.identifier.bibliographicCitation International Conference on Human-Computer Interaction, pp.519 - 531 -
dc.identifier.doi 10.1007/978-3-031-35596-7_33 -
dc.identifier.issn 0302-9743 -
dc.identifier.scopusid 2-s2.0-85171481766 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/74636 -
dc.language English -
dc.publisher Springer Science and Business Media Deutschland GmbH -
dc.title Improving Hand Gesture Recognition via Infrared Tomography of the Wrist over Multiple Wearing Sessions -
dc.type Conference Paper -
dc.date.conferenceDate 2023-07-23 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.