File Download

There are no files associated with this item.



Full metadata record

DC Field Value Language
dc.contributor.advisor Lee, Kyungho -
dc.contributor.author Shin, Yonghwan -
dc.date.accessioned 2026-03-26T22:13:21Z -
dc.date.available 2026-03-26T22:13:21Z -
dc.date.issued 2026-02 -
dc.description.abstract Head-mounted displays (HMDs) are increasingly used in settings where users' hands are occupied, yet hands-free interaction remains both fragile under mobility and limited in expressiveness. Unimodal head and eye pointing degrade during walking, and common point-and-dwell techniques support only a narrow command vocabulary. This thesis investigates how combining head and eye input into a multimodal channel can improve robustness during walking while extending hands-free input beyond point-and-select. Across its research questions, the thesis makes three primary contributions. First, it empirically characterizes mobility-induced degradation in unimodal head and eye pointing on an AR HMD, showing that walking substantially increases errors, selection times, and workload relative to stationary use, and establishing the severity and structure of the problem that robust hands-free techniques must address. Second, it introduces StabilizAR, an implicit head–eye technique that stabilizes head-controlled pointing during walking by conditioning control on gaze information through velocity capping and temporal evidence accumulation. In the controlled evaluation, StabilizAR increased selection success from approximately 6% with a stock head cursor to over 90%, while also improving speed and reducing workload. In a realistic deployment within a short-form video browsing application used on a building-scale walking route, StabilizAR preserved high performance and enabled users to maintain natural walking behavior, including fewer stops and lower exertion. Third, it develops an explicit multimodal technique that separates gaze-based target activation from head-gesture confirmation, demonstrating that gaze-activated head gestures can provide multiple directional commands per target with practical speed and comfort when trigger and threshold parameters are selected to match application-level error tolerance and false-activation risk.
Collectively, the thesis shows that implicit head–eye integration can convert fragile head-only pointing into robust hands-free selection during mobility, and that explicit gaze-plus-head integration provides a practical mechanism for expressive hands-free command input in everyday HMD use. These findings yield generalizable design principles for future XR systems that treat natural head–eye coordination as an intent signal and allocate complementary roles to gaze and head movement to balance responsiveness, robustness, and safety. -
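The abstract names two mechanisms behind StabilizAR: velocity capping conditioned on gaze, and temporal evidence accumulation for selection. The sketch below illustrates one plausible reading of those two ideas; all function names, thresholds, and update rules are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis's implementation) of two mechanisms
# the abstract attributes to StabilizAR: gaze-conditioned velocity capping
# and temporal evidence accumulation. Units and thresholds are assumed.
import math

def cap_velocity(prev, raw, gaze, dt, base_cap=2.0, tight_cap=0.3):
    """Limit head-cursor speed; permit faster motion only when gaze
    agrees with where the head cursor is going (hypothetical rule)."""
    dx, dy = raw[0] - prev[0], raw[1] - prev[1]
    speed = math.hypot(dx, dy) / dt
    # If the user's gaze is near the raw head target, trust the motion
    # as intentional and apply the looser cap; otherwise clamp hard.
    gaze_dist = math.hypot(raw[0] - gaze[0], raw[1] - gaze[1])
    cap = base_cap if gaze_dist < 0.1 else tight_cap
    if speed > cap:
        scale = cap / speed
        dx, dy = dx * scale, dy * scale
    return (prev[0] + dx, prev[1] + dy)

def accumulate_evidence(evidence, on_target, dt,
                        gain=1.0, decay=2.0, threshold=0.8):
    """Integrate noisy on-target samples over time; report selection
    once accumulated evidence crosses the threshold."""
    evidence += (gain if on_target else -decay) * dt
    evidence = max(0.0, min(1.0, evidence))  # clamp to [0, 1]
    return evidence, evidence >= threshold
```

Under this reading, walking-induced head jitter is suppressed in two ways: spurious fast cursor motion is clamped unless gaze corroborates it, and momentary slips off a target do not cancel a selection because evidence decays gradually rather than resetting.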
dc.description.degree Doctor -
dc.description Department of Biomedical Engineering -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/90893 -
dc.identifier.uri http://unist.dcollection.net/common/orgView/200000965863 -
dc.language ENG -
dc.publisher Ulsan National Institute of Science and Technology -
dc.subject Suspended structure, Carbon-MEMS, Thermal conductivity detector, Gas sensor, IR sensor, Carbon backbone, Thermopile, Carbon IR absorber -
dc.title Bridging Gaze and Head: Multimodal Interaction Techniques for HMD -
dc.type Thesis -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.