| dc.description.abstract |
Head-mounted displays (HMDs) are increasingly used in settings where users’ hands are occupied, yet hands-free interaction remains both fragile under mobility and limited in expressiveness. Unimodal head and eye pointing degrade during walking, and common point-and-dwell techniques support only a narrow command vocabulary. This thesis investigates how combining head and eye input into a multimodal channel can improve robustness during walking while extending hands-free input beyond point-and-select. Across its research questions, the thesis makes three primary contributions. First, it empirically characterizes mobility-induced degradation in unimodal head and eye pointing on an AR HMD, showing that walking substantially increases errors, selection times, and workload relative to stationary use, and establishing the severity and structure of the problem that robust hands-free techniques must address. Second, it introduces StabilizAR, an implicit head–eye technique that stabilizes head-controlled pointing during walking by conditioning control on gaze information through velocity capping and temporal evidence accumulation. In the controlled evaluation, StabilizAR increased selection success from approximately 6% with a stock head cursor to over 90%, while also improving speed and reducing workload. In a realistic deployment within a short-form video browsing application used on a building-scale walking route, StabilizAR preserved high performance and enabled users to maintain natural walking behavior, including fewer stops and lower exertion. Third, it develops an explicit multimodal technique that separates gaze-based target activation from head-gesture confirmation, demonstrating that gaze-activated head gestures can provide multiple directional commands per target with practical speed and comfort when trigger and threshold parameters are selected to match application-level error tolerance and false-activation risk.
Collectively, the thesis shows that implicit head–eye integration can convert fragile head-only pointing into robust hands-free selection during mobility, and that explicit gaze-plus-head integration provides a practical mechanism for expressive hands-free command input in everyday HMD use. These findings yield generalizable design principles for future XR systems that treat natural head–eye coordination as an intent signal and allocate complementary roles to gaze and head movement to balance responsiveness, robustness, and safety. |
- |