
Intention estimation of pick-and-place actions using gaze and motion features for human-robot interaction

Author(s)
Ju, Dawon
Advisor
Bae, Joonbum
Issued Date
2024-02
URI
https://scholarworks.unist.ac.kr/handle/201301/81996
http://unist.dcollection.net/common/orgView/200000744095
Abstract
Human-robot interaction (HRI) is considered a crucial technology for increasing productivity in modern society. To enable effective interaction between humans and robots, human activity recognition (HAR) aims to better understand human behavior through the integration of detection and inference. For robots to comprehend and respond to human movements effectively, recognizing pick-and-place actions is of significant importance: these two actions consistently occur at the beginning and end of a person's interaction with a target object. In "pick," the emphasis is on stably grasping the object, while in "place," more variables must be considered to securely position the object at the desired location. Distinguishing between pick and place actions is therefore essential for effective human-robot interaction. However, existing research still faces limitations in differentiating pick-and-place actions based on the features and methods used. Consequently, this dissertation uses features based on gaze and arm motion to better understand and estimate the intentions behind pick-and-place actions, aiming for a more natural and accurate interpretation of human behavior. A virtual environment was created to collect pick-and-place movement data, and a long short-term memory (LSTM) learning-based algorithm was employed for training. This approach achieved a classification accuracy of 94.9%, and the real-time prediction accuracy for pick-and-place actions averaged 85% by the middle of the trajectory (approximately 410 ms). This research contributes to overcoming the limitations of previous work and to a better understanding of how to distinguish pick-and-place actions for effective human-robot interaction. These results are expected to provide crucial information for achieving natural and effective interaction between humans and robots.
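The abstract describes an LSTM classifier trained on sequences of gaze and arm-motion features. As a rough illustration only, the sketch below shows what such a model might look like in PyTorch; the feature dimensionality, sampling rate, window length, and class labels are assumptions made for this example, not details taken from the dissertation.

```python
# Minimal sketch (not the author's code) of an LSTM classifier for
# pick-vs-place intention estimation from gaze and arm-motion features.
# Feature split, hyperparameters, and labels below are assumptions.
import torch
import torch.nn as nn

class PickPlaceLSTM(nn.Module):
    def __init__(self, n_features=9, hidden_size=64, n_classes=2):
        super().__init__()
        # n_features: e.g., 3-D gaze direction + 6-D arm motion (assumed split)
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) windows of gaze/motion samples
        out, _ = self.lstm(x)
        # Classify from the last time step so the model can be queried
        # mid-trajectory for real-time prediction
        return self.head(out[:, -1, :])

# Usage: a batch of 8 windows, each ~410 ms at an assumed 100 Hz (41 samples)
model = PickPlaceLSTM()
window = torch.randn(8, 41, 9)
logits = model(window)       # (8, 2) scores for the two actions
pred = logits.argmax(dim=1)  # 0 = pick, 1 = place (assumed label encoding)
```

Classifying from the final hidden state of a growing window is one common way to obtain mid-trajectory predictions like the ~410 ms result quoted above; the dissertation may use a different windowing or decoding scheme.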
Publisher
Ulsan National Institute of Science and Technology

