Related Researcher
Ahn, Hyemin (안혜민)

Self-Supervised Motion Retargeting with Safety Guarantee

Author(s)
Choi, Sungjoon; Song, Min Jae; Ahn, Hyemin; Kim, Joohyung
Issued Date
2021-05-30
DOI
10.1109/ICRA48506.2021.9560860
URI
https://scholarworks.unist.ac.kr/handle/201301/77335
Citation
IEEE International Conference on Robotics and Automation, pp.8097 - 8103
Abstract
In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos. While it requires paired data consisting of human poses and their corresponding robot configurations, it significantly alleviates the need for time-consuming data collection via novel paired-data generating processes. Our self-supervised learning procedure consists of two steps: automatically generating paired data to bootstrap the motion retargeting, and learning a projection-invariant mapping to handle the different expressivity of humans and humanoid robots. Furthermore, our method guarantees that the generated robot pose is collision-free and satisfies position limits by utilizing nonparametric regression in the shared latent space. We demonstrate that our method can generate expressive robotic motions from both the CMU motion capture database and YouTube videos.
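The safety claim in the abstract rests on nonparametric regression in the shared latent space: the output robot pose is a convex combination of stored configurations that are already known to be collision-free and within position limits, so the output inherits those properties whenever the safe set is convex. A minimal sketch of this idea using Nadaraya-Watson kernel regression follows; the function and variable names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def kernel_regression_safe_pose(z_query, z_bank, q_bank, gamma=10.0):
    """Map a latent query to a robot pose as a convex combination of
    known-safe joint configurations (Nadaraya-Watson kernel regression).

    z_query: (d,)   latent code of the human pose
    z_bank:  (N, d) latent codes of stored safe examples
    q_bank:  (N, m) corresponding collision-free robot joint vectors
    """
    # Squared distances to every stored example in the shared latent space
    d2 = np.sum((z_bank - z_query) ** 2, axis=1)
    # Gaussian kernel weights, normalized so they are non-negative and sum to 1
    w = np.exp(-gamma * d2)
    w = w / w.sum()
    # The result lies in the convex hull of q_bank: if all stored poses
    # satisfy box-shaped joint limits (a convex set), so does the output.
    return w @ q_bank
```

Because the weights form a convex combination, position limits expressed as per-joint boxes are satisfied automatically; collision-freeness additionally requires the stored safe poses to span a convex collision-free region, which is what the paper's paired-data generation step is responsible for providing.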
Publisher
IEEE
ISSN
1050-4729


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.