Related Researcher
Kim, Hyounghun (김형훈)

Detailed Information

FIXMYPOSE: Pose Correctional Captioning and Retrieval

Author(s)
Kim, Hyounghun; Zala, Abhay; Burri, Graham; Bansal, Mohit
Issued Date
2021-02-02
URI
https://scholarworks.unist.ac.kr/handle/201301/77635
Citation
AAAI Conference on Artificial Intelligence, pp.13161 - 13170
Abstract
Interest in physical therapy and individual exercises such as yoga/dance has increased alongside the well-being trend, and people globally enjoy such exercises at home/office via video streaming platforms. However, such exercises are hard to follow without expert guidance. Even if experts can help, it is almost impossible to give personalized feedback to every trainee remotely. Thus, automated pose correction systems are required more than ever, and we introduce a new captioning dataset named FIXMYPOSE to address this need. We collect natural language descriptions of correcting a "current" pose to look like a "target" pose. To support a multilingual setup, we collect descriptions in both English and Hindi. The collected descriptions have interesting linguistic properties such as egocentric relations to the environment objects, analogous references, etc., requiring an understanding of spatial relations and commonsense knowledge about postures. Further, to avoid ML biases, we maintain a balance across characters with diverse demographics, who perform a variety of movements in several interior environments (e.g., homes, offices). From our FIXMYPOSE dataset, we introduce two tasks: the pose-correctional-captioning task and its reverse, the target-pose-retrieval task. In the correctional-captioning task, models must generate the description of how to move from the current to the target pose image, whereas in the retrieval task, models should select the correct target pose given the initial pose and the correctional description. We present strong cross-attention baseline models (uni/multimodal, RL, multilingual) and also show that our baselines are competitive with other models when evaluated on other image-difference datasets. We also propose new task-specific metrics (object-match, body-part-match, direction-match) and conduct human evaluation for more reliable assessment, and we demonstrate a large human-model performance gap, suggesting room for promising future work. Finally, to verify the sim-to-real transfer of our FIXMYPOSE dataset, we collect a set of real images and show promising performance on these images. Data and code are available: https://fixmypose-unc.github.io.
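To make the task-specific metrics named in the abstract (object-match, body-part-match, direction-match) more concrete, here is a minimal sketch of what a direction-match-style score could look like: keyword overlap between the direction words of a generated caption and those of a reference caption. The direction vocabulary and the scoring formula are illustrative assumptions, not the paper's actual metric definition; see the linked project page for the authors' implementation.

```python
# Minimal sketch of a "direction-match"-style metric for pose-correctional
# captions. NOTE: the direction vocabulary and the scoring formula are
# illustrative assumptions, not the definition from the FIXMYPOSE paper.

DIRECTION_WORDS = {
    "left", "right", "up", "down", "forward", "backward",
    "higher", "lower", "clockwise", "counterclockwise",
}

def direction_match(generated: str, reference: str) -> float:
    """Fraction of the reference caption's direction words that also
    appear in the generated caption (1.0 if the reference has none)."""
    gen = {w.strip(".,") for w in generated.lower().split()} & DIRECTION_WORDS
    ref = {w.strip(".,") for w in reference.lower().split()} & DIRECTION_WORDS
    return len(gen & ref) / len(ref) if ref else 1.0

# Example: the generated caption recovers one of the two direction words.
print(direction_match(
    "Raise your left arm higher",
    "Move your left arm up",
))  # -> 0.5
```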
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
ISSN
2159-5399

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.