Related Researcher

Ahn, Hyemin (안혜민)

Detailed Information


Text2Action: Generative Adversarial Synthesis from Language to Action

Author(s)
Ahn, Hyemin; Ha, Timothy; Choi, Yunho; Yoo, Hwiyeon; Oh, Songhwai
Issued Date
2018-05-21
DOI
10.1109/ICRA.2018.8460608
URI
https://scholarworks.unist.ac.kr/handle/201301/58871
Fulltext
https://dl.acm.org/doi/abs/10.1109/ICRA.2018.8460608
Citation
IEEE International Conference on Robotics and Automation, pp. 5915-5920
Abstract
In this paper, we propose a generative model which learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network (GAN) based on the sequence-to-sequence (seq2seq) model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network (RNN) and an action decoder RNN. The proposed generative network is trained on 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text (MSR-VTT), a large-scale video dataset. We demonstrate that the network can generate human-like actions that can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.
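The generator described in the abstract pairs a text-encoder RNN with an action-decoder RNN, with a noise vector supplying the diversity a GAN generator needs. The following is a minimal sketch of that idea, not the authors' implementation: all dimensions, weight names, and the vanilla-RNN cells are illustrative assumptions, and the adversarial discriminator and training loop are omitted.

```python
import numpy as np

# Toy sizes (assumed): word embedding, hidden state, pose vector, noise vector.
EMB, HID, POSE, Z = 8, 16, 6, 4

rng = np.random.default_rng(0)

# Encoder parameters (vanilla RNN cell) -- hypothetical names.
W_xh = rng.normal(0, 0.1, (HID, EMB))
W_hh = rng.normal(0, 0.1, (HID, HID))
# Decoder parameters.
W_zh = rng.normal(0, 0.1, (HID, HID + Z))
W_dd = rng.normal(0, 0.1, (HID, HID))
W_hy = rng.normal(0, 0.1, (POSE, HID))

def encode(words):
    """Text-encoder RNN: fold word embeddings into a sentence embedding."""
    h = np.zeros(HID)
    for x in words:
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

def generate(words, z, steps=5):
    """Action-decoder RNN: emit a pose sequence conditioned on the
    sentence embedding and a noise vector z (the GAN's diversity source)."""
    ctx = np.tanh(W_zh @ np.concatenate([encode(words), z]))
    d, poses = ctx, []
    for _ in range(steps):
        d = np.tanh(W_dd @ d + ctx)   # recurrent update, re-injecting context
        poses.append(W_hy @ d)        # linear read-out to a pose vector
    return np.stack(poses)            # shape: (steps, POSE)

sentence = [rng.normal(size=EMB) for _ in range(3)]  # 3 embedded "words"
a1 = generate(sentence, rng.normal(size=Z))
a2 = generate(sentence, rng.normal(size=Z))  # different z, same sentence
```

Because `z` enters the decoder's context, two draws of `z` for the same sentence yield two different pose sequences, mirroring the paper's claim that diverse actions can be generated from a single sentence.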
Publisher
IEEE Computer Society
ISSN
1050-4729


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.