Related Researcher

Ahn, Hyemin (안혜민)

Detailed Information


Generative Autoregressive Networks for 3D Dancing Move Synthesis From Music

Author(s)
Ahn, Hyemin; Kim, Jaehun; Kim, Kihyun; Oh, Songhwai
Issued Date
2020-04
DOI
10.1109/LRA.2020.2977333
URI
https://scholarworks.unist.ac.kr/handle/201301/58678
Citation
IEEE ROBOTICS AND AUTOMATION LETTERS, v.5, no.2, pp.3501 - 3508
Abstract
This letter proposes a framework that generates a sequence of three-dimensional human dance poses for a given piece of music. The proposed framework consists of three components: a music feature encoder, a pose generator, and a music genre classifier. We focus on integrating these components to generate realistic 3D human dancing moves from music, which can be applied to artificial agents and humanoid robots. The trained dance pose generator, a generative autoregressive model, can synthesize a dance sequence longer than 1,000 pose frames. Experimental results on dance sequences generated from various songs show that the proposed method produces human-like dancing moves for a given piece of music. In addition, a generated 3D dance sequence is applied to a humanoid robot, showing that the proposed framework can make a robot dance just by listening to music.
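The abstract describes an autoregressive pose generator conditioned on music features: each pose frame is predicted from the previous pose and the current music feature, which is how the model can roll out sequences longer than 1,000 frames. The following is a minimal sketch of that generation loop only; the pose dimension (51), music feature dimension (28), and the linear-plus-tanh update are illustrative placeholders standing in for the trained network, not the paper's actual architecture.

```python
import numpy as np

def generate_dance(music_features, pose_dim=51, rng=None):
    """Autoregressive rollout sketch: pose[t] is computed from pose[t-1]
    (autoregression) and music_features[t] (conditioning). The random
    weights and tanh update are placeholders for the trained generator."""
    rng = np.random.default_rng(0) if rng is None else rng
    num_frames, feat_dim = music_features.shape
    # Hypothetical "learned" weights, randomly initialized for illustration.
    W_pose = rng.standard_normal((pose_dim, pose_dim)) * 0.1
    W_music = rng.standard_normal((feat_dim, pose_dim)) * 0.1
    poses = np.zeros((num_frames, pose_dim))
    prev = np.zeros(pose_dim)  # neutral starting pose
    for t in range(num_frames):
        # One autoregressive step; tanh keeps joint values bounded.
        prev = np.tanh(prev @ W_pose + music_features[t] @ W_music)
        poses[t] = prev
    return poses

# Roll out a sequence longer than 1,000 frames, as the abstract highlights.
music = np.random.default_rng(1).standard_normal((1200, 28))
seq = generate_dance(music)
print(seq.shape)  # (1200, 51)
```

Because each step feeds its own output back in, the loop can run for arbitrarily many frames, which is the property the abstract points to with the 1,000-frame sequences.
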
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
ISSN
2377-3766
Keyword (Author)
Three-dimensional displays; Generators; Task analysis; Multiple signal classification; Skeleton; Training; Music; Gesture, posture and facial expressions; novel deep learning methods; entertainment robotics

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.