Full metadata record

DC Field Value Language
dc.contributor.advisor Kim, Taehwan -
dc.contributor.author Kim, Hyeonyu -
dc.date.accessioned 2024-04-11T15:20:01Z -
dc.date.available 2024-04-11T15:20:01Z -
dc.date.issued 2024-02 -
dc.description.abstract Automatic songwriting aims to generate lyrics and/or melodies to aid human music creation. In this study, we address long-range lyric-and-melody co-generation, which has received less attention than lyric-to-melody and melody-to-lyric generation. We propose a novel unified model designed to effectively integrate multi-modal features and generate lyrics and melody simultaneously. To accommodate much longer sequences, we employ four transformer decoders to separately model lyrics and three note values. Both qualitative and quantitative results show that our method can create coherent lyric-melody pairs over a much longer context. -
dc.description.degree Master -
dc.description Graduate School of Artificial Intelligence -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/82154 -
dc.identifier.uri http://unist.dcollection.net/common/orgView/200000743489 -
dc.language ENG -
dc.publisher Ulsan National Institute of Science and Technology -
dc.rights.embargoReleaseDate 9999-12-31 -
dc.rights.embargoReleaseTerms 9999-12-31 -
dc.title Long-range coherent lyrics and melody Co-generation -
dc.type Thesis -
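The abstract above describes a four-decoder layout: one transformer decoder for lyric tokens and one for each of three note values. The sketch below is a minimal, hypothetical illustration of that stream separation only; the decoder internals are stubbed out, and the choice of pitch, duration, and rest as the three note values is an assumption, not stated in the record.

```python
import random

class DecoderStub:
    """Stand-in for one transformer decoder over a single token stream."""
    def __init__(self, vocab_size, seed):
        self.vocab_size = vocab_size
        self.rng = random.Random(seed)

    def next_token(self, context):
        # A real decoder would attend over `context` (its own history plus
        # features from the other streams); the stub samples uniformly.
        return self.rng.randrange(self.vocab_size)

def co_generate(length):
    # Four separate decoders: lyrics plus three note-value streams
    # (pitch/duration/rest is an assumed decomposition).
    streams = {
        "lyric": DecoderStub(vocab_size=100, seed=0),
        "pitch": DecoderStub(vocab_size=128, seed=1),
        "duration": DecoderStub(vocab_size=32, seed=2),
        "rest": DecoderStub(vocab_size=32, seed=3),
    }
    out = {name: [] for name in streams}
    for _ in range(length):
        # Shared context: everything generated so far in every stream,
        # which is what keeps the co-generated streams mutually coherent.
        context = {name: list(tokens) for name, tokens in out.items()}
        for name, decoder in streams.items():
            out[name].append(decoder.next_token(context))
    return out

song = co_generate(length=16)
```

Generating all four streams step-by-step against a shared context, rather than finishing lyrics before melody (or vice versa), mirrors the simultaneous generation the abstract claims.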

