File Download

There are no files associated with this item.

Related Researcher

Kim, Hyounghun (김형훈)

Detailed Information

Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Online -
dc.citation.endPage 4622 -
dc.citation.startPage 4609 -
dc.citation.title International Joint Conference on Natural Language Processing -
dc.contributor.author Tang, Zineng -
dc.contributor.author Zhang, Shiyue -
dc.contributor.author Kim, Hyounghun -
dc.contributor.author Bansal, Mohit -
dc.date.accessioned 2024-01-31T21:37:54Z -
dc.date.available 2024-01-31T21:37:54Z -
dc.date.created 2022-10-21 -
dc.date.issued 2021-08-01 -
dc.description.abstract Recent years have witnessed various types of generative models for natural language generation (NLG), especially RNNs or transformer based sequence-to-sequence models, as well as variational autoencoder (VAE) and generative adversarial network (GAN) based models. However, flow-based generative models, which achieve strong performance in image generation due to their invertibility and exact density estimation properties, have been less explored for NLG. In this paper, we propose a flow-based language generation model by adapting previous flow generative models to language generation via continuous input embeddings, adapted affine coupling structures, and a novel architecture for autoregressive text generation. We also apply our framework to Sequence-to-Sequence generation, including text- and video-based Question Generation (QG) and Neural Machine Translation (NMT), and data augmentation for Question Answering (QA). We use our language flow model to provide extra input features for QG and NMT, which achieves improvements over the strong QG baselines on SQuAD and TVQA and NMT baseline on WMT16. We also augment QA data with new context by injecting noise to the latent features of the language flow and show this augmentation leads to a large performance improvement from strong baselines on SQuAD and TVQA. -
dc.identifier.bibliographicCitation International Joint Conference on Natural Language Processing, pp.4609 - 4622 -
dc.identifier.scopusid 2-s2.0-85118936237 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/77122 -
dc.identifier.wosid 000698679200155 -
dc.language English -
dc.publisher ASSOC COMPUTATIONAL LINGUISTICS-ACL -
dc.title Continuous Language Generative Flow -
dc.type Conference Paper -
dc.date.conferenceDate 2021-08-01 -
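
A minimal, hypothetical PyTorch sketch (not the authors' released code) of the two ideas the abstract describes: an affine coupling layer over continuous token embeddings, and data augmentation by injecting noise into the flow's latent features and inverting back. The class name, dimensions, and noise scale below are illustrative assumptions.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible affine coupling layer over continuous embeddings (toy version)."""
    def __init__(self, dim, hidden_dim=256):
        super().__init__()
        # The first half of the features conditions a scale/shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, dim),  # outputs log-scale and shift, dim//2 each
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        z2 = x2 * torch.exp(log_s) + t       # invertible affine transform
        log_det = log_s.sum(dim=-1)          # exact log-determinant term
        return torch.cat([x1, z2], dim=-1), log_det

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        x2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, x2], dim=-1)

# Augmentation sketch: map embeddings to latent features, add small Gaussian
# noise, and invert back to obtain perturbed embeddings, mirroring the abstract's
# description of augmenting QA data via noise in the flow's latent space.
layer = AffineCoupling(dim=64)
emb = torch.randn(8, 64)   # 8 tokens, 64-dim embeddings (toy input)
z, _ = layer(emb)
emb_aug = layer.inverse(z + 0.1 * torch.randn_like(z))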

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.