
Why Do Masked Neural Language Models Still Need Semantic Knowledge in Question Answering?

Author(s)
Kang, Cheongwoong
Advisor
Kim, Kwang-In
Issued Date
2021-02
URI
https://scholarworks.unist.ac.kr/handle/201301/82424
http://unist.dcollection.net/common/orgView/200000371426
Abstract
Pre-trained language models have been widely used to solve various natural language processing tasks. In particular, masked neural language models, which are huge neural networks trained to restore masked tokens, have shown outstanding performance on many tasks, including text classification and question answering. However, it is challenging to identify what knowledge is learned inside them, due to the 'black box' nature of deep neural networks with numerous, intermingled parameters. Recent studies have tried to approximate how much knowledge is learned in masked neural language models, and one reveals that, although the models show superhuman performance, they do not precisely understand semantic knowledge. In this work, we empirically verify that questions requiring semantic knowledge are still challenging for masked neural language models to solve in question answering. We therefore suggest a possible solution that injects semantic knowledge from external repositories into masked neural language models.
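
A minimal sketch of the kind of cloze-style knowledge probe the abstract alludes to, where a masked language model is asked to restore a masked token that encodes factual or semantic knowledge. This is not the thesis's actual code; the model name and prompts are illustrative assumptions.

# Minimal sketch of a cloze-style probe for a masked language model.
# Not the thesis code; the model and prompts are illustrative choices.
from transformers import pipeline

# Load a standard masked LM; any fill-mask model would do here.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A factual prompt the model tends to restore correctly, and a
# semantics-oriented prompt (hypernymy) that is typically harder.
prompts = [
    "Paris is the capital of [MASK].",
    "A sparrow is a type of [MASK].",
]

for prompt in prompts:
    # top_k=3 returns the three highest-scoring candidate tokens.
    predictions = fill_mask(prompt, top_k=3)
    top = ", ".join(f"{p['token_str'].strip()} ({p['score']:.2f})" for p in predictions)
    print(f"{prompt} -> {top}")

Comparing the model's confidence on factual versus semantic prompts of this kind is one way to observe the gap the thesis investigates.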
Publisher
Ulsan National Institute of Science and Technology (UNIST)
Degree
Master
Major
Department of Computer Science and Engineering
