File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)

Full metadata record

DC Field Value Language
dc.citation.conferencePlace CA -
dc.citation.conferencePlace Montreal -
dc.citation.endPage 918 -
dc.citation.startPage 909 -
dc.citation.title 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 -
dc.contributor.author Heo, J -
dc.contributor.author Lee, HB -
dc.contributor.author Lee, J -
dc.contributor.author Kim, KJ -
dc.contributor.author Yang, E -
dc.contributor.author Hwang, SJ -
dc.date.accessioned 2024-02-01T00:41:40Z -
dc.date.available 2024-02-01T00:41:40Z -
dc.date.created 2019-05-16 -
dc.date.issued 2018-12-02 -
dc.description.abstract The attention mechanism is effective both at focusing deep learning models on relevant features and at interpreting them. However, attention may be unreliable, since the networks that generate it are often trained in a weakly supervised manner. To overcome this limitation, we introduce the notion of input-dependent uncertainty into the attention mechanism, so that it generates attention for each feature with varying degrees of noise based on the given input, learning larger variance on instances it is uncertain about. We learn this Uncertainty-aware Attention (UA) mechanism using variational inference and validate it on various risk prediction tasks from electronic health records, on which our model significantly outperforms existing attention models. Analysis of the learned attentions shows that our model generates attentions that comply with clinicians' interpretation and provide richer interpretation via the learned variance. Further evaluation of both the accuracy of the uncertainty calibration and the prediction performance with the “I don't know” decision shows that UA also yields networks with high reliability. -
dc.identifier.bibliographicCitation 32nd Conference on Neural Information Processing Systems, NeurIPS 2018, pp.909 - 918 -
dc.identifier.issn 1049-5258 -
dc.identifier.scopusid 2-s2.0-85064822164 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/80323 -
dc.language English -
dc.publisher Neural information processing systems foundation -
dc.title Uncertainty-aware attention for reliable interpretation and prediction -
dc.type Conference Paper -
dc.date.conferenceDate 2018-12-02 -
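
Note on the abstract above: it describes attention weights drawn from an input-dependent distribution and trained with variational inference. The following is a minimal, illustrative sketch of that idea, assuming a PyTorch implementation; the class name, the Gaussian reparameterization of the attention scores, and the KL regularizer are assumptions made for illustration and are not taken from the authors' released code.

# Illustrative sketch (not the authors' code): input-dependent stochastic attention.
# All names and design choices below are assumptions made for illustration.
import torch
import torch.nn as nn

class UncertaintyAwareAttentionSketch(nn.Module):
    """Attention whose per-feature weights carry input-dependent Gaussian noise.

    The mean and log-variance of each attention score are predicted from the
    input, and a sample is drawn with the reparameterization trick, so the
    model can learn larger variance on inputs it is uncertain about.
    """
    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.score_mean = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.score_logvar = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features, feature_dim)
        mu = self.score_mean(x).squeeze(-1)        # (batch, num_features)
        logvar = self.score_logvar(x).squeeze(-1)  # input-dependent variance
        if self.training:
            eps = torch.randn_like(mu)
            score = mu + eps * torch.exp(0.5 * logvar)  # reparameterized sample
        else:
            score = mu                                  # use the mean at test time
        alpha = torch.sigmoid(score).unsqueeze(-1)      # per-feature attention weight
        context = (alpha * x).sum(dim=1)                # attended representation
        # KL divergence against a standard-normal prior, used as a variational regularizer.
        kl = 0.5 * (torch.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(dim=1).mean()
        return context, alpha, kl

In training, the returned kl term would be added, suitably weighted, to the prediction loss; that variational objective is what allows the learned variance to grow on instances the model is uncertain about, matching the behavior the abstract describes.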

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.