Detailed Information

Interpreting Internal Activation Patterns in Deep Temporal Neural Networks by Finding Prototypes

Author(s)
Cho, Sohee; Chang, Wonjoon; Lee, Ginkyeng; Choi, Jaesik
Issued Date
2021-08-14
DOI
10.1145/3447548.3467346
URI
https://scholarworks.unist.ac.kr/handle/201301/77097
Fulltext
https://dl.acm.org/doi/10.1145/3447548.3467346
Citation
27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 158-166
Abstract
Deep neural networks have demonstrated competitive performance in classification tasks for sequential data. However, it remains difficult to understand which temporal patterns the internal channels of deep neural networks capture for decision-making on sequential data. To address this issue, we propose a new framework for visualizing the temporal representations learned in deep neural networks without hand-crafted segmentation labels. Given input data, our framework extracts highly activated temporal regions that contribute to activating internal nodes and characterizes such regions by a prototype selection method based on Maximum Mean Discrepancy (MMD). Representative temporal patterns, referred to here as Prototypes of Temporally Activated Patterns (PTAP), provide core examples of subsequences in the sequential data for interpretability. We also analyze the role of each channel with Value-LRP plots, using representative prototypes and the distribution of the input attribution. Input attribution plots provide visual information for recognizing the shapes a channel focuses on for decision-making.
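
The abstract mentions prototype selection based on Maximum Mean Discrepancy. The following is a minimal, hypothetical Python sketch of greedy MMD-based prototype selection over extracted temporal segments, in the spirit of MMD-critic; it is not the authors' released code, and the function names, RBF kernel choice, and greedy objective are illustrative assumptions.

    # Minimal sketch (assumption, not the authors' code): greedy MMD-based
    # prototype selection over extracted temporal segments.
    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Pairwise RBF kernel between row vectors of X and Y.
        sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-gamma * sq)

    def select_prototypes(segments, n_prototypes, gamma=1.0):
        # Greedily pick segments whose empirical distribution is closest (in MMD)
        # to the distribution of all segments; smaller MMD = more representative.
        K = rbf_kernel(segments, segments, gamma)
        colmean = K.mean(axis=1)            # E_x k(z, x) for each candidate z
        selected, candidates = [], list(range(len(segments)))
        for _ in range(n_prototypes):
            best_score, best_j = -np.inf, None
            for j in candidates:
                S = selected + [j]
                m = len(S)
                # Maximizing this score minimizes MMD^2 between the prototypes
                # and the full set (the E_{x,x'} k(x,x') term is constant in S).
                score = 2.0 / m * colmean[S].sum() - K[np.ix_(S, S)].sum() / (m * m)
                if score > best_score:
                    best_score, best_j = score, j
            selected.append(best_j)
            candidates.remove(best_j)
        return selected

    # Hypothetical usage: 'segments' would be highly activated temporal regions
    # (e.g., fixed-length windows) gathered for one internal channel.
    # protos = select_prototypes(segments, n_prototypes=5, gamma=0.5)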
Publisher
Association for Computing Machinery (ACM)
