File Download

There are no files associated with this item.

Detailed Information

Full metadata record

DC Field Value Language
dc.citation.endPage 2453 -
dc.citation.number 6 -
dc.citation.startPage 2444 -
dc.citation.title IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS -
dc.citation.volume 33 -
dc.contributor.author Xie, Qin -
dc.contributor.author Zhang, Peng -
dc.contributor.author Yu, Boseon -
dc.contributor.author Choi, Jaesik -
dc.date.accessioned 2023-12-21T14:09:55Z -
dc.date.available 2023-12-21T14:09:55Z -
dc.date.created 2022-01-12 -
dc.date.issued 2022-06 -
dc.description.abstract Abnormal behaviors in industrial systems can be early warnings of critical events that may cause severe damage to facilities and security, so it is important to detect them accurately and in a timely manner. In practice, however, anomaly detection is hard to solve, mainly because anomalies are rare and their labels are expensive to obtain. Deep generative models parameterized by neural networks have achieved state-of-the-art performance on many unsupervised and semisupervised learning tasks. We present a new deep generative model, the Latent Enhanced regression/classification Deep Generative Model (LEDGM), for anomaly detection on multidimensional data. Instead of using two-stage decoupled models, we adopt an end-to-end learning paradigm; and instead of conditioning the latent on the class label, LEDGM conditions the label prediction on the learned latent, so that the optimization goal favors better anomaly detection rather than the better reconstruction that previously proposed deep generative models have been trained for. Experimental results on several synthetic and real-world small- and large-scale datasets demonstrate that LEDGM achieves improved anomaly detection performance on multidimensional data with very sparse labels. The results also suggest that both labeled anomalies and labeled normal samples are valuable for semisupervised learning, and that, in general, better performance can be achieved with more labeled data. Ablation experiments show that both the original input and the learned latent provide meaningful information for LEDGM to achieve high performance. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.33, no.6, pp.2444 - 2453 -
dc.identifier.doi 10.1109/TNNLS.2021.3095150 -
dc.identifier.issn 2162-237X -
dc.identifier.scopusid 2-s2.0-85111016191 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/56601 -
dc.identifier.url https://ieeexplore.ieee.org/document/9492295 -
dc.identifier.wosid 000732323500001 -
dc.language English -
dc.publisher IEEE Computational Intelligence Society -
dc.title Semisupervised Training of Deep Generative Models for High-Dimensional Anomaly Detection -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic -
dc.relation.journalResearchArea Computer Science; Engineering -
dc.type.docType Article; Early Access -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor variational autoencoder (VAE) -
dc.subject.keywordAuthor Anomaly detection -
dc.subject.keywordAuthor Data models -
dc.subject.keywordAuthor Semisupervised learning -
dc.subject.keywordAuthor Generative adversarial networks -
dc.subject.keywordAuthor deep generative models -
dc.subject.keywordAuthor semisupervised learning -
dc.subject.keywordAuthor Training -
dc.subject.keywordAuthor Generators -
dc.subject.keywordAuthor Unsupervised learning -
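The abstract's central design choice is that LEDGM conditions the label prediction on the learned latent (rather than conditioning the latent on the class label), and that both the raw input and the latent feed the prediction. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: all function names, weights, and the deterministic stand-in for the encoder are assumptions made for illustration.

```python
# Hypothetical sketch (NOT the authors' code) of the idea in the abstract:
# condition the anomaly-label prediction on the learned latent z, so the
# objective favors detection over reconstruction. Weights are random
# placeholders standing in for learned parameters.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    """Deterministic stand-in for a VAE-style encoder: input x -> latent z."""
    return np.tanh(x @ W_enc)

def predict_anomaly(z, x, W_cls, w_x):
    """Label head conditioned on the learned latent; the raw input x is
    also fed in, mirroring the abstract's ablation finding that both the
    original input and the latent are informative."""
    logits = z @ W_cls + x @ w_x
    return 1.0 / (1.0 + np.exp(-logits))  # probability the sample is anomalous

# Toy multidimensional data: 4 input dimensions, 2 latent dimensions.
W_enc = rng.normal(size=(4, 2))
W_cls = rng.normal(size=2)
w_x = rng.normal(size=4)

x = rng.normal(size=4)
z = encode(x, W_enc)
p = predict_anomaly(z, x, W_cls, w_x)
print(float(p))
```

In an end-to-end model as described in the abstract, the encoder and the label head would be trained jointly, with reconstruction and classification losses combined rather than handled by two decoupled stages.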


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.