
On Divergence Approximations for Unsupervised Training of Deep Denoisers Based on Stein’s Unbiased Risk Estimator

Author(s)
Soltanayev, Shakarim; Giryes, Raja; Chun, Se Young; Eldar, Yonina C.
Issued Date
2020-05-04
DOI
10.1109/icassp40776.2020.9054593
URI
https://scholarworks.unist.ac.kr/handle/201301/78544
Fulltext
https://ieeexplore.ieee.org/document/9054593
Citation
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3592-3596
Abstract
Recently, there have been several works on unsupervised learning for training deep-learning-based denoisers without clean images. Approaches based on Stein's unbiased risk estimator (SURE) have shown promising results for training Gaussian deep denoisers. However, their performance is sensitive to hyper-parameter selection in approximating the divergence term in the SURE expression. In this work, we briefly study the computational efficiency of the Monte-Carlo (MC) divergence approximation relative to the recently available exact divergence computation using backpropagation. We then investigate the relationship between the smoothness of the nonlinear activation functions in deep denoisers and the robustness of divergence-term approximations. Lastly, we propose a new divergence term that contains no hyper-parameters. Both unsupervised training methods yield performance comparable to supervised training with ground truth for denoising on various datasets; while the former still requires roughly tuned hyper-parameter selection, the latter removes the need to choose one.
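The divergence term the abstract refers to is the trace of the denoiser's Jacobian, which the standard MC approach estimates with a single random perturbation. The sketch below illustrates that estimator in NumPy under stated assumptions: the function names and the linear test denoiser are hypothetical, and `eps` plays the role of the sensitive hyper-parameter the paper discusses; this is not the paper's implementation, only the textbook MC approximation it builds on.

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, n_samples=1, rng=None):
    """Monte-Carlo estimate of div_y f(y) = sum_i d f_i / d y_i.

    Perturbs the input along random Gaussian directions n and uses the
    finite-difference identity  div f(y) ~ E[ n^T (f(y + eps*n) - f(y)) / eps ].
    `eps` is the step-size hyper-parameter whose sensitivity the paper studies.
    """
    rng = np.random.default_rng() if rng is None else rng
    est = 0.0
    for _ in range(n_samples):
        n = rng.standard_normal(y.shape)
        est += n @ (f(y + eps * n) - f(y)) / eps
    return est / n_samples

# Sanity check on a hypothetical linear "denoiser" f(y) = A y,
# whose exact divergence is trace(A).
rng = np.random.default_rng(0)
d = 50
A = np.diag(np.linspace(0.1, 1.0, d))   # diagonal linear map
f = lambda y: A @ y
y = rng.standard_normal(d)
est = mc_divergence(f, y, eps=1e-3, n_samples=2000, rng=rng)
# est should be close to np.trace(A) (= 27.5 here)
```

For a linear map the finite difference is exact for any `eps`, so the estimator's error is purely Monte-Carlo variance; for a nonlinear deep denoiser, `eps` must additionally balance linearization bias against floating-point cancellation, which is the hyper-parameter sensitivity the paper targets.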
Publisher
IEEE
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.