On Divergence Approximations for Unsupervised Training of Deep Denoisers Based on Stein’s Unbiased Risk Estimator
- Soltanayev, Shakarim; Giryes, Raja; Chun, Se Young; Eldar, Yonina C.
- IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3592-3596
- Recently, there have been several works on unsupervised learning for training deep-learning-based denoisers without clean images. Approaches based on Stein's unbiased risk estimator (SURE) have shown promising results for training deep Gaussian denoisers; however, their performance is sensitive to the hyper-parameter used to approximate the divergence term in the SURE expression. In this work, we first study the computational efficiency of Monte-Carlo (MC) divergence approximation relative to recently available exact divergence computation via backpropagation. We then investigate the relationship between the smoothness of the nonlinear activation functions in deep denoisers and robust approximation of the divergence term. Lastly, we propose a new divergence term that contains no hyper-parameters. Both unsupervised training methods yield performance comparable to supervised training with ground-truth images on various denoising datasets; while the former still requires a roughly tuned hyper-parameter, the latter removes the need to choose one.
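The abstract's MC divergence approximation can be sketched as follows. This is a minimal NumPy illustration (not the authors' code) of the standard Monte-Carlo estimator used in SURE-based training: the divergence of a denoiser f at input y is estimated with a random probe vector b and a finite difference controlled by the hyper-parameter eps, whose sensitivity the paper studies. The function names and the linear-map sanity check are illustrative assumptions.

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, rng=None):
    """Monte-Carlo estimate of div f(y) = sum_i d f_i / d y_i.

    Uses a single zero-mean, unit-variance probe b and the finite
    difference b^T (f(y + eps*b) - f(y)) / eps; eps is the hyper-parameter
    whose choice the abstract notes the method is sensitive to.
    """
    rng = np.random.default_rng(rng)
    b = rng.standard_normal(y.shape)
    return float(b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps)

# Sanity check on a linear map f(y) = A @ y, whose exact divergence is trace(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
f = lambda y: A @ y
y = rng.standard_normal(8)

# Average many single-probe estimates; est should approximate np.trace(A).
est = np.mean([mc_divergence(f, y, eps=1e-3, rng=k) for k in range(2000)])
```

In practice f would be a trained deep denoiser rather than a linear map; the estimator's variance (and, for non-smooth activations, its bias) is what makes the choice of eps delicate.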
- Appears in Collections:
- AI_Conference Papers
- Files in This Item:
- There are no files associated with this item.