Related Researcher

Gong, Taesik (공태식), Ubiquitous AI Lab

Detailed Information


AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation

Author(s)
Lee, T.; Chottananurak, S.; Gong, Taesik; Lee, S.-J.
Issued Date
2024-06-16
DOI
10.1109/CVPR52733.2024.02706
URI
https://scholarworks.unist.ac.kr/handle/201301/84395
Citation
IEEE Conference on Computer Vision and Pattern Recognition, pp. 28643-28652
Abstract
Test-time adaptation (TTA) has emerged as a viable solution to adapt pretrained models to domain shifts using unlabeled test data. However, TTA faces challenges of adaptation failures due to its reliance on blind adaptation to unknown test samples in dynamic scenarios. Traditional methods for out-of-distribution performance estimation are limited by unrealistic assumptions in the TTA context, such as requiring labeled data or retraining models. To address this issue, we propose AETTA, a label-free accuracy estimation algorithm for TTA. We propose the prediction disagreement as the accuracy estimate, calculated by comparing the target model prediction with dropout inferences. We then improve the prediction disagreement to extend the applicability of AETTA under adaptation failures. Our extensive evaluation with four baselines and six TTA methods demonstrates that AETTA shows an average of 19.8%p more accurate estimation compared with the baselines. We further demonstrate the effectiveness of accuracy estimation with a model recovery case study, showcasing the practicality of our model recovery based on accuracy estimation. The source code is available at https://github.com/taeckyung/AETTA. © 2024 IEEE.
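The core estimator described in the abstract, comparing the adapted model's predictions against multiple dropout-enabled forward passes, can be sketched roughly as follows. This is a simplified illustration on synthetic predictions, not the paper's implementation: the function name and numbers are made up here, and AETTA further refines plain disagreement so the estimate stays reliable under adaptation failures.

```python
import numpy as np

def estimate_accuracy(model_preds, dropout_preds):
    """Label-free accuracy estimate via prediction disagreement (sketch).

    model_preds: shape (N,), predicted class per test sample from the
        adapted target model (deterministic forward pass).
    dropout_preds: shape (T, N), predictions from T stochastic
        dropout-enabled forward passes over the same N samples.
    Returns 1 minus the mean rate at which dropout predictions
    disagree with the target model's prediction.
    """
    disagreement = (dropout_preds != model_preds[None, :]).mean()
    return 1.0 - disagreement

# Toy example with synthetic predictions (10 classes, 100 samples):
rng = np.random.default_rng(0)
model_preds = rng.integers(0, 10, size=100)

# Simulate 8 dropout passes that mostly agree with the deterministic
# prediction, then randomly perturb about 20% of the votes.
dropout_preds = np.tile(model_preds, (8, 1))
flip = rng.random(dropout_preds.shape) < 0.2
dropout_preds[flip] = rng.integers(0, 10, size=flip.sum())

est = estimate_accuracy(model_preds, dropout_preds)
```

With roughly 20% of the dropout votes flipped to a random class, the estimate lands around 0.8, matching the intuition that higher dropout disagreement signals lower (unobservable) test accuracy.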
Publisher
IEEE Computer Society
ISSN
1063-6919

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.