Related Researcher

Yoon, Sung Whan
Machine Intelligence and Information Learning Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Waikoloa, Hawaii, USA -
dc.citation.endPage 2245 -
dc.citation.startPage 2236 -
dc.citation.title IEEE/CVF Winter Conference on Applications of Computer Vision -
dc.contributor.author Kim, Solang -
dc.contributor.author Jeong, Yuho -
dc.contributor.author Park, Joon Sung -
dc.contributor.author Yoon, Sung Whan -
dc.date.accessioned 2024-01-19T12:05:46Z -
dc.date.available 2024-01-19T12:05:46Z -
dc.date.created 2024-01-15 -
dc.date.issued 2024-01-04 -
dc.description.abstract Few-shot class-incremental learning (FSCIL) aims to learn a classification model that continually accepts novel classes from only a few samples. The key to FSCIL is the joint success of two training stages: a base training stage to classify base classes, and an incremental training stage with sequential learning of novel classes. However, recent efforts tend to focus on one of the stages, or to design separate strategies for each, so that little effort has been devoted to a consistent strategy across the consecutive stages. In this paper, we first emphasize the properties of a successful FSCIL algorithm that are worth pursuing consistently during both stages, i.e., intra-class compactness and inter-class separability of the representation, which allow a model to reserve feature space between current classes in preparation for accepting novel classes in the future. To achieve these properties, we propose a mixup-based FSCIL method called MICS, which is theoretically guaranteed to enlarge the margin space between different classes, leading to outstanding performance on the existing benchmarks. Code is available at https://github.com/solangii/MICS. -
dc.identifier.bibliographicCitation IEEE/CVF Winter Conference on Applications of Computer Vision , pp.2236 - 2245 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/68079 -
dc.identifier.url https://openaccess.thecvf.com/content/WACV2024/html/Kim_MICS_Midpoint_Interpolation_To_Learn_Compact_and_Separated_Representations_for_WACV_2024_paper.html -
dc.language English -
dc.publisher IEEE/CVF -
dc.title MICS: Midpoint Interpolation To Learn Compact and Separated Representations for Few-Shot Class-Incremental Learning -
dc.type Conference Paper -
dc.date.conferenceDate 2024-01-04 -
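The abstract describes a mixup-style midpoint interpolation between samples of different classes. As a rough illustration only (not the paper's actual algorithm, which additionally uses margin-aware soft labels and class-incremental training), the core idea can be sketched as follows; the function names, the fixed mixing coefficient, and the soft-label scheme here are all hypothetical:

```python
def midpoint_interpolate(x_a, x_b, lam=0.5):
    # Convexly mix two feature vectors; lam=0.5 yields the midpoint,
    # i.e., a synthetic sample lying in the margin space between classes.
    return [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]

def soft_label(y_a, y_b, num_classes, lam=0.5):
    # Split the label mass between the two source classes in
    # proportion to the mixing coefficient (illustrative only).
    label = [0.0] * num_classes
    label[y_a] += lam
    label[y_b] += 1.0 - lam
    return label

# Example: the midpoint of two 2-D features from classes 0 and 1.
mixed = midpoint_interpolate([0.0, 2.0], [2.0, 4.0])   # [1.0, 3.0]
target = soft_label(0, 1, num_classes=3)               # [0.5, 0.5, 0.0]
```

Training on such midpoint samples with split soft labels is one way mixup-based methods encourage compact, well-separated class representations.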
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.