
Zero-Shot Semantic Segmentation via Spatial and Multi-Scale Aware Visual Class Embedding

Author(s)
Cha, Sungguk
Advisor
Kim, Kwang In
Issued Date
2021-02
URI
https://scholarworks.unist.ac.kr/handle/201301/82426
http://unist.dcollection.net/common/orgView/200000371667
Abstract
As a cost-effective approach to learning, word-based zero-shot semantic segmentation (w-ZSSS) methods have been proposed that recognize an unseen target class using only a word vector, without any supporting image. The expressiveness of w-ZSSS is limited because its representation of a novel class is constant. To address this limitation, we propose a Spatial and Multi-scale aware Visual Class Embedding Network (SM-VCENet) for zero-shot semantic segmentation. SM-VCENet generates a visual class embedding of an unseen class by transferring visual context knowledge from the query image, resulting in a domain-aware class representation. It further enriches the visual information of the class embedding by incorporating multi-scale attention and spatial attention. SM-VCENet outperforms the state of the art by a noticeable margin on the PASCAL and COCO test sets. We also propose a novel benchmark (PASCAL2COCO) for zero-shot semantic segmentation, which involves domain adaptation and includes more challenging samples.
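The core idea in the abstract, conditioning the class embedding on the query image via spatial and multi-scale attention rather than using a fixed word vector, can be illustrated with a minimal NumPy sketch. This is not the thesis implementation: the function name, the single-head dot-product spatial attention, the two pooling scales, and the additive fusion are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def visual_class_embedding(word_vec, feat_map):
    """Hypothetical sketch of a query-conditioned class embedding.

    word_vec : (C,) word embedding of the unseen class (assumed to share
               the channel dimension C with the image features).
    feat_map : (C, H, W) feature map of the query image.
    Returns a (C,) embedding that mixes word semantics with visual context.
    """
    C, H, W = feat_map.shape
    feats = feat_map.reshape(C, H * W)            # flatten spatial grid: (C, HW)

    # Spatial attention: weight each location by its similarity to the word vector.
    attn = softmax(word_vec @ feats)              # (HW,)
    spatial = feats @ attn                        # attention-pooled visual feature, (C,)

    # Multi-scale context: average-pool the map at a few scales (assumed: 1x and 2x).
    scales = []
    for s in (1, 2):
        hs, ws = H // s, W // s
        pooled = feat_map[:, :hs * s, :ws * s] \
            .reshape(C, hs, s, ws, s).mean(axis=(2, 4))   # (C, hs, ws)
        scales.append(pooled.reshape(C, -1).mean(axis=1))
    multi = np.mean(scales, axis=0)               # (C,)

    # Illustrative additive fusion into a domain-aware class representation.
    return spatial + multi
```

Because the output depends on the query image's features, the same unseen class receives a different, context-adapted representation per image, which is what distinguishes this scheme from a constant word-vector class representation.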
Publisher
Ulsan National Institute of Science and Technology (UNIST)
Degree
Master
Major
Department of Computer Science and Engineering
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.