Detailed Information

Sharing Features between Objects and Their Attributes

Author(s)
Hwang, Sung Ju; Sha, Fei; Grauman, Kristen
Issued Date
2011-06-22
DOI
10.1109/CVPR.2011.5995543
URI
https://scholarworks.unist.ac.kr/handle/201301/35737
Fulltext
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5995543&tag=1
Citation
IEEE Conference on Computer Vision and Pattern Recognition, pp. 1761-1768
Abstract
Visual attributes expose human-defined semantics to object recognition models, but existing work largely restricts their influence to mid-level cues during classifier training. Rather than treat attributes as intermediate features, we consider how learning visual properties in concert with object categories can regularize the models for both. Given a low-level visual feature space together with attribute- and object-labeled image data, we learn a shared lower-dimensional representation by optimizing a joint loss function that favors common sparsity patterns across both types of prediction tasks. We adopt a recent kernelized formulation of convex multi-task feature learning, in which one alternates between learning the common features and learning task-specific classifier parameters on top of those features. In this way, our approach discovers any structure among the image descriptors that is relevant to both tasks, and allows the top-down semantics to restrict the hypothesis space of the ultimate object classifiers. We validate the approach on datasets of animals and outdoor scenes, and show significant improvements over traditional multi-class object classifiers and direct attribute prediction models.
Publisher
IEEE Computer Society
ISSN
1063-6919
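
The alternation described in the abstract (learn a shared feature representation, then task-specific predictors on top of it, and repeat) can be illustrated with a minimal linear version of the convex multi-task feature learning it builds on. The sketch below is not the authors' code: it assumes a squared loss and plain NumPy, whereas the paper uses a kernelized formulation with object- and attribute-prediction tasks; the function name, data shapes, and hyperparameters are illustrative assumptions.

    import numpy as np

    def multitask_feature_learning(Xs, ys, gamma=1.0, n_iters=50, eps=1e-6):
        """Alternating minimization for linear multi-task feature learning.

        Xs: list of (n_t, d) design matrices, one per prediction task.
        ys: list of (n_t,) target vectors.
        Returns W of shape (d, T); its rows are driven toward a common
        sparsity pattern by the shared regularizer.
        """
        d = Xs[0].shape[1]
        T = len(Xs)
        D = np.eye(d) / d          # shared feature matrix, trace(D) = 1
        W = np.zeros((d, T))

        for _ in range(n_iters):
            # Step 1: with D fixed, each task reduces to a ridge-style
            # problem in the metric induced by D (squared loss here for
            # simplicity; the paper's kernelized classifiers replace this).
            D_inv = np.linalg.inv(D + eps * np.eye(d))
            for t in range(T):
                X, y = Xs[t], ys[t]
                W[:, t] = np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)

            # Step 2: with W fixed, the optimal shared matrix has the
            # closed form D = (W W^T)^(1/2) / trace((W W^T)^(1/2)).
            U, s, _ = np.linalg.svd(W, full_matrices=False)
            sqrt_wwt = (U * s) @ U.T       # = (W W^T)^(1/2)
            D = sqrt_wwt / (np.trace(sqrt_wwt) + eps)

        return W

In this sketch each column of W would correspond to one task (an object category or an attribute), and the shared matrix D couples the tasks so that image descriptors useful to both kinds of prediction are favored, which is the regularization effect the abstract describes.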
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.