

Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Phoenix Convention Center -
dc.citation.title AAAI Conference on Artificial Intelligence -
dc.contributor.author Kuznetsova, Alina -
dc.contributor.author Hwang, Sung Ju -
dc.contributor.author Rosenhahn, Bodo -
dc.contributor.author Sigal, Leonid -
dc.date.accessioned 2023-12-19T21:09:12Z -
dc.date.available 2023-12-19T21:09:12Z -
dc.date.created 2016-02-21 -
dc.date.issued 2016-02-16 -
dc.description.abstract Viewpoint estimation, especially in the case of multiple object classes, remains an important and challenging problem. First, objects under different views undergo extreme appearance variations, often making within-class variance larger than between-class variance. Second, obtaining precise ground truth for real-world images, necessary for training supervised viewpoint estimation models, is extremely difficult and time consuming. As a result, annotated data is often available only for a limited number of classes. Hence it is desirable to share viewpoint information across classes. Additional complexity arises from unaligned pose labels between classes, i.e. a side view of a car might look more like a frontal view of a toaster than its side view. To address these problems, we propose a metric learning approach for joint class prediction and pose estimation. Our approach allows us to circumvent the problem of viewpoint alignment across multiple classes, and does not require dense viewpoint labels. Moreover, we show that the learned metric generalizes to new classes for which pose labels are not available, and therefore makes it possible to use only partially annotated training sets, relying on the intrinsic similarities in the viewpoint manifolds. We evaluate our approach on two challenging multi-class datasets, 3DObjects and PASCAL3D+. -
dc.identifier.bibliographicCitation AAAI Conference on Artificial Intelligence -
dc.identifier.scopusid 2-s2.0-84990043919 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/35434 -
dc.identifier.url http://dl.acm.org/citation.cfm?id=3016399 -
dc.language English -
dc.publisher AAAI -
dc.title Exploiting View-Specific Appearance Similarities Across Classes for Zero-shot Pose Prediction: A Metric Learning Approach -
dc.type Conference Paper -
dc.date.conferenceDate 2016-02-12 -
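The abstract describes a metric learning formulation for joint class and pose prediction. As a generic illustration only (not the authors' actual objective, which is detailed in the paper), metric learning of this kind is often trained with a triplet margin loss that pulls embeddings of samples with similar viewpoints together and pushes dissimilar ones apart; the embeddings and margin below are hypothetical toy values:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: require the positive (similar viewpoint) to be
    closer to the anchor than the negative (dissimilar viewpoint) by at
    least `margin`; otherwise incur a linear penalty."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive already lies well inside the margin,
# so the loss is zero; swapping positive and negative yields a penalty.
a = np.array([0.0, 0.0])
p = np.array([0.5, 0.0])   # similar viewpoint
n = np.array([3.0, 0.0])   # dissimilar viewpoint
print(triplet_loss(a, p, n))  # 0.0  (0.5 - 3.0 + 1.0 clamps to zero)
print(triplet_loss(a, n, p))  # 3.5  (3.0 - 0.5 + 1.0)
```

Because the loss only compares relative distances, a metric trained this way can rank viewpoints for classes whose absolute pose labels were never aligned, which is the intuition behind the zero-shot setting described above.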


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.