File Download

There are no files associated with this item.

Related Researcher

백승렬

Baek, Seungryul
UNIST VISION AND LEARNING LAB.

Full metadata record

dc.citation.conferencePlace: UK
dc.citation.conferencePlace: London
dc.citation.title: British Machine Vision Conference
dc.contributor.author: Baek, Seungryul
dc.contributor.author: Shi, Zhiyuan
dc.contributor.author: Kawade, Masato
dc.contributor.author: Kim, Tae-Kyun
dc.date.accessioned: 2023-12-19T18:11:52Z
dc.date.available: 2023-12-19T18:11:52Z
dc.date.created: 2020-04-21
dc.date.issued: 2017-09-04
dc.description.abstract: In this paper, we tackle the problem of 24-hour monitoring of patient actions in a ward, such as "stretching an arm out of the bed" or "falling out of the bed", where temporal movements range from subtle to significant. In these scenarios, the relations between scene layouts and body kinematics (skeletons) become important cues for recognizing actions; however, they are hard to secure at the testing stage. To address this problem, we propose a kinematic-layout-aware random forest that takes the kinematic-layout (i.e., layout and skeletons) into account to maximize the discriminative power of depth image appearance. We integrate the kinematic-layout into the split criteria of random forests to guide the learning process by 1) determining the switch to either the depth appearance or the kinematic-layout information, and 2) implicitly closing the gap between the two distributions obtained from the kinematic-layout and the appearance when the kinematic-layout appears useful. The kinematic-layout information is not required for the test data, and is thus called a "privileged information prior". The proposed method has also been validated in cross-view settings through the use of view-invariant features and by enforcing consistency among synthetic-view data. Experimental evaluations on our new dataset PATIENT, as well as CAD-60 and UWA3D (multiview), demonstrate that our method outperforms various state-of-the-art methods.
dc.identifier.bibliographicCitation: British Machine Vision Conference
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/32672
dc.identifier.url: https://arxiv.org/abs/1607.06972
dc.publisher: British Machine Vision Association
dc.title: Kinematic-layout-aware random forests for depth-based action recognition
dc.type: Conference Paper
dc.date.conferenceDate: 2017-09-04
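
The abstract above is the only technical description in this record, so the following is a minimal, self-contained Python sketch of the split-selection idea it outlines: at each tree node, candidate splits are scored on both the depth-appearance features and the privileged kinematic-layout features, and the more discriminative channel guides the split. This is an illustration under stated assumptions, not the authors' implementation; the function names (choose_split, best_split) and all of the toy data are hypothetical, and the paper's distribution-alignment step is only noted in comments.

```python
import numpy as np

# Hypothetical sketch: one split-node decision that switches between
# depth-appearance features and privileged kinematic-layout features,
# in the spirit of the split criterion described in the abstract.

def entropy(labels):
    """Shannon entropy of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, left_mask):
    """Entropy reduction from splitting `labels` by a boolean mask."""
    n, n_left = len(labels), int(left_mask.sum())
    if n_left == 0 or n_left == n:
        return 0.0  # degenerate split: no gain
    return (entropy(labels)
            - (n_left / n) * entropy(labels[left_mask])
            - ((n - n_left) / n) * entropy(labels[~left_mask]))

def best_split(features, labels, n_candidates=16, rng=None):
    """Best of `n_candidates` random axis-aligned threshold splits.
    Returns (gain, feature_index, threshold)."""
    rng = rng if rng is not None else np.random.default_rng()
    best = (0.0, None, None)
    for _ in range(n_candidates):
        j = int(rng.integers(features.shape[1]))
        t = rng.choice(features[:, j])
        gain = information_gain(labels, features[:, j] < t)
        if gain > best[0]:
            best = (gain, j, t)
    return best

def choose_split(appearance, layout, labels, rng=None):
    """Score both channels and keep the more discriminative split.
    The layout channel is privileged: it exists only at training time,
    so a layout-driven split would in practice be transferred back onto
    appearance features (the paper's alignment step, omitted here)."""
    gain_app, j_app, t_app = best_split(appearance, labels, rng=rng)
    gain_lay, j_lay, t_lay = best_split(layout, labels, rng=rng)
    if gain_lay > gain_app:
        return ("layout", j_lay, t_lay, gain_lay)
    return ("appearance", j_app, t_app, gain_app)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=200)      # 3 toy action classes
    appearance = rng.normal(size=(200, 32))    # stand-in depth features
    layout = rng.normal(size=(200, 8))         # stand-in skeleton/layout
    layout[:, 0] += labels                     # make layout informative
    print(choose_split(appearance, layout, labels, rng=rng))
```

Because the kinematic-layout channel is unavailable at test time, any layout-driven split must ultimately be realized through appearance features; the sketch flags this in the docstring but does not implement the distribution-alignment step the abstract refers to.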


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.