File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Honolulu -
dc.citation.endPage 2174 -
dc.citation.startPage 2165 -
dc.citation.title IEEE Conference on Computer Vision and Pattern Recognition -
dc.contributor.author Lee, Namhoon -
dc.contributor.author Choi, Wongun -
dc.contributor.author Vernaza, Paul -
dc.contributor.author Choy, Christopher B. -
dc.contributor.author Torr, Philip H. S. -
dc.contributor.author Chandraker, Manmohan -
dc.date.accessioned 2023-12-19T18:37:22Z -
dc.date.available 2023-12-19T18:37:22Z -
dc.date.created 2020-12-02 -
dc.date.issued 2017-07-21 -
dc.description.abstract We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future predictions of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of the future prediction (i.e., given the same context, the future may vary), 2) foreseeing the potential future outcomes and making a strategic prediction based on that, and 3) reasoning not only from the past motion history, but also from the scene context and the interactions among the agents. DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples employing a conditional variational autoencoder, which are ranked and refined by the following RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions similar to IOC frameworks. An RNN scene context fusion module jointly captures past motion histories, the semantic scene context and interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost the prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and Stanford Drone Dataset. Our experiments show that the proposed model significantly improves the prediction accuracy compared to other baseline methods. © 2017 IEEE. -
dc.identifier.bibliographicCitation IEEE Conference on Computer Vision and Pattern Recognition, pp.2165 - 2174 -
dc.identifier.doi 10.1109/CVPR.2017.233 -
dc.identifier.issn 0000-0000 -
dc.identifier.scopusid 2-s2.0-85044340415 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/48978 -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title DESIRE: Distant future prediction in dynamic scenes with interacting agents -
dc.type Conference Paper -
dc.date.conferenceDate 2017-07-21 -
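
The abstract above outlines a two-stage pipeline: a conditional variational autoencoder (CVAE) proposes a diverse set of hypothetical future trajectories, and an RNN scoring-regression module ranks and refines them by accumulated future rewards. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: all layer sizes, module names, and the use of plain GRUs (omitting the scene-context fusion, agent interactions, and the iterative refinement feedback described in the paper) are assumptions made here for readability.

```python
# Illustrative DESIRE-style sketch (assumed structure, not the released code):
# a CVAE-like sampler proposes K future trajectories conditioned on the past
# motion, and a scoring RNN accumulates a per-step reward to pick the best one.
import torch
import torch.nn as nn


class DesireSketch(nn.Module):
    def __init__(self, hidden=48, latent=16, horizon=12):
        super().__init__()
        self.horizon, self.latent = horizon, latent
        self.past_encoder = nn.GRU(2, hidden, batch_first=True)  # encodes past (x, y) track
        self.decoder = nn.GRU(latent, hidden, batch_first=True)  # decodes a latent sample into offsets
        self.to_offset = nn.Linear(hidden, 2)
        self.scorer = nn.GRU(2, hidden, batch_first=True)        # IOC-style scoring of each hypothesis
        self.to_reward = nn.Linear(hidden, 1)

    def forward(self, past_xy, n_samples=20):
        """past_xy: (B, T_past, 2) observed positions -> best future (B, horizon, 2)."""
        B = past_xy.size(0)
        _, h = self.past_encoder(past_xy)                        # (1, B, hidden)

        # 1) Sample diverse futures: one latent z per hypothesis, decoded with the
        #    past-conditioned hidden state (test-time sampling draws z from the prior).
        z = torch.randn(B * n_samples, 1, self.latent).repeat(1, self.horizon, 1)
        h_rep = h.repeat_interleave(n_samples, dim=1)            # (1, B*K, hidden)
        dec_out, _ = self.decoder(z, h_rep)
        offsets = self.to_offset(dec_out)                        # (B*K, horizon, 2)
        last_pos = past_xy[:, -1:, :].repeat_interleave(n_samples, dim=0)
        futures = last_pos + offsets.cumsum(dim=1)               # integrate offsets into positions

        # 2) Score each hypothesis by its accumulated per-step reward and keep
        #    the highest-scoring sample per agent.
        score_h, _ = self.scorer(futures, h_rep)
        rewards = self.to_reward(score_h).sum(dim=(1, 2)).view(B, n_samples)
        best = rewards.argmax(dim=1)                             # (B,)
        futures = futures.view(B, n_samples, self.horizon, 2)
        return futures[torch.arange(B), best]                    # (B, horizon, 2)


if __name__ == "__main__":
    model = DesireSketch()
    past = torch.randn(4, 8, 2)      # 4 agents, 8 observed time steps
    print(model(past).shape)         # torch.Size([4, 12, 2])
```

At training time the latent code would instead come from a recognition network conditioned on the ground-truth future (the standard CVAE setup), which this test-time sketch omits.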

