File Download

There are no files associated with this item.
Related Researcher

Kim, Kwang In (김광인)
Machine Learning and Vision Lab.

Detailed Information

Preference and Artifact Analysis for Video Transitions of Places

Author(s)
Tompkin, James; Kim, Min H.; Kim, Kwang In; Kautz, Jan; Theobalt, Christian
Issued Date
2013-08
DOI
10.1145/2501601
URI
https://scholarworks.unist.ac.kr/handle/201301/26252
Citation
ACM TRANSACTIONS ON APPLIED PERCEPTION, v.10, no.3, pp.13
Abstract
Emerging interfaces for video collections of places attempt to link similar content with seamless transitions. However, the automatic computer vision techniques that enable these transitions have many failure cases, which lead to artifacts in the final rendered transition. Under these conditions, which transitions do participants prefer, and which artifacts are most objectionable? We perform an experiment in which participants compare seven transition types, from movie cuts and dissolves to image-based warps and virtual camera transitions, across five scenes in a city. We condition this experiment on slight and considerable view change cases, and analyze participant feedback to find preferences for transition types and artifacts. We discover that transition preference varies with view change, that automatically rendered transitions are significantly preferred even with some artifacts, and that dissolve transitions are comparable to less sophisticated rendered transitions. This leads to insights into which visual features are important to maintain in a rendered transition, and to an artifact ordering within our transitions.
Publisher
ASSOC COMPUTING MACHINERY
ISSN
1544-3558
Keyword (Author)
Video-based rendering; video transition artifacts; Human factors
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.