Related Researcher

김광인

Kim, Kwang In
Machine Learning and Vision Lab.


Full metadata record

DC Field Value Language
dc.citation.conferencePlace UK -
dc.citation.title European Conference on Computer Vision -
dc.contributor.author Mejjati, Youssef Alami -
dc.contributor.author Gomez, Celso F. -
dc.contributor.author Kim, Kwang In -
dc.contributor.author Shechtman, Eli -
dc.contributor.author Bylinskii, Z. -
dc.date.accessioned 2024-01-31T22:39:47Z -
dc.date.available 2024-01-31T22:39:47Z -
dc.date.created 2020-09-04 -
dc.date.issued 2020-08-26 -
dc.description.abstract Across photography, marketing, and website design, being able to direct the viewer's attention is a powerful tool. Motivated by professional workflows, we introduce an automatic method to make an image region more attention-capturing via subtle image edits that maintain realism and fidelity to the original. From an input image and a user-provided mask, our GazeShiftNet model predicts a distinct set of global parametric transformations to be applied to the foreground and background image regions separately. We present the results of quantitative and qualitative experiments that demonstrate improvements over prior state-of-the-art. In contrast to existing attention shifting algorithms, our global parametric approach better preserves image semantics and avoids typical generative artifacts. Our edits enable inference at interactive rates on any image size, and easily generalize to videos. Extensions of our model allow for multi-style edits and the ability to both increase and attenuate attention in an image region. Furthermore, users can customize the edited images by dialing the edits up or down via interpolations in parameter space. This paper presents a practical tool that can simplify future image editing pipelines. -
dc.identifier.bibliographicCitation European Conference on Computer Vision -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/78256 -
dc.language English -
dc.publisher ECCV 2020 -
dc.title Look here! A parametric learning based approach to redirect visual attention -
dc.type Conference Paper -
dc.date.conferenceDate 2020-08-24 -
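The abstract notes that users can dial the predicted edits up or down via interpolation in parameter space. A minimal sketch of that idea follows, assuming the model's global edits can be represented as a dictionary of scalar parameters (the parameter names and values here are illustrative assumptions, not taken from the paper):

```python
# Hypothetical illustration of parameter-space interpolation:
# blend each global edit parameter between the identity transform
# (no edit) and the model's predicted transform with a slider t in [0, 1].

def interpolate_params(identity, predicted, t):
    """Linearly blend each edit parameter: (1 - t) * identity + t * predicted."""
    return {k: (1 - t) * identity[k] + t * predicted[k] for k in identity}

# Illustrative identity and predicted foreground edits (assumed parameter names).
identity = {"exposure": 0.0, "saturation": 1.0, "contrast": 1.0}
predicted = {"exposure": 0.4, "saturation": 1.3, "contrast": 1.1}

# t = 0.5 applies the predicted edit at half strength.
half_strength = interpolate_params(identity, predicted, 0.5)
```

Because the edits are a small set of global parameters rather than per-pixel outputs, such interpolation is cheap and applies at any image resolution, which is consistent with the interactive-rate claim in the abstract.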
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.