Exploiting Style Latent Flows for Generalizing Video Deepfake Detection

Author(s)
Choi, Jongwook; Kim, Taehoon; Jeong, Yonghyun; Baek, Seungryul; Choi, Jongwon
Issued Date
2024-06-19
URI
https://scholarworks.unist.ac.kr/handle/201301/85286
Citation
IEEE Conference on Computer Vision and Pattern Recognition
Abstract
This paper presents a new approach to detecting fake videos, based on the analysis of style latent vectors and their abnormal behavior in the temporal changes of generated videos. We discovered that generated facial videos exhibit distinctive temporal changes in their style latent vectors, which are inevitable when generating temporally stable videos with various facial expressions and geometric transformations. Our framework utilizes the StyleGRU module, trained by contrastive learning, to represent the dynamic properties of style latent vectors. Additionally, we introduce a style attention module that integrates StyleGRU-generated features with content-based features, enabling the detection of visual and temporal artifacts. We demonstrate our approach across various benchmark scenarios in deepfake detection, showing its superiority in cross-dataset and cross-manipulation settings. Through further analysis, we also validate the importance of using temporal changes of style latent vectors to improve the generality of deepfake video detection.
Publisher
IEEE/CVF
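The abstract describes two components: a StyleGRU that encodes the frame-to-frame changes of style latent vectors, and a style attention module that fuses the resulting dynamics feature with content-based features. The sketch below is only an illustrative NumPy rendering of that pipeline under stated assumptions — the dimensions, random weights, and simplified dot-product attention are placeholders, and the paper's contrastive training of the StyleGRU is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_gru(d_in, d_h):
    """Random GRU weights (in the paper these would be learned contrastively)."""
    s = lambda *shape: rng.normal(0, 0.1, shape)
    return {k: s(d_in if k[0] == "W" else d_h, d_h)
            for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

def style_gru(deltas, p):
    """Encode the temporal changes (deltas) of style latents into one vector."""
    h = np.zeros(p["Uz"].shape[0])
    for x in deltas:                                   # T-1 frame-to-frame deltas
        z = sigmoid(x @ p["Wz"] + h @ p["Uz"])         # update gate
        r = sigmoid(x @ p["Wr"] + h @ p["Ur"])         # reset gate
        h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])
        h = (1 - z) * h + z * h_tilde
    return h

def style_attention(style_feat, content_feats):
    """Weight content features by similarity to the style-dynamics feature."""
    scores = content_feats @ style_feat / np.sqrt(style_feat.size)
    w = np.exp(scores - scores.max())
    w /= w.sum()                                       # softmax over content tokens
    return w @ content_feats                           # fused feature for the classifier

# Toy run: 8 frames of 512-d style latents, 16 content tokens of 64-d each
# (all sizes are assumptions for illustration only).
T, d_style, d_h, n_tok = 8, 512, 64, 16
styles = rng.normal(size=(T, d_style))
deltas = np.diff(styles, axis=0)                       # temporal changes of style latents
style_feat = style_gru(deltas, init_gru(d_style, d_h))
content = rng.normal(size=(n_tok, d_h))
fused = style_attention(style_feat, content)
print(fused.shape)                                     # (64,)
```

In this reading, a real/fake decision head would operate on `fused`; the key idea from the abstract is that the GRU sees only the *differences* between consecutive style latents, so the classifier is driven by temporal dynamics rather than per-frame appearance.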
