Related Researcher

Kim, KwanMyung
Integration and Innovation Design Lab.
Detailed Information

Synthetic Realities: Evaluating Human Ability to Distinguish AI-Generated Videos from Real Footage

Author(s)
Sarfraz, Danyal; Mubashar Karim, Raja; Kim, KwanMyung
Issued Date
2025-12-04
URI
https://scholarworks.unist.ac.kr/handle/201301/90247
Citation
IASDR2025
Abstract
Recent advances in generative AI have enabled multiple pathways for high-fidelity video synthesis: text-to-video generation, image-to-video animation, and video outpainting. Empirical side-by-side evaluations of how untrained viewers perceive and distinguish these outputs from real footage remain scarce. In this study, we systematically compare human detection accuracy across these three AI generation techniques within three thematic contexts: historical footage, film and media content, and natural environments. We constructed a balanced stimulus set comprising equal numbers of real and AI-generated videos (18 each). The AI clips were evenly distributed across the three generation methods using Google’s Veo 3, Lightricks LTX Video, and Wan VACE. All videos were produced and standardized within the ComfyUI framework to ensure consistent quality and duration. Eighty-seven participants judged every clip in a binary forced-choice task (“real” vs. “AI-generated”). Participants correctly identified videos 60% of the time on average. Image-to-video clips were recognized most accurately (79%), followed by real footage (64%), outpainting (49%), and text-to-video (43%). Accuracy also varied by theme: film and historical scenes yielded higher detection rates than environmental clips, which were frequently mistaken for AI. Logistic regression confirmed significant effects of both technique and theme as well as their interaction (p < 0.001), indicating that detection success depends jointly on how the content was generated and what it depicts. The findings reveal a consistent bias toward assuming synthetic origins and highlight that perceptual realism in AI video is shaped more by context than by model type, underscoring the importance of media-literacy approaches and context-aware evaluation tools for navigating increasingly synthetic visual media.
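The trial-level analysis described in the abstract (logistic regression of correctness on technique, theme, and their interaction) can be illustrated with a minimal self-contained sketch. Everything below is an assumption for illustration: the data are simulated from the per-technique accuracy rates quoted in the abstract (79%, 64%, 49%, 43%), not the study's real responses, and the model is fitted with plain gradient ascent rather than whatever statistical package the authors used.

```python
import numpy as np

# Hypothetical sketch of the abstract's analysis: regress binary
# correctness on dummy-coded technique and theme plus their interaction.
# Responses are SIMULATED from the accuracy rates quoted in the abstract;
# they are not the study's actual data.
rng = np.random.default_rng(0)

techniques = ["image-to-video", "real", "outpainting", "text-to-video"]
acc = {"image-to-video": 0.79, "real": 0.64,
       "outpainting": 0.49, "text-to-video": 0.43}
themes = ["historical", "film", "environment"]  # themes from the abstract

# simulate 87 participants' binary judgments per technique-by-theme cell
rows = []
for tech in techniques:
    for theme in themes:
        judgments = rng.random(87) < acc[tech]
        rows.extend((tech, theme, int(j)) for j in judgments)

def design(rows):
    """Intercept + dummy codes (first level dropped) + interactions."""
    X, y = [], []
    for tech, theme, correct in rows:
        td = [float(tech == lev) for lev in techniques[1:]]   # 3 dummies
        hd = [float(theme == lev) for lev in themes[1:]]      # 2 dummies
        inter = [a * b for a in td for b in hd]               # 6 interactions
        X.append([1.0] + td + hd + inter)
        y.append(correct)
    return np.array(X), np.array(y)

X, y = design(rows)

# fit logistic regression by gradient ascent on the log-likelihood
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / len(y)

p = 1.0 / (1.0 + np.exp(-X @ w))
print("mean predicted accuracy: %.2f" % p.mean())
```

With an intercept included, the fitted mean prediction converges to the observed mean correctness, and the technique and theme coefficients capture the cell-wise differences the abstract reports as significant. A real analysis would additionally compute p-values (e.g. via Wald or likelihood-ratio tests), which this sketch omits.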
Publisher
IASDR (TDRI & CID)


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.