“I know it’s a deepfake”: the role of AI disclaimers and comprehension in the processing of deepfake parodies

Rapid innovations in media technologies have ushered in diverse entertainment avenues, including politically oriented content, presenting both novel opportunities and societal challenges. This study delves into the implications of the burgeoning deepfake phenomenon, particularly focusing on audience interpretation and engagement with deepfake parodies, a quintessential example of “misinfotainment.” Additionally, it examines the potential impact of artificial intelligence (AI) disclaimers on audience understanding and related consequences. To probe this, two experiments (N = 2,808) were conducted featuring parodied politicians adopting opposing viewpoints on the issue of climate change. U.S. participants were exposed to deepfake videos either with or without a prefacing AI disclaimer. Results indicate that the inclusion of an AI disclaimer significantly influenced audience comprehension and their ability to recognize the parody. These factors were subsequently associated with enjoyment, discounting, and counterarguing, which in turn showed different relationships with policy support and sharing intentions. The article concludes with insights into the theoretical underpinnings and practical ramifications of these findings.

Hang Lu, Shupei Yuan, “I know it’s a deepfake”: the role of AI disclaimers and comprehension in the processing of deepfake parodies, Journal of Communication, Volume 74, Issue 5, October 2024, Pages 359–373, https://doi.org/10.1093/joc/jqae022
