Generative deep learning models are artificial intelligence (AI) systems that can create text, images, audio, and videos for specific purposes, following instructions provided by human users. Over the past few years, the content generated by these models has become increasingly realistic and is often difficult to distinguish from human-made material.
Many of the videos and images circulating on social media platforms today are created by generative deep learning models, yet the effects of these videos on the users viewing them have not yet been fully understood. Concurrently, some computer scientists have proposed strategies to mitigate the possible adverse effects of the spread of fake content, such as clearly labeling these videos as AI-generated.
Researchers at the University of Bristol recently carried out a new study that set out to better understand the influence of deepfake videos on viewers, while also assessing user perceptions when AI-generated videos are labeled as "fake." Their findings, published in Communications Psychology, suggest that knowing that a video was created with AI does not always make it less "persuasive" for viewers.
