Artificial intelligence can be used to generate deceptive videos that damage a politician’s reputation, even when viewers suspect the footage is fake. A new study published in Communication Research found that these manipulated clips decrease support for targeted candidates, and that standard fact-checking efforts fail to fully undo the reputational harm.
Disinformation created with artificial intelligence is widely regarded as a major threat to elections worldwide. The technology now allows malicious actors to seamlessly replace a person’s face or clone their voice, producing what are commonly called deepfakes. Political operatives can use these tools to make opposing candidates appear to say outrageous or offensive things.
Michael Hameleers, a communication researcher at the University of Amsterdam, led a team investigating how these videos influence the public. Hameleers and his colleagues Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber tracked voter reactions over time, aiming to discover whether manipulated videos actually change minds during an election cycle.
