Any time you log on to Twitter and look at a popular post, you’re likely to find bot accounts liking or commenting on it. Click through and you can see they’ve tweeted many times, often in a short time span. Sometimes their posts are selling junk or spreading malware. Other accounts, especially the bots that post garbled vitriol in response to particular news articles or official statements, are entirely political.
It’s easy to assume this entire phenomenon is powered by advanced computer science. Indeed, I’ve talked to many people who think that algorithms driven by machine learning or artificial intelligence are giving political bots the ability to learn from their surroundings and interact with people in a sophisticated way.
During events in which researchers now believe political bots and disinformation played a key role—the Brexit referendum, the 2016 Trump-Clinton contest, the Crimea crisis—it is widely assumed that smart AI tools allowed computers to pose as humans and help manipulate the public conversation.