They've got the whole (Wikipedia) world in their hands.
The image of bots waging war for years on end, silently arguing over tiny details on Wikipedia, is, let's be honest, pretty funny. Automatons with vendettas against each other? Come on.
But as amusing as the idea is, anthropomorphizing bot wars ignores what's actually important about their arguments: we didn't know they were happening. Bots account for over half of all web traffic, yet we know relatively little about how they interact with each other. They're just released into the World Wide Jungle to roam free. Given their share of the internet's activity, we should probably know more about them, especially since these warring bots weren't even malicious. They were benevolent.
A group of researchers at the Oxford Internet Institute looked at nine years' worth of data on Wikipedia’s bots and found that even the helpful ones spent a lot of time contradicting each other. And more specifically, there were pairs of bots that spent years doing and undoing the same changes repeatedly. The researchers published their findings on Thursday in the journal PLOS ONE.
But sometimes two bots have opposing functions. Maybe they've been programmed to follow slightly different grammatical rules, or maybe they're tasked with linking the same word to two different existing Wikipedia pages. Whatever the disagreement, every time one of them changes something on a Wikipedia page, the other will inevitably come along and revert it. And they'll keep doing that ad infinitum, because they're bots and they'll never get tired of it. Humans will eventually notice that they're undoing the same mistake over and over and try to find a workaround. Bots won't stop unless the humans who created them notice what's going on. On a site like Wikipedia, though, you might never notice. There's no top-down structure to the bots: they have to follow certain guidelines, but anyone can submit one, and once approved, it goes out and edits 40 million articles on its own. If it's in opposition to some other bot out there, it's up to the two creators to work it out. No one's around to police them.
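The dynamic is easy to sketch in a few lines of Python. The bot names and the spelling rule here are hypothetical, not actual Wikipedia bots; the point is only that each bot is correct by its own rule, and the conflict exists purely between them:

```python
# Hypothetical sketch of a revert loop: two well-behaved bots, each
# enforcing its own style rule, endlessly undoing each other's edits.

def bot_a(text: str) -> str:
    """Enforces British spelling."""
    return text.replace("color", "colour")

def bot_b(text: str) -> str:
    """Enforces American spelling."""
    return text.replace("colour", "color")

article = "The color of the flag"
history = []
for _ in range(4):  # in the study, some of these loops ran for years
    article = bot_a(article)
    history.append(article)
    article = bot_b(article)
    history.append(article)

# The page just oscillates between two states; neither bot ever "wins".
print(history)
```

Neither function is buggy in isolation, which is exactly why the loop is invisible to each creator until someone looks at the edit history itself.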
Wikipedia’s user-edited format makes it easy for bot creators to set their creations loose and not really check in as much as they should. And that’s also true of the internet at large. Bots produce about a quarter of all tweets, "view" half of all ads, and send many millions of messages across various chat platforms. There’s no overarching internet police that goes around trying to shut down bots, whether harmful or not, and that means the automatons can roam unchecked, interacting with humans and bots alike. And with no one monitoring them, it’s hard to know what they get up to. Wikipedia bots constitute only a fraction of a percent of all Wikipedia editors, but as these researchers found, they perform anywhere from 10 to 50 percent of the edits. They have an outsized effect on what people see on one of the most visited websites in the world. We should probably pay attention to their infighting.
And yes, it’s fun to point out that the bots on German Wikipedia corrected each other far less than the bots on the Portuguese or English sites did. Insert German engineering and efficiency joke here. But that totally misses the point. The difference between Portuguese and German Wikipedia bots disappears when you account for how many more edits Portuguese bots make. More edits, more fights. And though there are more bot-on-bot arguments than there are human-on-bot arguments, it’s not because bots argue more than humans. It’s just that bots edit more.
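The normalization argument above can be made concrete with a toy calculation. The numbers below are made up for illustration, not the study's figures; they just show why raw revert counts mislead and why reverts per edit is the fairer comparison:

```python
# Illustrative (made-up) numbers: Portuguese bots show 5x more bot-on-bot
# reverts in absolute terms, but the per-edit rate is identical.
stats = {
    "pt.wikipedia": {"edits": 500_000, "bot_bot_reverts": 5_000},
    "de.wikipedia": {"edits": 100_000, "bot_bot_reverts": 1_000},
}

rates = {lang: s["bot_bot_reverts"] / s["edits"] for lang, s in stats.items()}
print(rates)  # same rate for both: more edits, more fights, nothing more
```

Once you divide by edit volume, the apparent national difference vanishes, which is the researchers' point.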
In the end, any idiocy the bots have was baked in by humans. If they argue endlessly on Wikipedia or chat boards or Twitter, it’s because we made them that way. So don’t shoot the messenger (bot).