


How to deal with the bot crisis on Twitter

Apr 10, 2017 | 5 mins
Security, Social Engineering, Social Networking Apps

There’s a vast army of bots on Twitter, and they are on the move.

Credit: REUTERS/Fabrizio Bensch

You may have run into these bots a few times. What looks like an actual human being could, in fact, be a bot sending you Twitter spam — or worse.

It’s now widely known that during the last election cycle, and over the past few months in particular, Twitter bots — many with zero followers — promoted fake news stories. Often, the goal was to stir up dissension among voters, influence political viewpoints and, perhaps more importantly, generate ad revenue from the resulting page views. Some would argue these bots helped elect President Trump, or at least swayed how people on social media voted.

Yet, these same bot armies can do more damage than you might realize.

One of the most nefarious examples is when Twitter bots are used to inflate traffic to a website, a problem that hurts both the site owner and the advertisers. In other cases, Twitter bots act like trolls that viciously attack other users, often posting hate speech, racial slurs, and inflammatory comments as a way to marginalize a certain viewpoint or group of people.

Now, security analysts are starting to think about how to quell the onslaught.

Assessing the damage

The first step, as always, is to assess the damage the bots can cause and the seriousness of the problem. Phillip Hallam-Baker, a vice president and principal scientist at Comodo, says there’s a big difference between what analysts know about Twitter bots and what they suspect.

For example, it’s often easy to identify a single bot: it has no followers, and its tweets all look like spam. It’s much harder to know who is behind the bots, how they work together, and whether legions of human operators are directing them.
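A crude version of that first test can be sketched in a few lines of Python. The account fields, thresholds, and scoring here are invented for illustration — a toy heuristic, not a real detection system:

```python
from collections import Counter

def bot_score(account):
    """Score an account on crude bot signals; higher means more bot-like.
    `account` is a hypothetical dict of profile stats and recent tweets."""
    score = 0
    # Signal 1: zero followers but very high posting volume
    if account["followers"] == 0 and account["tweets_per_day"] > 50:
        score += 2
    # Signal 2: follows huge numbers of accounts with almost no followers back
    if account["following"] > 1000 and account["followers"] < 10:
        score += 1
    # Signal 3: most tweets are near-duplicates (spam-like repetition)
    texts = [t.lower().strip() for t in account["recent_tweets"]]
    top = Counter(texts).most_common(1)
    if top and top[0][1] / max(len(texts), 1) > 0.5:
        score += 2
    return score

suspect = {
    "followers": 0,
    "following": 5000,
    "tweets_per_day": 120,
    "recent_tweets": ["Buy now! bit.ly/x"] * 8 + ["hello"] * 2,
}
print(bot_score(suspect))  # → 5 (trips all three signals)
```

Real detection is far messier — as Atrey notes later, bots increasingly imitate real users — but this is roughly the shape of the "no followers, spammy tweets" test analysts start with.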

One example Hallam-Baker cites is the chatter from apparent Bernie Sanders supporters who repeatedly compared Hillary Clinton to Donald Trump. After the election, he says, that chatter suddenly died out — a sign that bots were involved.

Another technique involves setting up false accounts that banter back and forth. This creates a stir because it looks like multiple users are all debating an important topic.

“We know that Russia has large operations in which people are paid to troll various chat rooms for various purposes from accounts of people who worked there,” says Hallam-Baker.

“They will do things like working in pairs pretending to have a conversation [on Twitter]. This allows them to put forward the worst arguments for a policy so they can be easily refuted.”

Dennis Egen, the founder of security-focused web-dev firm Engine Room, tells CSO that Twitter bots are also used to attack corporations. They often go after a specific brand, which is what happened when Coca-Cola’s automated marketing campaign was tricked into tweeting portions of Mein Kampf.

In fact, Egen says this type of attack is much easier to pull off than a data breach, and perhaps harder for defenders to thwart. In the end, anarchy is the goal.

Pradeep Atrey, co-director of the Albany Lab for Privacy and Security at the University at Albany, tells CSO that identification is key, but not always easy.

“These bots are interconnected and if we understand how to break through their chain, we can stop them,” he says. “However, just like we are getting smarter in recognizing them, they’re also getting smarter at doing what they do. They will imitate a real user.”

Atrey says Twitter bots can also be used to carry a payload — delivering malware, luring users into phishing attacks, and duping everyday users the same way spam does.

What to do about the problem

Hallam-Baker says the Twitter bot crisis is not something security analysts have addressed properly so far. Yet, the bots are winning — they are helping to create dissension, they do tend to inflate page views at websites, and they are spreading fake news all over the web. He says Twitter bots might warrant emergency meetings — similar to what happened in the early days of spam — to decide how to fight the attacks and work toward a solution.

“I would use modern AI, which is nothing more than hyper sophisticated statistical analysis,” Hallam-Baker said. “I’d identify which approaches were most effective at driving the intended propaganda result. That data would then be used to direct human and machine based interactions and I would gather data from all those interactions to constantly improve results.”
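The "sophisticated statistical analysis" Hallam-Baker describes can be illustrated with a tiny stand-in: a logistic-regression classifier trained on labeled account features. Everything below — the feature set (followers in thousands, tweets per day scaled by 100, duplicate-tweet ratio) and the training data — is invented for illustration; a real system would use far richer signals:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=1000):
    """Minimal logistic-regression trainer (stdlib only).
    X: rows of numeric features, y: labels (1 = bot, 0 = human)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # sigmoid
            err = p - yi                      # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# Invented data: [followers (thousands), tweets/day ÷ 100, duplicate-tweet ratio]
X = [[0.0, 1.2, 0.90], [0.0, 0.8, 0.80], [0.0, 2.0, 0.95],   # bots
     [1.2, 0.05, 0.0], [0.5, 0.10, 0.10], [3.0, 0.03, 0.0]]  # humans
y = [1, 1, 1, 0, 0, 0]

w, b = train_logreg(X, y)
print(predict(w, b, [0.0, 1.5, 0.90]))   # bot-like account
print(predict(w, b, [2.0, 0.04, 0.0]))   # human-like account
```

The feedback loop he describes — measure which interventions work, then retrain on those results — is the same loop shown here, just run continuously on live data instead of a fixed toy set.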

Egen takes an interesting view on counterattacks. Often, the only way to fight bots is with more bots. You can imagine how this might work. If an army of Twitter bots sends out propaganda material or attacks a brand, another competing army might respond and call them out.

Yet, he says this is not rocket science. Some of the Twitter bots are easy to identify — they almost always follow thousands of people yet have no followers of their own. One simple tool, he says, is called Twitter Audit, which lets you find out how many of your followers are legitimate.

Still, that’s a shot in the dark. Atrey says another issue is related to false positives — obviously, not all Twitter bots are dangerous. For now, it’s a rising problem. The question is how to contain the bots, predict their behavior, and develop a plan of action before they wreak more havoc.



John Brandon is a technologist, product tester, car enthusiast and professional writer. Before becoming a writer, he worked in the corporate sector for 10 years. He has published over 8,500 articles, many of them for Computerworld, TechHive, Macworld and other IDG entities.

The opinions expressed in this blog are those of John Brandon and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
