The power of platforms

In an age of information, the most dangerous dual-use technology is not kinetic.

Social media and sharing platforms like Twitter and Facebook were made to connect people and facilitate communication. They were started with the best of intentions and to some extent high-minded ideals, but things are not exactly working out as envisioned.

For the past year we’ve been bombarded with non-stop reporting about the impact of fake news and foreign interference in the electoral process. Russian troll farms and other threats have called into question the integrity of the political process. The medium through which these attacks took place? Platforms like Twitter and Facebook.

What has been the response from the platforms? Inadequacy. Some would say ineptitude. Forget state-sponsored information warfare campaigns; they can’t even stop ordinary people from being bullied or impersonated.

Weaponizing platforms brings about real results. Consider the #NeverAgain and #boycottNRA campaigns in the wake of the Parkland shooting: one of the most powerful and effective responses to a mass shooting to date. While radical changes to the nation’s gun laws are unlikely, one would be hard pressed to recall a recent effort that has had such an economic impact on the industry’s biggest lobbyist, or on public opinion in general.

The self-licking ice cream cone

The very things that make platforms useful also work against them as they try to combat offensive or false messages and themes. The recommender algorithms that strive to keep you on a platform end up pushing even more radical content to your screen, fueling your anger and outrage. Instead of connecting people to make the world a better place, platforms end up facilitating:

  • Failure to communicate. The average person looking for news cannot distinguish the real from the fake (nor can platforms). More time and effort are spent trying to debunk lies than report facts (and eventually the veracity of the fact-checkers will come into question).
  • Churn and balkanization. Those who don't give up on social media outright will leave hostile platforms for friendly ones. Lather, rinse, repeat as those who espouse a particular position dominate each new platform, forcing others out. The bully pulpit will no longer resonate with a critical mass; it will be parsed into a thousand different echo chambers.
  • Facilitating injustice. Efforts like the Shitty Media Men list are going to pop up for myriad issues like dandelions in April. The negative impact will be indiscriminate. If things go the way futurist John Robb predicts and such efforts are linked to a blockchain, there will be no place – to quote Raymond J. Donovan – to get your reputation back.
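The feedback loop behind that first problem — a recommender optimizing purely for engagement surfacing ever more extreme content — can be illustrated with a toy simulation. Everything here is hypothetical: the engagement model and scoring are assumptions for illustration, not any platform's actual algorithm.

```python
import random

random.seed(42)

# Toy catalog: each item has an "extremity" score in [0, 1]. We assume
# (hypothetically) that expected engagement rises with extremity.
catalog = [{"id": i, "extremity": random.random()} for i in range(1000)]

def expected_engagement(item):
    # Assumed engagement model: more extreme content holds attention longer.
    return 0.2 + 0.8 * item["extremity"]

def recommend(items, k=10):
    # A purely engagement-maximizing recommender: rank by predicted
    # engagement and return the top k. No notion of accuracy or harm.
    return sorted(items, key=expected_engagement, reverse=True)[:k]

feed = recommend(catalog)
avg_catalog = sum(i["extremity"] for i in catalog) / len(catalog)
avg_feed = sum(i["extremity"] for i in feed) / len(feed)
print(f"catalog average extremity: {avg_catalog:.2f}")
print(f"feed average extremity:    {avg_feed:.2f}")
```

Because engagement is monotone in extremity here, the feed is dominated by the most extreme items: the catalog averages near the middle of the scale while the recommended feed sits near the top. Any objective that rewards raw engagement will behave this way if extremity and engagement are correlated.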

Technology is probably not the answer

While they view themselves as technology companies, social platforms are likely to find that this problem will not be solved by creating yet another algorithm. Nor will it be solved by hiring thousands of humans to sort through content manually. It is futile to think that you can stop people from being offensive or spreading rumor and untruth, especially in an age when you can “trigger” people at a “micro” level. In the end, success is more likely to come from setting some new ground rules for platform participation:

  • Real names. Do you know why people don’t say in meatspace the things they say online? Because they’d get punched in the face. A lot. It's amazing how behavior changes when the ability to hold people accountable is restored.
  • Sticks and stones. If you need to be an adult to use a given platform, the onus should be on the user to deal with things like an adult, not expect the platform to play mommy or daddy. Remember that it's the Internet: hyperbole is a part of the deal.
  • What about the children? Platforms know if you’re a kid, and they know when activity associated with a kid gets weird. Making it easy for kids to discreetly report bad behavior and notifying parents of same can help avoid needless tragedy. If you’re placing a child’s privacy over their life, you have problems no algorithm can fix.

If platforms want to put technology to work, they might consider ranking or rating users on new criteria. Red checks for Nazis. Purple diamonds for the People’s Front of Judea. Five colored stars for people who reliably parrot a given party line. Identifying ‘the opposition’ and/or avoiding voices you find offensive probably does have an algorithmic solution. It would stave off balkanization, make it easier to study phenomena, and improve our ability to identify true threats.
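One way to read that suggestion: a label is just metadata attached to an account, and letting readers filter by label is trivial once labels exist. A minimal sketch under that assumption — the labels, names, and feed structure below are all hypothetical illustrations, not any platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    handle: str
    labels: set = field(default_factory=set)  # e.g. {"party-line", "verified"}

def mute_labels(feed, muted):
    # Hide posts whose author carries any label the reader has muted.
    return [post for post in feed if not (post["author"].labels & muted)]

alice = User("alice", {"party-line"})
bob = User("bob")
feed = [{"author": alice, "text": "talking point #47"},
        {"author": bob, "text": "lunch photos"}]

visible = mute_labels(feed, muted={"party-line"})
print([p["author"].handle for p in visible])  # → ['bob']
```

The hard part, of course, is not the filtering but assigning the labels fairly and transparently; the sketch only shows that once labels exist, giving each reader control over what they see is cheap.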

Finally, if platforms really want to make an effort to reduce the effectiveness of weaponization, they should seriously consider funding efforts to improve critical thinking skills for current and future users (on the platform and off). You don’t give up your neutrality by making sure people can engage with their eyes and minds open. This is, after all, primarily a people problem.

Absent a meaningful response on the part of platforms, the response from government is going to be regulation. If you think that’s unlikely, I’m sure veterans from Microsoft or AT&T would be happy to explain what happens when the government decides, lobbying efforts notwithstanding, you’re not doing enough to address a problem.

This article is published as part of the IDG Contributor Network.