Fretting over fake news? It's only going to get worse

Soon, not even experts will be able to tell the difference between fraudulent and genuine content. Ultimately, it comes down to the reputation of whoever created it


If you’re worried about fake news, you ain’t seen nothing yet. Soon we may not be able to tell the difference between a fake video and a real one, even forensically. What we are seeing today is the tip of the iceberg.

Fake news has already altered the world forever. It’s always been a huge business—the National Enquirer has been around since 1926. But fake news came to a head in the 2016 U.S. presidential election and, more recently, even led a Pakistani minister to threaten nuclear war. Today, the big social media sites are trying to come up with new ways to filter out the crap.

Ever since I learned that Hollywood was working on a method to re-create our favorite movie stars as digital doppelgangers, I’ve known that our days of being able to tell real from fake are numbered. I mean, today's live concerts feature incredibly lifelike holograms of dead rockers.

Soon enough, everyone will have the capability of creating really good fake content. Adobe’s Project Voco lets you not only edit existing speech easily, but also create new speech that sounds as if the original person said it. If you want to see how far we’ve come technologically, read The Verge’s “Artificial intelligence is going to make it easier than ever to fake audio and video.” Software to automate the creation of fake content has come a long way.

What is reality?

As it stands, we humans are bad at figuring out what is and isn’t fake. We have decades of experience with email phishing, for example, to tell us we’re pretty awful at detecting forgeries.

That only applies to old people who aren’t really comfortable with computers, right? Not so fast. A recent Stanford University study revealed that young folks aren’t much better: more than 80 percent of the students tested judged a story prominently labeled “sponsored content” to be a legitimate news story. To add to the confusion, the leader of the free world is calling any news story he disagrees with “fake news,” and a large percentage of his followers agree.

Thus, you have fake news that’s believed by tens of millions of people and real news disbelieved by a similar number. It’s all very confusing. I have little doubt there will come a time when even professional investigators will be unable to tell the difference between real and fake news.

Validation by reputation

We’re left, then, to rely on the reputation of the services that provide us with news. Some political supporters might try to convince you that established news sources aren’t trustworthy anymore, but that’s far from the truth. Sure, reporters and editors sometimes make mistakes, and occasionally outlets get tricked into publishing incorrect information, but for the most part they do a consistent, reliable job.

Part of the reason is that their operating culture has long required that any fact be confirmed and cross-checked before it can be published. Citizen bloggers can publish anything they like without verification. A legitimate news service, by contrast, observes quality-control processes that, under normal circumstances, cannot be subverted.

There’s a cost to that fact-checking: speed. No one wants to be second to publish, especially when they have the “facts” first. But established news organizations have wrestled with such challenges for decades.

Future solutions

Many social media sites already use visible cues, such as blue check marks, to indicate that an account or piece of content has been verified. In the future, we may see third-party reputation services that function much like traditional Certification Authorities, attesting to the “realness” of a news story or other content.
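
To make the Certification Authority analogy concrete, here is a minimal sketch in Python, using the third-party cryptography package, of how such an attestation could work. Everything in it is hypothetical: the authority, the key handling, and the function names are invented for illustration, not taken from any existing service.

# Hypothetical sketch: a "news authority" signs the exact text of a story,
# and a reader's browser or plugin later verifies that signature.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The authority's key pair. In practice, its public key would be distributed
# roughly the way CA root certificates are distributed today.
authority_key = Ed25519PrivateKey.generate()
authority_public = authority_key.public_key()

def attest(article_text: str) -> bytes:
    # Authority side: sign the article after whatever vetting it performs.
    return authority_key.sign(article_text.encode("utf-8"))

def verify(article_text: str, signature: bytes) -> bool:
    # Reader side: confirm the displayed text matches what was attested.
    try:
        authority_public.verify(signature, article_text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

story = "Example story text as it appeared when the authority vetted it."
sig = attest(story)
print(verify(story, sig))                # True: the text is unchanged
print(verify(story + " [edited]", sig))  # False: any alteration breaks the attestation

The point of the sketch is simply that, as with certificates on the web, the question shifts from “is this text true?” to “do I trust whoever signed it?”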

Who knows? You might even have an antivirus-like scanner service that you run against a displayed news story to return a rating. Right-click the website content and a “reputation score” comes back.
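
If such a scanner existed, the plumbing could be as simple as the sketch below: fingerprint the text on the page and ask a reputation service for its rating. The endpoint, request format, and 0-to-1 score are all invented here for illustration; no such public API exists today.

# Hypothetical sketch of the "right-click for a reputation score" idea.
# The service URL and its JSON response format are placeholders, not a real API.
import hashlib
import json
from urllib import request

REPUTATION_API = "https://reputation.example.com/score"  # placeholder URL

def reputation_score(article_text: str) -> float:
    # Fingerprint the displayed text so the service can look up prior verdicts.
    fingerprint = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    req = request.Request(
        REPUTATION_API,
        data=json.dumps({"sha256": fingerprint}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Assume the imaginary service answers with {"score": 0.0 .. 1.0}.
    with request.urlopen(req) as resp:
        return json.load(resp)["score"]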

Trust, or the lack thereof, applies equally to the worlds of security and content. You have fake tech support scams calling people’s houses. You have fake law enforcement and IRS calls threatening jail time if people don’t run to the nearest store to buy prepaid cards or wire money to the perpetrator. It’s getting so that you have to be skeptical of any new claim just to survive in this world. Perhaps we’ll end up with a real-time “Minority Report”-style display we can call up whenever a reputation question arises, in the digital world or elsewhere. That would be the true “pre-crime” detector.

Society runs on trust. Right now fake news and fake content are causing cultural disruption. I’m betting it’s cyclical. Ultimately, particularly if we have new tools to help us, I am hopeful we’ll become more trusting again.
