What trust and safety leaders need to know after Google, Facebook and Twitter Senate hearings

And how social platforms can overcome increased scrutiny and regulations.


Tech giants Facebook, Google, and Twitter found themselves squarely in the crosshairs of lawmakers last week as representatives from the three companies went before the U.S. Senate to testify. Republicans and Democrats put up a unified front against the tech giants. Questions of free speech, censorship, fake news and the possibility that the 2016 presidential election was influenced by Russia were all part of the hearings.

In addition to concerns over private interests and foreign powers influencing public opinion with massive propaganda campaigns, lawmakers also showed concern about illicit and illegal behavior like human trafficking and fraud.

In a recent poll conducted by Axios, 28 to 29 percent of consumers said they view Facebook and Twitter either somewhat or very unfavorably, a far worse showing than other major tech firms.

The survey found Facebook’s biggest public relations problem is fake news, while Twitter’s is bots pretending to be users—including the Russian troll bots that help Russian propaganda become trending topics. Recognizing the writing on the wall, Facebook plans to double its safety and security team from 10,000 to 20,000 people by the end of 2018.

However, Facebook and Twitter aren’t the only brands on trial. Lawmakers are also concerned that search engines, online marketplaces, and the digital ad ecosystem enable bad actors like human traffickers, spammers, identity thieves, and credit card fraudsters.

Even before the hearings, Congress had been working on new legislation. Perhaps the most draconian example is the Stop Enabling Sex Trafficking Act, which endeavors to hold sites and apps legally responsible for content on their platforms that enables sex trafficking. The hearings also put a spotlight on the Honest Ads Act, which holds publishers responsible for building better ad transparency, including disclosing who purchased political ads.

Moving forward, we can bet that there will be increased scrutiny over tech firms’ ability to police user-generated content and third-party digital ads published on their web and mobile properties. If your company is part of this ecosystem, it would be wise to prepare for more governmental scrutiny and new regulations.

To keep your organization ahead of the curve, the first thing you should consider is corporate culture: ensure good Trust and Safety practices are embedded at every level. Arguably the most important challenge is to inject trust and safety into the product development lifecycle. It can’t be overstated how important it is that products are built with safety and security from day one.

How are you handling user reporting? It is your best early warning system and a great resource for catching trends. Too many companies ignore their users, at their peril.
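To make that concrete, a report-triage pass can be sketched in a few lines. This is a minimal illustration, not a real platform API: the event shape, field names, and thresholds are all hypothetical and would need to be adapted to your own reporting pipeline.

```python
from collections import defaultdict

def flag_spiking_targets(reports, window_hours=24, threshold=5):
    """Count user reports per reported target within a recent time window
    and flag any target whose report volume crosses the threshold.

    `reports` is a list of dicts like {"target": user_id, "age_hours": float};
    this shape is hypothetical -- adapt it to your own report schema.
    """
    counts = defaultdict(int)
    for r in reports:
        if r["age_hours"] <= window_hours:
            counts[r["target"]] += 1
    # Targets reported at or above the threshold deserve analyst review.
    return {target for target, n in counts.items() if n >= threshold}
```

Even a simple rolling count like this turns raw user reports into an early warning signal an analyst can act on.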

In terms of creating or managing a Trust and Safety program, your goal is to proactively shut down bad behavior. Too many companies are reactive, taking a week or more to respond. A week is simply too long; you want to be addressing issues as close to real time as possible.

To be proactive, you need a way to zoom out and see things on a macro level, so you can spot trends in aggregate. Let’s say you’re a social network and you catch an actor using the platform to spread spam messages to other users. How can you be certain shutting down that profile solves the problem? What is stopping them from creating a new profile and coming back as a ‘different user’? You want a platform that can show you related entities (e.g., multiple users coming from a single device), so you can shut down repeat offenders once and for all.
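The related-entity idea above can be sketched with a simple grouping over identity signals. This is a hedged illustration under assumed inputs: `(user_id, device_id)` pairs stand in for whatever device fingerprints or other signals your platform actually collects.

```python
from collections import defaultdict

def related_accounts(logins):
    """Group accounts that share a device fingerprint.

    `logins` is an iterable of (user_id, device_id) pairs -- a hypothetical
    event shape standing in for real identity signals. Returns a mapping of
    device_id -> set of user_ids for devices used by two or more accounts,
    i.e., the clusters worth reviewing as potential repeat offenders.
    """
    by_device = defaultdict(set)
    for user, device in logins:
        by_device[device].add(user)
    return {d: users for d, users in by_device.items() if len(users) > 1}
```

When a banned spammer returns under a fresh profile, the shared device links the new account to the old cluster, so enforcement can target the actor rather than a single profile.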

One of the most important features of a Trust and Safety program is the human element. Keep humans in the loop! While machine learning and automation are critical components of a proactive program, humans need to be involved to reclassify events and stay ahead of the dynamic threats you face.

In the war against bad actors, nothing is more potent than a knowledgeable analyst who knows your industry and is armed with highly enriched, timely, and accurate data. Because these people are rare, their time is critical. They need a system that gives them a constant stream of bad-user signals, in the context of the behavior or threat they’re tracking, in near real time. You can’t afford to have these people sifting through reams of useless data looking for a needle in a haystack.

Finally, get smart people who can create rules tailored to your policies. Enable these people with a platform that has its own rules engine, so they (or you) don’t have to constantly petition engineering for development cycles. Lowering fraud should not be a Sophie’s Choice between working cases and project-managing engineering.
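A rules engine of the kind described can be as simple as named predicates evaluated against an event. The sketch below is a toy, not a product recommendation; the rule names and event fields (`msgs_per_min`, `account_age_days`, `links_posted`) are invented for illustration.

```python
# A toy declarative rules engine: each rule pairs a name with a predicate
# over an event dict, so policy staff can add or tune rules without a
# full engineering release cycle. All field names here are illustrative.
RULES = [
    ("burst_messaging", lambda e: e.get("msgs_per_min", 0) > 30),
    ("new_account_spam", lambda e: e.get("account_age_days", 999) < 1
                                   and e.get("links_posted", 0) > 5),
]

def evaluate(event, rules=RULES):
    """Return the names of every rule this event trips."""
    return [name for name, predicate in rules if predicate(event)]
```

In a real system the rules would live in a store the Trust and Safety team controls, but the shape is the same: policy expressed as data, evaluated by the platform, with no code deploy in between.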

It may seem overwhelming, but many of your peers are already building these types of programs. It is the best chance to stay ahead of bad actors, increased governmental scrutiny, and legislation.

This article is published as part of the IDG Contributor Network.
