How Facebook and Google are battling internet terrorism

Leading platform providers are exploring new ways to actively engage in counter-messaging, building on robust systems to flag and remove extremist content.


WASHINGTON -- Social media heavyweights like Facebook and YouTube have been working with the U.S. government and other international partners as they look to take a more active role in combating terrorist propaganda and other extremist messages that have gained traction online.

Officials from the popular social network and YouTube parent Google addressed the issue here at a recent tech policy conference, where they described efforts to go beyond simply removing extremist content, and actually engaging in counter-messaging programs to present alternative narratives to those advanced by groups like ISIS.

"We're really focused on utilizing the strength that comes out of YouTube to push back on these messages," said Alexandria Walden, Google's counsel on free expression and human rights. "We know the power of our platform, and so we know that the best way to counter messages of hate and violence is to promote messages that push back against that, that push back against the hate and extremism and xenophobia around the world."

"We really do believe that technology and especially video can be a force for good," she added.

Likewise, Monika Bickert, a former assistant U.S. attorney who heads global policy management at Facebook, described a large and growing infrastructure the company has been building out to keep the site free from terrorist-inspired material, conduct research on how to effectively deliver counter-messages and form partnerships with groups around the world to amplify that content.

So, for instance, Facebook has been hiring terrorism experts who can both inform the company's own efforts to police its site and promote alternative messages, and then coordinate with organizations around the world that are engaged in similar missions.

Forging partnerships to spread counter-terror messages

In one initiative, Facebook has been partnering with universities to set up challenges for teams of students to develop counter-messaging campaigns. One developed last semester by students in Afghanistan, dubbed "Islam says no to extremism," alone reached 5 million people online, according to Bickert.

"The campaigns have reached tens of millions of people," she said. "Some of the campaigns are just absolutely amazing in terms of how many people they reach."

Google has been backing other efforts to counter extremist propaganda online, including offering up tailored ads to users who might be recruitment targets. Last September, Google launched the "Creators for Change" campaign, through which the company identifies potentially influential YouTube users and works to "resource them up and help them understand how to utilize their audience, which is really millennials around the globe, to kind of convey messages that push back on hate and extremism and violence and xenophobia," Walden said.

The new administration is also picking up on the role that technology could play in countering extremist messages. On Saturday, President Trump issued a memo directing members of his national security team to develop a blueprint to defeat ISIS, requesting that the Defense Department and other agencies submit their plan within 30 days. That plan should include counter-terrorism efforts in the areas of "public diplomacy, information operations and cyber strategies to isolate and delegitimize ISIS," the memo states.

There are also encouraging signs that ISIS is losing momentum in its own propaganda efforts. An October 2016 study by the Combating Terrorism Center at West Point found that at its peak in August 2015, ISIS was responsible for circulating more than 700 distinct pieces of communication in a month. One year later, following some significant setbacks on the physical battlefield, that number was down to fewer than 200 in a month.

Robust content filtering

Apart from their counter-messaging efforts, both Facebook and Google do have robust mechanisms for flagging and removing extremist content that violates their terms of use. In the case of Facebook, that entails a far-flung team that is trained to scan for extremist content that violates the site's terms -- a tricky exercise that requires screeners who can speak the local language and can distinguish when certain materials are being used for propaganda and incitement or for legitimate purposes.

"Context really matters when you're talking about terrorism content," Bickert said. "Somebody can use the ISIS flag, a photo of the ISIS flag, and it may be the BBC saying this is something that ISIS has just done and there's a still image from one of ISIS' videos or something like that. That doesn't violate our policies. People can definitely come and talk about events of the day, but if somebody is using that ISIS flag -- an image of it to say, 'I like this group' or 'We ought to join this group,' or they're showing clips of an ISIS video and not clearly condemning it, that is something that violates our policies and we would remove it."

At Google, Walden stressed that the overwhelming majority of users have no sinister motives for using the company's services, and described the company's so-called community policing system as an effective tool for removing extremist content and other objectionable materials.

"[W]e know that community policing works," she said. "Content comes down quickly, and when it doesn't we escalate those things and make sure that it does."

This story, "How Facebook and Google are battling internet terrorism" was originally published by CIO.

Copyright © 2017 IDG Communications, Inc.
