Think tank wants tech firms to be held responsible for online terrorism

Policy Exchange wants to hold tech companies’, journalists’ and academics’ feet to the fire when it comes to countering online extremism.

I actively try to avoid idiots, but every once in a while I accidentally step in a pile of idiot juice. Today, that happened to be an “investigative” report that basically blames Amazon for aiding terrorists.

The article in question is “Potentially deadly bomb ingredients are ‘frequently bought together’ on Amazon.” In essence, it makes it sound like Amazon is a one-stop shop for terrorists.

The investigation by Channel 4 News allegedly reveals “how Amazon’s algorithm can guide users to the chemical combinations for producing explosives.” The report blames Amazon for inciting terrorism because an item that is innocent by itself could be combined with the products suggested in the “frequently bought together” or “customers who bought this item also bought” sections to build an explosive device.

New Netwar

While it’s not exactly the same thing, a new report by U.K. think tank Policy Exchange also attempts to put a good amount of blame on tech companies for not doing enough to counter online extremism. Granted, ISIS is a problem, but the paper, The New Netwar: Countering Extremism Online (pdf), touches on some worrisome ideas.

In fact, the foreword, which was written by former CIA director General David Petraeus, includes this:

It is evident too that when it comes to discussions about online extremism, we need to ask ourselves difficult questions and contemplate uncomfortable answers. Do police and security services have the powers that they need to combat the threat? What more can we do to ‘raise the bar’ in terms of de-incentivizing the possession and consumption of extremist content? At a broader level, do we have the balance right, between freedom of speech and privacy rights on the one hand, and security on the other? And how far should democratic governments interpose themselves into the online space?

When the word “balance” is used regarding privacy and security, brace for privacy to lose.

No, I don’t want to see the terrorists gain ground, not even in the cyber domain, but I don’t want to see freedom of speech take a hit, either. How much is too much when informing the public about terrorists?

According to the report:

Just as many have called for social media companies to be responsible for the use (and abuse) of their platforms, media outlets, journalists and researchers must be far more careful when it comes to (re-)posting content that, however inadvertently, amplifies the reach of ISIS. As this research has shown, it is not academic research per se, that is the problem, but the careless dissemination of ISIS content and announcements on social media and public blogs, which increases its findability and availability. ... A simple and easy first step would be to encourage researchers/journalists to sign up to an ethical code of conduct, specifically focused on those who study illegal groups and content.

Overall, I get the gist of hampering jihadist “swarmcasts,” the way they manage to keep online propaganda alive, but where would the line be drawn when reporting on ISIS? The paper admitted that if academics and news agencies stopped publishing ISIS material, “the speed, agility and resilience of the media mujahidin would still present a significant challenge. However, it is imperative that each organization and individual takes responsibility for their own behavior to ensure they do not amplify the reach and findability of content, however inadvertent that may have been in the past.”

Britain’s Home Office, the paper said, might hold universities or media outlets accountable for content that could be consumed by the public or regurgitated on blogs and social media, much as has been done to keep indecent images of children from being posted online without consequences.

Furthermore, “the government could consider new legislation that would criminalize the ‘aggravated possession and/or persistent consumption of material that promotes hatred and violence in the service of a political ideology.’”

Report says tech companies aren't doing enough

The biggest chunk of the report is aimed at tech companies not doing enough to combat and to take down online extremist material. It is based, more or less, on the premise that tech companies should be treated as the publishers of extremist content.

Human moderation, which relies on users to flag content, is a fail, but despite Facebook, Google, Twitter and Microsoft forming the Global Internet Forum to Counter Terrorism, there have not been huge improvements in artificial intelligence that can identify and automatically remove extremist content.

Yet the report pointed out that when “it suits their interests to act,” tech companies can quickly develop new tech. The paper cited an example of Facebook developing new facial recognition software that can capture emotional states and moods in real time.

Under a header about internet companies “drying up the online supply,” the report talked about efforts endorsed by the Five Eyes intelligence alliance made up of the U.S., U.K., Australia, Canada and New Zealand.

Although the report acknowledged steps tech companies have already taken to combat online extremism, such as Google’s efforts on YouTube and those of Twitter and Facebook, New Netwar suggested that legislation could be used to prosecute repeat tech company offenders, such as those whose algorithms "recommend" extremist content, place ads on it and promote it.

The report also suggested there should be a new independent regulator of social media content “to make sure that ‘people who watch television and listen to the radio are protected from harmful or offensive material’ and that ‘viewers of video on demand services are protected from harmful content.’”

The paper suggested internet companies should be required “to work with and fund the efforts of an expanded Counter Terrorism Internet Referral Unit.” The CTIRU is based in the Metropolitan Police’s Counter Terrorism Command.

While there are many suggestions, a couple of options for change that stood out included “civil remedies that treat the possession of extremist material as a form of anti-social behavior” and “criminalizing possession and consumption.”
