Is the ‘right to explanation’ in Europe’s GDPR a game-changer for security analytics?

Opinion
Jan 29, 2018 | 5 min read
Analytics, Privacy, Regulation

Multinational companies are making major adjustments in the types of software solutions they use to analyze personal data in the wake of the General Data Protection Regulation (GDPR).

Credit: Etienne Ansotte/EU

Come May 25th of this year, the European Union’s General Data Protection Regulation (GDPR) becomes enforceable. How are multinational companies that rely heavily on analytic software in their enterprise security and insider-threat mitigation programs ensuring they comply with it? The answer is that many are — or should be — making major adjustments in the types of software solutions they use to analyze personal data.

The GDPR is designed to strengthen security and privacy protections for data on the citizens of all 28 EU member states, including data held outside the EU by companies that count its citizens among their employees or customers. (Several non-EU countries are also adopting the GDPR.) This is the EU’s first significant regulatory refresh since its 1995 data protection directive, and the implications are profound.

Of particular relevance to the corporate security community is a new “right to explanation” accorded to all EU citizens who are subject to “automated decision-making” — that is, decisions made solely with software algorithms. (Other GDPR requirements relating to data processing and storage, data mapping and access, data breaches, cross-border data transfers and the like are beyond the scope of this post.)

More than one GDPR provision is related to the right to explanation, so a brief summary is in order:

  • Article 22: Grants citizens “the right not to be subject to a decision based solely on automated processing” that “significantly affects him or her.”
  • Recital 71: The data subject should have “the right… to obtain an explanation of the decision reached… and to challenge the decision.”
  • Article 13: The data controller must provide the subject, at the time his or her personal data is obtained, with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing” for the subject.
  • Article 15: Subjects have a right to know what personal data a company is using and how it’s being used.

I have argued for years that users of purely data-driven analytic solutions are ill-served by those systems’ utter inability to explain why a particular decision was made. I took this position not as a response to the pending arrival of the GDPR, but because it’s simply good practice for company units that are engaged in something as consequential as security to take all possible measures to ensure their decision-making approach is analytically sound, transparent, traceable and legally and technically defensible.

In September 2016, for example, I wrote in these pages that companies seeking to build a world-class insider threat program should “avoid black boxes” like pure machine-learning solutions and deep neural networks, since their underlying analytic processes and algorithms remain unknown to the user. “Insider threat cases are sensitive personnel and corporate security issues,” I wrote. “And any deployed system must provide transparency into what factors raised an individual’s risk profile, and when.” In other words, when a company censures or terminates an individual for malicious, negligent or inadvertent insider behavior, it had better be able to prove its case to company leadership, or in response to an employee appeal or wrongful termination lawsuit.

To be clear, the GDPR does not apply in certain national security and law enforcement scenarios, but that, too, accords with common sense. After all, employees in sensitive national security positions at U.S. government agencies voluntarily waive their rights to personal privacy; company employees are under no such obligation to do so — nor should they be.

Some legal scholars contend that the GDPR’s right-to-explanation provisions have no teeth, noting for example that the words “right to explanation” appear only in an unenforceable recital rather than a binding article. Others argue that the right will apply very narrowly in practice — to “significant” decisions made “solely” by automated means.

Regardless of how these provisions are applied or enforced, the EU’s underlying intent in offering citizens the means to know why they were not hired for a job, or denied a loan or fired for posing a security risk, is more than reasonable. And with fines for non-compliance reaching up to 4 percent of a company’s annual global turnover or up to €20 million (whichever is higher), what corporate leader is going to risk not complying with applicable provisions of the GDPR?
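
To put that ceiling in perspective: for a hypothetical company with, say, €1 billion in annual global turnover, 4 percent works out to €40 million, double the fixed €20 million threshold.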

There are other artificial intelligence-based approaches, beyond machine learning and neural nets, that companies can adopt to provide the necessary transparency, not just for GDPR compliance but for any realm where a right to explanation is the norm. For example, building probabilistic models (particularly Bayesian belief networks) to represent complex problems like insider threat detection forces the domain experts whose wisdom and judgments are elicited to explain their reasoning up front, in full detail, before any personally identifiable information is applied. Decisions resulting from the model-based software analytics can thus be peeled back, layer by layer, to show the entire chain of reasoning and the influence of each new piece of data on the results.
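
To make that concrete, here is a minimal, purely illustrative sketch in Python of the general idea: a tiny naive-Bayes style calculation, a simplified special case of a Bayesian belief network, in which every update to the risk estimate is recorded so the final score can be traced back to the individual indicators and the expert-supplied probabilities behind them. The indicator names, probabilities and prior are invented for the example and are not drawn from any real system or product.

```python
# Illustrative only: a toy, hand-specified probabilistic risk model whose
# every inference step is logged, so the resulting decision can be explained.
# Indicator names, probabilities and the prior are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    p_given_threat: float   # P(indicator observed | insider threat), expert-elicited
    p_given_benign: float   # P(indicator observed | no threat), expert-elicited

def bayes_update(prior: float, ind: Indicator, observed: bool) -> float:
    """Apply Bayes' rule for a single indicator and return the new P(threat)."""
    p_e_threat = ind.p_given_threat if observed else 1.0 - ind.p_given_threat
    p_e_benign = ind.p_given_benign if observed else 1.0 - ind.p_given_benign
    evidence = p_e_threat * prior + p_e_benign * (1.0 - prior)
    return (p_e_threat * prior) / evidence

# Hypothetical expert-elicited indicators (placeholder values).
indicators = [
    Indicator("after_hours_bulk_download", p_given_threat=0.60, p_given_benign=0.05),
    Indicator("reported_policy_violation", p_given_threat=0.40, p_given_benign=0.10),
]
observations = {
    "after_hours_bulk_download": True,
    "reported_policy_violation": False,
}

posterior = 0.01  # assumed base rate (prior) for insider threat
audit_trail = [("prior", posterior)]
for ind in indicators:
    posterior = bayes_update(posterior, ind, observations[ind.name])
    audit_trail.append((f"{ind.name} = {observations[ind.name]}", posterior))

# The audit trail *is* the explanation: each line shows which piece of
# evidence moved the risk estimate, in which direction, and by how much.
for step, p in audit_trail:
    print(f"{step:40s} -> P(threat) = {p:.3f}")
```

A full Bayesian belief network would capture dependencies between indicators rather than treating them as independent, but the principle is the same: because the model structure and probabilities are specified by people before any personal data is processed, every output can be decomposed into the judgments and evidence that produced it.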

More broadly, as AI continues its unrelenting march into more products and services across more sectors of the global economy, protections relating to personal and data privacy have to keep pace. Or maybe it’s the other way around. Which could be one explanation for the recent increase in developmental activity surrounding so-called Explainable Artificial Intelligence (XAI) systems, which the U.S. Defense Advanced Research Projects Agency claims should “have the ability to explain their rationale.” What citizen, or company, wouldn’t embrace that?

Contributor

Bryan Ware is CEO of Haystax Technology. He is a pioneer in the development and application of analytic methods and tools for enterprise risk management, high consequence/low probability events, critical infrastructure protection and risk-based resource allocation. Bryan was previously the co-founder of Digital Sandbox, Inc., a provider of security risk analytics and intelligence fusion products that was acquired by Haystax in 2013.

The opinions expressed in this blog are those of Bryan Ware and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.