Come May 25 of this year, the European Union's General Data Protection Regulation (GDPR) takes effect. How are multinational companies that rely heavily on analytic software in their enterprise security and insider-threat mitigation programs ensuring they comply with the GDPR? The answer is that many are, or should be, making major adjustments to the types of software solutions they use to analyze personal data.

The GDPR is designed to strengthen security and privacy protections for data on the citizens of all 28 EU member states, including data held outside the EU by companies that count those citizens among their employees or customers. (Several non-EU countries are also adopting the GDPR.) This is the EU's first significant regulatory refresh since its 1995 Data Protection Directive, and the implications are profound.

Of particular relevance to the corporate security community is a new "right to explanation" accorded to all EU citizens who are subject to automated decision-making, that is, decisions made solely by software algorithms. (Other GDPR requirements relating to data processing and storage, data mapping and access, data breaches, cross-border data transfer and the like are beyond the scope of this post.)

More than one GDPR provision relates to the right to explanation, so a brief summary is in order:

Article 22: Grants citizens "the right not to be subject to a decision based solely on automated processing" that "significantly affects him or her."

Recital 71: The data subject should have "the right... to obtain an explanation of the decision reached... and to challenge the decision."

Article 13: The data controller must provide the subject, at the time his or her personal data is obtained, with "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing" for the subject.

Article 15: Subjects have a right to know what personal data a company is using and how it's being used.

I have argued for years that users of purely data-driven analytic solutions are ill served by those systems' inability to explain why a particular decision was made. I took this position not in response to the pending arrival of the GDPR, but because it is simply good practice for company units engaged in something as consequential as security to take every possible measure to ensure their decision-making is analytically sound, transparent, traceable, and legally and technically defensible.

In September 2016, for example, I wrote in these pages that companies seeking to build a world-class insider threat program should "avoid black boxes" like pure machine-learning solutions and deep neural networks, since their underlying analytic processes and algorithms remain opaque to the user. "Insider threat cases are sensitive personnel and corporate security issues," I wrote. "And any deployed system must provide transparency into what factors raised an individual's risk profile, and when." In other words, when a company censures or terminates an individual for malicious, negligent or inadvertent insider behavior, it had better be able to prove its case to company leadership, or in response to an employee appeal or wrongful-termination lawsuit.

To be clear, the GDPR does not apply in certain national security and law enforcement scenarios, but that, too, accords with common sense. After all, employees in sensitive national security positions at U.S.
government agencies voluntarily waive their rights to personal privacy; company employees are under no such compulsion to do so, nor should they be.

Some legal scholars contend that the GDPR's right-to-explanation provisions have no teeth, noting for example that the words "right to explanation" appear only in an unenforceable recital rather than a binding article. Others argue that the right will apply very narrowly in practice, only to "significant" decisions made "solely" by automated means.

Regardless of how these provisions are applied or enforced, the EU's underlying intent in giving citizens the means to know why they were not hired for a job, denied a loan or fired for posing a security risk is more than reasonable. And with fines for non-compliance reaching up to 4 percent of a company's annual global turnover or up to €20 million, whichever is higher, what corporate leader is going to risk not complying with the applicable provisions of the GDPR?

There are other existing artificial intelligence approaches, beyond machine learning and neural nets, that companies can adopt which provide the necessary transparency, not just for GDPR compliance but for any realm where a right to explanation is the norm. For example, building probabilistic models, particularly Bayesian belief networks, to represent complex problems like insider threat detection forces the domain experts whose wisdom and judgments are elicited to explain their reasoning up front, in full detail, before any personally identifiable information is applied. Decisions resulting from the model-based software analytics can then be peeled back, layer by layer, to show the entire chain of reasoning and the influence of each new piece of data on the results.

More broadly, as AI continues its unrelenting march into more products and services across more sectors of the global economy, protections for personal and data privacy have to keep pace. Or perhaps it's the other way around. That may be one explanation for the recent increase in development activity surrounding so-called explainable artificial intelligence (XAI) systems, which the U.S. Defense Advanced Research Projects Agency says should "have the ability to explain their rationale." What citizen, or company, wouldn't embrace that?
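To make the idea of "peeling back" a model-based decision concrete, here is a minimal sketch. It uses a naive-Bayes simplification rather than a full belief network, and every number in it, the prior, the indicators and their likelihoods, is a hypothetical value an expert panel might elicit, not data from any real insider-threat program:

```python
# Illustrative only: a naive-Bayes simplification of an expert-elicited
# risk model. All probabilities below are hypothetical placeholder values.

PRIOR = 0.01  # elicited prior probability that a given employee poses a risk

# For each observable indicator: (P(indicator | threat), P(indicator | benign))
LIKELIHOODS = {
    "after_hours_access": (0.60, 0.20),
    "bulk_file_download": (0.50, 0.05),
    "policy_violation":   (0.40, 0.10),
}

def score(observed):
    """Return posterior P(threat | evidence) and a per-indicator audit trail."""
    odds = PRIOR / (1 - PRIOR)      # prior odds, fixed before any data arrives
    trail = []                      # the explanation: one entry per factor
    for name in observed:
        p_threat, p_benign = LIKELIHOODS[name]
        ratio = p_threat / p_benign  # this factor's multiplicative influence
        odds *= ratio
        trail.append((name, ratio))
    return odds / (1 + odds), trail

posterior, trail = score(["after_hours_access", "bulk_file_download"])
for name, ratio in trail:
    print(f"{name}: raised the odds of 'threat' by a factor of {ratio:.1f}")
print(f"posterior risk: {posterior:.2f}")
```

Because each factor's influence is recorded as it is applied, the resulting decision can be unwound step by step and defended after the fact, which is precisely the traceability a right-to-explanation regime demands. In a full Bayesian belief network the indicators would also carry elicited dependencies on one another, but the audit-trail principle is the same.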