
What does the GDPR and the "right to explanation" mean for AI?

Security teams increasingly rely on machine learning and artificial intelligence to protect assets. Will a requirement to explain how those systems make decisions make them less effective?


"But I'm not guilty," said K., "there's been a mistake. How is it even possible for someone to be guilty? We're all human beings here, one like the other."

"That is true," said the priest, "but that is how the guilty speak."

Sound Kafkaesque? That's because it's Kafka's The Trial, a nightmare story of an innocent man caught in an inscrutable bureaucracy, condemned to this or that fate, and with no way to challenge the decisions rendered against him. Machine learning has been compared to automated bureaucracy, and European regulators are clearly concerned that unfettered proliferation of machine learning could lead to a world in which we are all K.

But what does the GDPR, the sweeping overhaul of the 1995 European Data Protection Directive that affects any company that does business with Europeans, say about machine learning and artificial intelligence? Not a lot, it turns out, prompting legal scholars to debate what rights EU citizens have under the new law--and what GDPR compliance ought to look like for global companies operating in Europe.

The debate centers on the phrase "right to explanation," which appears only once, in Recital 71, a companion text to the GDPR that is not itself legally enforceable. However, the GDPR states that data controllers must notify consumers how their data will be used, including "the existence of automated decision-making, and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." [our emphasis]

A common-sense reading would mean that if a computer is making real-world decisions without a human in the loop, then there should be some accountability for how those decisions are made. For example, if a bank's machine learning model denies you credit, and does so without meaningful human intervention, then, some scholars argue, the bank owes you an explanation of how it arrived at that decision.
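What would such an explanation look like in practice? For a simple, linear credit model, it could be as direct as reporting how much each input pushed the score toward approval or denial. The sketch below is illustrative only; it is not drawn from the article or from any real lender, and the features, weights, and threshold are hypothetical.

```python
# A minimal sketch of "meaningful information about the logic involved"
# for an automated credit decision. Model, features, and threshold are
# hypothetical, chosen only to illustrate the idea.
import numpy as np

FEATURES = ["income", "debt_ratio", "late_payments", "account_age_years"]
WEIGHTS = np.array([0.8, -1.5, -0.9, 0.4])  # hypothetical learned coefficients
BIAS = -0.2
THRESHOLD = 0.5                              # approve if P(repay) >= 0.5

def decide_and_explain(applicant: np.ndarray) -> dict:
    """Return the decision plus each feature's signed contribution to it."""
    contributions = WEIGHTS * applicant       # per-feature effect on the score
    score = 1 / (1 + np.exp(-(contributions.sum() + BIAS)))
    return {
        "approved": bool(score >= THRESHOLD),
        "score": float(score),
        "explanation": sorted(
            zip(FEATURES, contributions.round(2).tolist()),
            key=lambda item: abs(item[1]),
            reverse=True,                      # biggest drivers listed first
        ),
    }

# One hypothetical applicant, with standardized feature values.
print(decide_and_explain(np.array([0.3, 1.2, 2.0, 0.5])))
```

For a model like this, the "explanation" is just the ranked list of contributions, and a denied applicant can see that, say, debt ratio and late payments drove the decision. The harder question, and the one the legal debate turns on, is what an equivalent disclosure should look like for genuinely black-box models.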

"People are selling what are essentially black box systems," Andrew Selbst, a researcher at Data & Society in New York, says. "When things go wrong, the programmers are hiding behind a lack of transparency, saying 'nobody can understand these systems.' Ultimately, that's not going to be an acceptable answer. It's an expression of human dignity, of human autonomy, not to have decisions made about us which have no reviewability," he adds.

Despite strong arguments that a right to explanation should exist, it remains less clear whether such a right does exist under European law--and if it does exist, it likely has loopholes an autonomous truck could drive through.

