How secure are your AI and machine learning projects?

Artificial intelligence and machine learning bring new vulnerabilities along with their benefits. Here's how several companies have minimized their risk.


When enterprises adopt new technology, security is often on the back burner. It can seem more important to get new products or services to customers and internal users as quickly as possible and at the lowest cost. Good security can be slow and expensive.

Artificial intelligence (AI) and machine learning (ML) offer all the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also have unique risks. As enterprises embark on major AI-powered digital transformations, those risks may become greater. "It's not a good area to rush in," says Edward Raff, chief scientist at Booz Allen Hamilton.

AI and ML require more data, and more complex data, than other technologies. The algorithms developed by mathematicians and data scientists often emerge from research projects rather than hardened production environments. "We're only recently as a scientific community coming to understand that there are security issues with AI," says Raff.

The volume and processing requirements mean that cloud platforms often handle the workloads, adding another level of complexity and vulnerability. It's no surprise that cybersecurity is the most worrisome risk for AI adopters. According to a Deloitte survey released in July 2020, 62% of adopters see cybersecurity risks as a major or extreme concern, but only 39% said they are prepared to address those risks.

Compounding the problem is that cybersecurity is one of the top functions for which AI is being used. The more experienced organizations are with AI, the more concerned they are about cybersecurity risks, says Jeff Loucks, executive director of Deloitte's Center for Technology, Media and Telecommunications.

