How secure are your AI and machine learning projects?

Artificial intelligence and machine learning bring new vulnerabilities along with their benefits. Here's how several companies have minimized their risk.

When enterprises adopt new technology, security is often on the back burner. It can seem more important to get new products or services to customers and internal users as quickly as possible and at the lowest cost. Good security can be slow and expensive.

Artificial intelligence (AI) and machine learning (ML) offer all the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also have unique risks. As enterprises embark on major AI-powered digital transformations, those risks may become greater than what we've seen before.

AI and ML require more data, and more complex data, than other technologies. The underlying algorithms were developed by mathematicians and data scientists and emerged from research projects, where security is rarely a design priority. Meanwhile, the volume and processing requirements mean these workloads typically run on cloud platforms, which add yet another layer of complexity and vulnerability.

High data demands leave much unencrypted

AI and ML systems require three sets of data: training data, so the company can build a predictive model; testing data, to find out how well the model works; and live transactional or operational data, for when the model is put to work.
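A minimal sketch of how those three data sets flow through a workflow may make the distinction concrete. Everything here is illustrative, not from the article: the synthetic records, the 80/20 split, and the deliberately simple least-squares model are all assumptions chosen to keep the example self-contained.

```python
import random

# Hypothetical illustration of the three data sets an ML workflow touches.
random.seed(0)
records = [(x, 2 * x + random.uniform(-1, 1)) for x in range(1, 101)]
random.shuffle(records)

split = int(len(records) * 0.8)
training_data = records[:split]   # 1. used to build the predictive model
testing_data = records[split:]    # 2. held out to measure how well it works

# A deliberately simple "model": least-squares slope through the origin.
slope = (sum(x * y for x, y in training_data) /
         sum(x * x for x, _ in training_data))

# Evaluate on the held-out test set (mean absolute error).
mae = sum(abs(y - slope * x) for x, y in testing_data) / len(testing_data)

# 3. live/operational data: new, unlabeled records scored at run time.
live_inputs = [101, 102, 103]
predictions = [slope * x for x in live_inputs]
```

Each of these sets is an attack surface: training and testing data are often copied out of production stores, and live data flows through the model in real time, which is why encryption gaps across all three matter.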
