• United States



UK Editor

Google launches Secure AI Framework to help secure AI technology

Jun 09, 20234 mins
Generative AIIT Governance Frameworks

The SAIF is designed to help mitigate risks specific to AI systems like model theft, training data poisoning, and malicious injections.

Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, said a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they're secure-by-default. Its new framework concept is an important step in that direction, the tech giant claimed.

The SAIF is designed to help mitigate risks specific to AI systems like model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information in training data. "As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical," Google wrote in a blog.

The launch comes as the advancement of generative AI and its impact on cybersecurity continues to make the headlines, coming into the focus of both organizations and governments. Concerns about the risks these new technologies could introduce range from the potential issues of sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.

The Open Worldwide Application Security Project (OWASP) recently published the top 10 most critical vulnerabilities seen in large language model (LLM) applications that many generative AI chat interfaces are based upon, highlighting their potential impact, ease of exploitation, and prevalence. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.

Google's SAIF built on six AI security principles

Google's SAIF builds on its experience developing cybersecurity models, such as the collaborative Supply-chain Levels for Software Artifacts (SLSA) framework and BeyondCorp, its zero-trust architecture used by many organizations. It is based on six core elements, Google said. These are:

  • Expand strong security foundations to the AI ecosystem including leveraging secure-by-default infrastructure protections.
  • Extend detection and response to bring AI into an organization's threat universe by monitoring inputs and outputs of generative AI systems to detect anomalies and using threat intelligence to anticipate attacks.
  • Automate defenses to keep pace with existing and new threats to improve the scale and speed of response efforts to security incidents.
  • Harmonize platform level controls to ensure consistent security including extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle.
  • Adapt controls to adjust mitigations and create faster feedback loops for AI deployment via techniques like reinforcement learning based on incidents and user feedback.
  • Contextualize AI system risks in surrounding business processes including assessments of end-to-end business risks such as data lineage, validation, and operational behavior monitoring for certain types of applications.

Google will expand bug bounty programs, incentivize research around AI security

Google set out the steps it is and will be taking to advance the framework. These include fostering industry support for SAIF with the announcement of key partners and contributors in the coming months and continued industry engagement to help develop the NIST AI Risk Management Framework and ISO/IEC 42001 AI Management System Standard (the industry’s first AI certification standard). It will also work directly with organizations, including customers and governments, to help them understand how to assess AI security risks and mitigate them. "This includes conducting workshops with practitioners and continuing to publish best practices for deploying AI systems securely," Google said.

Furthermore, Google will share insights from its leading threat intelligence teams like Mandiant and TAG on cyber activity involving AI systems, along with expanding its bug hunters programs (including its Vulnerability Rewards Program) to reward and incentivize research around AI safety and security, it added. Lastly, Google will continue to deliver secure AI offerings with partners like GitLab and Cohesity, and further develop new capabilities to help customers build secure systems.

UK Editor

Michael Hill is the UK editor of CSO Online. He has spent the past 8 years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.

More from this author