Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, said a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they\u2019re secure-by-default. Its new framework concept is an important step in that direction, the tech giant claimed.\n\nThe SAIF is designed to help mitigate risks specific to AI systems like model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information in training data. \u201cAs AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical,\u201d Google wrote in a blog.\n\nThe launch comes as the advancement of generative AI and its impact on cybersecurity continues to make the headlines, coming into the focus of both organizations and governments. Concerns about the risks these new technologies could introduce range from the potential issues of sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.\n\nThe Open Worldwide Application Security Project (OWASP) recently published the top 10 most critical vulnerabilities seen in large language model (LLM) applications that many generative AI chat interfaces are based upon, highlighting their potential impact, ease of exploitation, and prevalence. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.\n\nGoogle\u2019s SAIF built on six AI security principles\n\nGoogle\u2019s SAIF builds on its experience developing cybersecurity models, such as the collaborative Supply-chain Levels for Software Artifacts (SLSA) framework and BeyondCorp, its zero-trust architecture used by many organizations. It is based on six core elements, Google said. These are:\n\nGoogle will expand bug bounty programs, incentivize research around AI security\n\nGoogle set out the steps it is and will be taking to advance the framework. These include fostering industry support for SAIF with the announcement of key partners and contributors in the coming months and continued industry engagement to help develop the NIST AI Risk Management Framework and ISO\/IEC 42001 AI Management System Standard (the industry's first AI certification standard). It will also work directly with organizations, including customers and governments, to help them understand how to assess AI security risks and mitigate them. \u201cThis includes conducting workshops with practitioners and continuing to publish best practices for deploying AI systems securely,\u201d Google said.\n\nFurthermore, Google will share insights from its leading threat intelligence teams like Mandiant and TAG on cyber activity involving AI systems, along with expanding its bug hunters programs (including its Vulnerability Rewards Program) to reward and incentivize research around AI safety and security, it added. Lastly, Google will continue to deliver secure AI offerings with partners like GitLab and Cohesity, and further develop new capabilities to help customers build secure systems.