Security company Baffle has announced the release of a new solution for securing private data for use with generative AI. Baffle Data Protection for AI integrates with existing data pipelines and helps companies accelerate generative AI projects while ensuring their regulated data is cryptographically secure and compliant, according to the firm. The solution uses the Advanced Encryption Standard (AES) algorithm to encrypt sensitive data throughout the generative AI pipeline, so unauthorized users cannot see private data in cleartext, Baffle added.

The risks associated with sharing sensitive data with generative AI and large language models (LLMs) are well documented. Most relate to the security implications of sharing private data with advanced, public self-learning algorithms, which has driven some organizations to ban or limit certain generative AI technologies such as ChatGPT. Private generative AI services are considered less risky, specifically retrieval-augmented generation (RAG) implementations that allow embeddings to be computed locally on a subset of data. However, even with RAG, the data privacy and security implications have not been fully considered.

Solution anonymizes data values to prevent cleartext data leakage

Baffle Data Protection for AI encrypts data with the AES algorithm as it is ingested into the data pipeline, the firm said in a press release. When this data is used in a private generative AI service, sensitive data values are anonymized, so cleartext data leakage cannot occur even with prompt engineering or adversarial prompting, it claimed.
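Baffle has not published implementation details, but the general pattern of anonymizing sensitive field values before they reach an LLM or a RAG embedding step can be sketched with deterministic pseudonymization. The following is a minimal illustration, not Baffle's actual code; the field names and the `SECRET` key are hypothetical:

```python
# Generic field-level pseudonymization sketch (not Baffle's implementation):
# replace PII with deterministic HMAC tokens before text reaches a generative
# AI service, so prompts and embeddings never contain cleartext identifiers.
import hmac
import hashlib

SECRET = b"per-tenant pseudonymization key"  # hypothetical; keep in a KMS

def pseudonymize(value: str) -> str:
    # Same input -> same token, so joins and lookups still work downstream,
    # but the token reveals nothing about the original without SECRET.
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"PII_{digest[:16]}"

record = {"name": "Alice Smith", "ssn": "123-45-6789", "note": "renewal due"}
safe = {k: pseudonymize(v) if k in {"name", "ssn"} else v
        for k, v in record.items()}
# safe["name"] and safe["ssn"] are now opaque tokens; safe["note"] is untouched.
```

Because the mapping is deterministic, repeated mentions of the same person produce the same token, which preserves referential structure for analytics while keeping cleartext out of prompts.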
Sensitive data remains encrypted no matter where it is moved or transferred in the generative AI pipeline, helping companies meet specific compliance requirements -- such as the General Data Protection Regulation's (GDPR's) right to be forgotten -- by shredding the associated encryption key, according to Baffle. Furthermore, the solution prevents private data from being exposed in public generative AI services, as personally identifiable information (PII) is anonymized.

"ChatGPT has been a corporate disruptor, forcing companies to either develop or accelerate their plans for leveraging generative AI in their organizations, but data security and compliance concerns have been stifling innovation," said Ameesh Divatia, founder and CEO of Baffle.

UK opens enquiry into LLMs to assess risks of generative AI

Earlier this month, the UK's House of Lords Communications and Digital Committee opened an inquiry into generative AI LLMs to assess how the UK can respond to the risks the technology introduces. The issues covered include:

- How LLMs differ from other forms of AI and how they are likely to evolve over the next three years.
- The role and structure of the UK's AI Foundation Model Taskforce, its objectives, priorities, and investment plans.
- The appropriate role for government in responding to the opportunities and risks presented by LLMs, the adequacy of government preparedness, and priorities for action.
- The differences between open and closed source language models and the implications of how these are likely to develop.
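The key-shredding approach Baffle describes for the right to be forgotten is a standard pattern known as crypto-shredding: each data subject's records are encrypted under their own key, and destroying that key renders every copy of the ciphertext permanently unreadable. A minimal sketch, assuming the third-party `cryptography` package and an in-memory key store standing in for a real KMS or HSM:

```python
# Crypto-shredding sketch: per-subject AES-256-GCM keys; deleting a key
# makes that subject's ciphertext permanently unrecoverable, without having
# to locate and scrub every stored copy of the data.
# Requires the third-party `cryptography` package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store = {}  # subject_id -> AES key (in practice: a KMS or HSM)

def encrypt_for_subject(subject_id: str, plaintext: bytes) -> bytes:
    key = key_store.setdefault(subject_id, AESGCM.generate_key(bit_length=256))
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_for_subject(subject_id: str, blob: bytes) -> bytes:
    key = key_store[subject_id]  # raises KeyError once the key is shredded
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def shred(subject_id: str) -> None:
    # Destroying the key is the erasure: ciphertext copies may persist in
    # backups and pipelines, but no one can recover the plaintext from them.
    del key_store[subject_id]

blob = encrypt_for_subject("user-42", b"alice@example.com")
assert decrypt_for_subject("user-42", blob) == b"alice@example.com"
shred("user-42")
# decrypt_for_subject("user-42", blob) would now raise KeyError: the key is gone.
```

The design choice is that erasure becomes a single key-management operation rather than a search across every database, backup, and downstream pipeline where the encrypted data may have propagated.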