Opaque Systems has announced new features in its confidential computing platform designed to protect the confidentiality of organizational data during large language model (LLM) use. With new privacy-preserving generative AI and zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing, Opaque said it now enables organizations to securely analyze their combined confidential data without sharing or revealing the underlying raw data. Meanwhile, broader support for confidential AI use cases provides safeguards for machine learning and AI models to use encrypted data inside trusted execution environments (TEEs), preventing exposure to unauthorized parties, according to Opaque.

LLM use can expose businesses to significant security, privacy risks

The potential risks of sharing sensitive business information with generative AI algorithms are well documented, as are vulnerabilities known to impact LLM applications. While some generative AI models such as ChatGPT are trained on public data, the usefulness of LLMs can skyrocket if they can be trained on an organization's confidential data without risk of exposure, according to Opaque. However, if an LLM provider has visibility into the queries sent by its users, access to highly sensitive queries, such as proprietary code, becomes a significant security and privacy issue as the possibility of hacking increases dramatically, Jay Harel, VP of product at Opaque Systems, tells CSO. Protecting the confidentiality of sensitive data such as personally identifiable information (PII) or internal data, such as sales figures, is critical for enabling the expanded use of LLMs in an enterprise setting, he adds.
"Organizations want to fine-tune their models on company data, but in order to do so, they must either give the LLM provider access to their data or allow the provider to deploy the proprietary model within the customer organization," Harel says. "Additionally, when training AI models, the training data is retained regardless of how confidential or sensitive it is. If the host system's security is compromised, it may lead to the data leaking or landing in the wrong hands." Opaque platform leverages multiple layers of protection for sensitive data By running LLM models within Opaque's confidential computing platform, customers can ensure that their queries and data remain private and protected - never exposed to the model/service provider or used in unauthorized ways and only accessible to authorized parties, Opaque claimed. "The Opaque platform utilizes privacy-preserving technologies to secure LLMs, leveraging multiple layers of protection for sensitive data against potential cyber-attacks and data breaches through a powerful combination of secure hardware enclaves and cryptographic fortification," Harel says. For example, the solution allows generative AI models to run inference inside confidential virtual machines (CVMs), he adds. "This enables the creation of secure chatbots that allow organizations to meet regulatory compliance requirements." Related content news UK Cyber Security Council CEO reflects on a year of progress Professor Simon Hepburn sits down with broadcaster ITN to discuss Council’s work around cybersecurity professional standards, careers and learning, and outreach and diversity. By Michael Hill Sep 27, 2023 3 mins Government Government Government news FIDO Alliance certifies security of edge nodes, IoT devices Certification demonstrates that products are at low risk of cyberthreats and will interoperate securely. 