Dig has made enhancements to its data security offering to enable the detection, classification, and mapping of sensitive data used to train LLMs.

Cloud data security provider Dig Security has added new capabilities to its Dig Data Security offering to help secure data processed through large language model (LLM) architectures used by its customers. With the new features, Dig's data security posture management (DSPM) offering will enable customers to train and deploy LLMs while upholding the security, compliance, and visibility of the data being fed into the AI models, according to the company.

"Securing data is a prime concern for any organization, and the need to ensure sensitive data is not inadvertently exposed via AI models becomes more important as AI use increases," said Jack Poller, an analyst at ESG Global. "This new data security capability puts Dig in a prime position to capitalize on the opportunity."

All the new capabilities will be available to Dig's existing customers within the Dig Data Security offering at launch.

Dig secures data going into LLMs

Dig's DSPM scans every database across an organization's cloud accounts, detects and classifies sensitive data (PII, PCI, etc.), and shows which users and roles can access it. This helps detect whether any sensitive data is being used to train the AI models.

"Organizations today struggle with both discovering data that needs to be secured and correctly classifying data," Poller said. "The problem becomes more challenging with AI as the AI models are opaque."

Dig's data detection and response (DDR) allows users to track data flows to understand and control what enters an AI model's training corpus, such as PII being moved into a bucket used for model training.

"Once the model has been trained, it's impossible to post-process the AI model to identify and remove any sensitive data that should not have been used for training," Poller added.
"Dig's new capabilities enable organizations to reduce or eliminate the risk of unwanted sensitive data being used for AI training." Dig maps data access and identifies shadow models Dig's new data access governance capabilities can highlight AI models with API access to organizational data stores, and which types of sensitive data this gives them access to. "Dig's agentless solution covers the entire cloud environment, including databases running on unmanaged virtual machines (VMs)," said Dan Benjamin, CEO and co-founder at Dig Security. "The feature alerts security teams to sensitive data stored or moved into these databases." "Dig will also detect when a VM is used to deploy an AI model or a vector database, which can store embeddings," Benjamin added. All these enhancements are made within Dig Data Security, which will combine DSPM, data loss prevention (DLP), and DDR capabilities into a single platform. Related content news Okta launches Cybersecurity Workforce Development Initiative New philanthropic and educational grants aim to advance inclusive pathways into cybersecurity and technology careers. By Michael Hill Oct 04, 2023 3 mins IT Skills Careers Security news New critical AI vulnerabilities in TorchServe put thousands of AI models at risk The vulnerabilities can completely compromise the AI infrastructure of the world’s biggest businesses, Oligo Security said. By Shweta Sharma Oct 04, 2023 4 mins Vulnerabilities news ChatGPT “not a reliable” tool for detecting vulnerabilities in developed code NCC Group report claims machine learning models show strong promise in detecting novel zero-day attacks. By Michael Hill Oct 04, 2023 3 mins DevSecOps Generative AI Vulnerabilities news Google Chrome zero-day jumps onto CISA's known vulnerability list A serious security flaw in Google Chrome, which was discovered under active exploitation in the wild, is a new addition to the Cybersecurity and Infrastructure Agency’s Known Exploited vulnerabilities catalog. 