Cloud data security provider Dig Security has added new capabilities to its Dig Data Security offering to help secure data processed through the large language model (LLM) architectures used by its customers.

With the new features, Dig's data security posture management (DSPM) offering enables customers to train and deploy LLMs while upholding the security, compliance, and visibility of the data being fed into the AI models, according to the company.

"Securing data is a prime concern for any organization, and the need to ensure sensitive data is not inadvertently exposed via AI models becomes more important as AI use increases," said Jack Poller, an analyst at ESG Global. "This new data security capability puts Dig in a prime position to capitalize on the opportunity."

All the new capabilities will be available to Dig's existing customers within the Dig Data Security offering at launch.

Dig secures data going into LLMs

Dig's DSPM scans every database across an organization's cloud accounts, detects and classifies sensitive data (PII, PCI, etc.), and shows which users and roles can access that data. This helps detect whether any sensitive data is being used to train AI models.

"Organizations today struggle with both discovering data that needs to be secured and correctly classifying data," Poller said. "The problem becomes more challenging with AI as the AI models are opaque."

Dig's data detection and response (DDR) allows users to track data flows to understand and control what enters an AI model's training corpus, such as PII being moved into a bucket used for model training.

"Once the model has been trained, it's impossible to post-process the AI model to identify and remove any sensitive data that should not have been used for training," Poller added.
"Dig's new capabilities enable organizations to reduce or eliminate the risk of unwanted sensitive data being used for AI training."

Dig maps data access and identifies shadow models

Dig's new data access governance capabilities can highlight AI models with API access to organizational data stores, and which types of sensitive data this gives them access to.

"Dig's agentless solution covers the entire cloud environment, including databases running on unmanaged virtual machines (VMs)," said Dan Benjamin, CEO and co-founder at Dig Security. "The feature alerts security teams to sensitive data stored or moved into these databases."

"Dig will also detect when a VM is used to deploy an AI model or a vector database, which can store embeddings," Benjamin added. All these enhancements are made within Dig Data Security, which will combine DSPM, data loss prevention (DLP), and DDR capabilities into a single platform.
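To make the detect-and-classify idea concrete, the sketch below shows a minimal, hypothetical version of the kind of scan a DSPM tool runs before data lands in a training bucket: each record is checked against classifier patterns and flagged with sensitivity labels. The pattern names, regexes, and function names are illustrative assumptions, not Dig's actual implementation; real products use far richer classifiers than regular expressions.

```python
import re

# Illustrative classifier patterns (assumed, not Dig's real rules).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text: str) -> list[str]:
    """Return the sensitive-data labels detected in a single record."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

def scan_training_bucket(records: list[str]) -> dict[int, list[str]]:
    """Map record index -> labels for records that need review
    before they are allowed into a model's training corpus."""
    findings = {}
    for i, record in enumerate(records):
        labels = classify_record(record)
        if labels:
            findings[i] = labels
    return findings

if __name__ == "__main__":
    sample = [
        "user signed up with jane.doe@example.com",
        "order total: $42.17",
        "applicant SSN 123-45-6789 on file",
    ]
    print(scan_training_bucket(sample))  # flags records 0 (EMAIL) and 2 (SSN)
```

A scan like this, run against every store in scope, is what lets a security team catch PII flowing into a training bucket before the model is trained, since, as Poller notes, the sensitive data cannot be removed from the model afterward.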