A large language model (LLM) AI assistant designed to work like a website chatbot and help users with third-party risk management tasks is now available from TPRM vendor Prevalent. The idea behind the new tool, dubbed Alfred, is to guide users through common risk assessment and management issues on which they may have limited in-house human expertise, reducing decision-making time and improving decision accuracy.

Behind the scenes, Alfred is based on generative AI technology from Microsoft-backed OpenAI, using generalized data on risk events and observations to generate accurate information about a given customer's risk profile. The company said that all data is anonymized, and that Alfred's guidance is grounded in industry standards such as NIST, ISO and SOC 2. The AI is integrated into Prevalent's existing TPRM solution in a way designed to be seamless for existing users.

Prevalent said in a news release that Alfred's outputs are continually audited and reviewed for accuracy, and that the data used to train it has been "validated by over 20 years of industry experience."

Brad Hibbert, COO and CSO at Prevalent, said that the company's clientele has expressed curiosity about the use of AI in risk assessment, despite a natural caution. Prevalent has therefore adopted what Hibbert called a "use case-driven approach."

"It's important to note that AI-related capabilities have been included as features in the Prevalent platform for some time now," he said. "[Along with] ML analytics and NLP document analysis, but this is the first conversational/generative AI capability."

While Alfred's underlying decision-making does not yet depend on customer-provided information, Hibbert said that the user interface and workflow were designed in part around lessons learned from customer input.
He also noted that the company plans additional generative AI features for its platform, including enhanced security artifact review and automated assessment population (essentially filling out complex security forms), but that those are not yet available.

"Our development approach continues to focus on solving customers' real problems," Hibbert said. "Alfred solves the problem of not having the context or the skilled resources to understand what a risk means, and what to do about it."

Alfred is available now to all Prevalent platform customers at no additional charge.

The software joins a wave of AI-based tools being added to security products from a wide range of vendors. Just this week, AuditBoard added new AI and analytics capabilities for risk and compliance, and last week Vanta announced that it had baked generative AI into its core security and compliance product. Some of the largest tech vendors are also incorporating generative AI into their security offerings. In March, for example, Microsoft announced its generative AI Security Copilot, a GPT-4 implementation.