Generative AI use cases vary significantly across a business, as do the security risks they introduce.

Generative AI business use cases continue to grow as the technology bleeds into all manner of products, services, and technologies. At the same time, the security implications of evolving generative AI capabilities continue to make headlines. A recent Salesforce survey of more than 500 senior IT leaders revealed that although the majority (67%) are prioritizing generative AI for their business within the next 18 months, almost all admit that extra measures are needed to address security issues and equip themselves to successfully leverage the technology.

Most organizations will buy (not build) generative AI, and many may not even buy it directly, instead receiving it via bundled integrations. Security leaders must therefore invest time in understanding the different generative AI use cases within their businesses, as well as their associated risks. A new report from Forrester reveals the business departments most likely to adopt generative AI, their primary use cases, and the security threats and risks teams will need to defend against as the technology goes mainstream.

7 most likely generative AI business use cases

According to Forrester's Securing Generative AI report, the seven most likely generative AI use cases in organizations, along with their related security threats and risks, are:

Marketing: Text generators allow marketers to instantaneously produce rough drafts of copy for campaigns. This introduces data leakage, data exfiltration, and competitive intelligence threats, Forrester said. Risks include public relations/client issues arising from text released without adequate oversight and governance processes.

Design: Image generation tools inspire designers and allow them to mock up ideas with minimal time and effort, Forrester wrote. They can also be integrated into wider workflows.
This introduces model poisoning, data tampering, and data integrity threats, Forrester wrote. Risks to consider are design constraints and policies not being followed due to data integrity issues, as well as potential copyright/IP issues with generated content.

IT: Programmers use large language models (LLMs) to find errors in code and automatically generate documentation. This introduces data exfiltration, data leakage, and data integrity threats, while the documentation produced risks revealing important system details that a company wouldn't normally disclose, Forrester said.

Developers: TuringBots help developers write prototype code and implement complex software systems. This introduces code security, data tampering, ransomware, and IP theft issues, according to Forrester. Potential risks are insecure code that doesn't follow SDLC security practices, code that violates intellectual property licensing requirements, or generative AI being compromised to ransom production systems.

Data scientists: Generative AI allows data scientists to produce and share data to train models without risking personal information. This introduces data poisoning, data deobfuscation, and adversarial machine learning threats. The associated risk is that the synthetic data generation model could be reverse-engineered, "allowing adversaries to identify the source data used," Forrester wrote.

Sales: AI generation helps sales teams produce ideas, use inclusive language, and create new content. This introduces data tampering, data exfiltration, and regulatory compliance threats. "Sales teams could violate contact preferences when generating and distributing content," Forrester said.

Operations: Internal operations use generative AI to elevate their organization's intelligence. This introduces data tampering, data integrity, and employee experience threats.
The risk is that data used for decision-making purposes could be tampered with, leading to inaccurate conclusions and implementations, Forrester wrote.

Supply chain, third-party management important in securing generative AI

While Forrester's list of most likely generative AI business use cases focuses on internal business functions, the firm also urged security leaders not to overlook supplier and third-party risk. "Given that most organizations will find generative AI integrated into already deployed products and services, one immediate priority for security leaders is third-party risk management," it wrote.

When a company buys a product or service that includes generative AI, it depends on its suppliers to secure the solution, Forrester said. "Microsoft and Google are taking that responsibility as they bundle and integrate generative AI into services like Copilot and Workspace, but other providers will source AI solutions from their own supplier ecosystem. Security will need to compile its own set of supplier security and risk management questions based on the use cases outlined above," it added.