7 most likely generative AI business use cases and their security risks

News Analysis
Jul 13, 2023 | 4 mins
Generative AI | Risk Management

Generative AI use cases vary significantly across a business, as do the security risks they introduce.


Generative AI business use cases continue to grow as the technology bleeds into all manner of products, services, and technologies. At the same time, the security implications of evolving generative AI capabilities continue to make headlines. A recent Salesforce survey of more than 500 senior IT leaders revealed that although the majority (67%) are prioritizing generative AI for their business within the next 18 months, almost all admit that extra measures are needed to address security issues and to equip themselves to successfully leverage the technology.

Most organizations will buy (not build) generative AI, and many may not even buy it directly, instead receiving it through bundled integrations. Security leaders must therefore invest time in understanding the different generative AI use cases within their businesses, as well as the risks associated with each.

A new report from Forrester has revealed the business departments most likely to adopt generative AI, their primary use cases, and the security threats and risks that teams will need to defend against as the technology goes mainstream.

7 most likely generative AI business use cases

According to Forrester's Securing Generative AI report, the seven most likely generative AI use cases in organizations, along with their related security threats and risks, are:

  1. Marketing: Text generators allow marketers to instantaneously produce rough drafts of copy for campaigns. This introduces data leakage, data exfiltration, and competitive intelligence threats, Forrester said. Risks include public relations and client issues arising from text released without adequate oversight and governance.
  2. Design: Image generation tools inspire designers and allow them to mock up ideas with minimal time and effort, and they can also be integrated into wider workflows, Forrester wrote. This introduces model poisoning, data tampering, and data integrity threats. Risks to consider are design constraints and policies going unfollowed due to data integrity issues, as well as potential copyright/IP issues with generated content.
  3. IT: Programmers use large language models (LLMs) to find errors in code and automatically generate documentation (see the sketch after this list). This introduces data exfiltration, data leakage, and data integrity threats, while the documentation produced risks revealing important system details that a company wouldn't normally disclose, Forrester said.
  4. Developers: TuringBots help developers write prototype code and implement complex software systems. This introduces code security, data tampering, ransomware, and IP theft issues, according to Forrester. Potential risks are insecure code that doesn't follow SDLC security practices, code that violates intellectual property licensing requirements, or generative AI being compromised to hold production systems to ransom.
  5. Data scientists: Generative AI allows data scientists to produce and share data to train models without risking personal information. This introduces data poisoning, data deobfuscation, and adversarial machine learning threats. The associated risk relates to the synthetic data generation model being reverse-engineered, "allowing adversaries to identify the source data used," Forrester wrote.
  6. Sales: AI generation helps sales teams produce ideas, use inclusive language, and create new content. This introduces data tampering, data exfiltration, and regulatory compliance threats. "Sales teams could violate contact preferences when generating and distributing content," Forrester said.
  7. Operations: Internal operations use generative AI to elevate their organization's intelligence. This introduces data tampering, data integrity, and employee experience threats. The risk is that data used for decision-making purposes could be tampered with, leading to inaccurate conclusions and implementations, Forrester wrote.
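To make the IT use case above concrete, here is a minimal sketch of a programmer asking an LLM to review a snippet for errors. It assumes the OpenAI Python SDK (openai 1.x) with an API key in the OPENAI_API_KEY environment variable; the model name and the example function are placeholders, not recommendations. It also illustrates the exposure Forrester flags: whatever code goes into the prompt leaves the organization's boundary.

    # Minimal sketch: asking an LLM to review code for errors.
    # Assumes the OpenAI Python SDK (openai 1.x); the model name and
    # the example function below are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    snippet = '''
    def average(values):
        return sum(values) / len(values)  # bug: fails on an empty list
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. List bugs and risky patterns."},
            {"role": "user",
             "content": f"Review this function for errors:\n{snippet}"},
        ],
    )

    # Note: everything in `snippet` has now been sent to a third party,
    # which is the data leakage/exfiltration risk Forrester highlights.
    print(response.choices[0].message.content)

Even this trivial exchange shows why governance matters for the IT use case: the code under review becomes data shared with the model provider the moment the request is sent.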

Supply chain, third-party management important in securing generative AI

While Forrester's list of most likely generative AI business use cases focuses on internal business functions, the firm also urged security leaders not to overlook supplier and third-party risk. "Given that most organizations will find generative AI integrated into already deployed products and services, one immediate priority for security leaders is third-party risk management," it wrote.

When a company buys a product or service that includes generative AI, it depends on its suppliers to secure the solution, Forrester said. "Microsoft and Google are taking that responsibility as they bundle and integrate generative AI into services like Copilot and Workspace, but other providers will source AI solutions from their own supplier ecosystem. Security will need to compile its own set of supplier security and risk management questions based on the use cases outlined above," it added.

Michael Hill is the UK editor of CSO Online. He has spent the past eight years covering various aspects of the cybersecurity industry, with a particular interest in the ever-evolving human element of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.
