


Has generative AI quietly ushered in a new era of shadow IT on steroids?

Jul 25, 2023 | 8 mins
CSO and CISO | Data and Information Security | Generative AI

New benefits of generative AI are being revealed daily. But will free tools such as ChatGPT also usher in a flood of users deploying a new powerful breed of shadow IT that leads to data breaches?

[Image: shadow of a hand over a keyboard. Credit: Shutterstock]

No matter where I travel around the world, everyone is talking about generative artificial intelligence (AI). Clearly, this is the top story in 2023 for the stock market, technology companies, security pros at the RSA Conference, and the tech industry.

It's almost impossible to keep up with the growing list of generative AI tools released and updated in 2023. Most have free versions accessible on the internet via a browser -- from well-known names like ChatGPT and Bard, to hundreds of novelty tools to play with, to generative AI tools aimed at developers, and much more.

What I'm concerned about is not the variety, productivity gains, or other numerous benefits of generative AI tools. Rather, it's whether these new tools now serve as a type of Trojan horse for enterprises. Are end users taking matters into their own hands -- using these apps and, in the process, ignoring policies and procedures on the acceptable use of non-approved applications? I believe the answer for many organizations is yes.

Policies prohibiting use of generative AI likely don't even exist in some organizations, so end users may technically not even be breaking any rules. In others, enforcement of acceptable use, security, data, or privacy policies may be lax or nonexistent.

The real question is this: What are CIOs and CISOs doing about governance to manage the flood of generative AI apps arriving right now? No doubt, every executive wants to be known as innovative and an "enabler" of new technology that brings efficiency and other benefits.

As the Michigan CISO, I almost got fired for vetoing a WiFi project 20 years ago, so I learned this enabling lesson the hard way. In our current environment, very few leaders want to be known as being against generative AI, so how can we deal with this?

What could go wrong with generative AI as shadow IT?

Before we dive into what's happening right now in global enterprises regarding the use of generative AI tools, we need to take a short detour to address the question: what are the problems with shadow IT?

There are dozens of great studies showing the dangers that come with shadow IT. A few of the concerns include decreased control over sensitive data, an increased attack surface, risk of data loss, compliance issues, and inefficient data analysis.

Yes, there are many other security, privacy, and legal issues that can surface with shadow IT. But what concerns me the most is the astonishing growth in generative AI apps -- along with how fast these apps are being adopted for a myriad of reasons. Indeed, if the internet can best be described as an accelerator for both good and evil -- which I believe is true -- generative AI is supercharging that acceleration in both directions.

Many are saying that the adoption of generative AI apps is best compared to the early days of the internet, with the potential for unparalleled global growth. What is clear right now is that companies offering AI capabilities are receiving the most attention and the fastest adoption. Beyond ChatGPT's record-breaking climb to 100 million users, along with 1.6 billion visits to the website in June 2023, I suspect studies will come out later this year showing rapid generative AI adoption across many companies, unlike anything we have ever seen before. We are talking about real game-changers.

IT leaders have genuine concerns about generative AI and security

It is clear to me that executives are now thinking more about this issue (as compared to six months ago) and the implications for their data. Concerns are growing regarding the use of free generative AI tools that may also bring licensing, copyright, legal, intellectual property, misinformation, and other concerns.

The overall challenge of managing end-user behavior is not new. Security and technology leaders have been trying to enable and support enterprise end users for decades while managing and securing data. But in a seemingly never-ending tug-of-war over who sees what data, when, and how, ChatGPT and other generative AI apps offer compelling new reasons to move beyond enterprise-authorized applications for completing business tasks.

If you're questioning whether generative AI apps qualify as shadow IT, as always, it depends on your situation. If the application is appropriately licensed and all the data stays within the confines of your organization's secure control, generative AI can fit neatly into your enterprise portfolio of authorized apps. For example, Google sells Vertex AI, which can be configured so that public- or private-sector data stays within your enterprise's control. Similar offerings come from other companies.

Most organizations are still grappling with implementing generative AI

But purchased applications are not what I am talking about. The free versions of Google Bard, OpenAI ChatGPT, and other generative AI apps have their own terms and conditions that likely do not match the language preferred by your organizational lawyers. Also, how is the data that is input into these systems protected? Finally, it's unclear who can copyright or claim ownership of AI-generated works -- so how should the results be used in business processes?
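One stopgap some teams adopt for the data-input question is a sanitization step before any text leaves the enterprise. Below is a minimal sketch of the idea in Python; the regex patterns and the `scrub` helper are illustrative assumptions, not a real product -- an actual deployment would rely on a data-loss-prevention engine with far broader coverage than a few hand-rolled patterns.

```python
import re

# Hypothetical patterns for a few common identifier formats; a real
# DLP engine would cover many more data types and edge cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely sensitive tokens with labeled placeholders
    before the prompt is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(scrub("Contact jane.doe@example.com about SSN 123-45-6789"))
# Contact [REDACTED-EMAIL] about SSN [REDACTED-SSN]
```

The point is not the specific patterns but the placement of the control: the scrubbing happens inside the enterprise boundary, before any free external tool sees the text.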

I am limiting this discussion to free generative AI apps available to end users on the internet. You may be thinking, "Just buy the enterprise version license if you like a product." (For example, use Google Vertex AI rather than Google Bard.) On this point, we may agree. But many people (and companies) won't do that, at least not initially.

Put simply, it's hard to compete with free. Most organizations move slowly in acquiring new technology, and this budgeting and deployment process can take months or years. End users, who are likely already violating policies by using these free generative AI tools, are generally loath to band together and insist that the enterprise CTO (or other executives) buy new products that could end up costing millions of dollars for enterprise usage over time. That ROI may come over the next few years, but meanwhile, they experiment with free versions because everyone is doing it.

What can be done to govern data in the age of generative AI?

I want to offer some potential solutions to the issues I have raised. Some readers may be thinking, we already dealt with this shadow IT issue years ago -- this is a classic cloud access security broker (CASB) problem. To a large extent, they'd be correct. Companies such as Netskope and Zscaler, which are known for offering CASB solutions in their product suites, provide toolsets for enforcing enterprise policies on generative AI apps.

No doubt, other solutions are available that can help manage generative AI apps from top CASB vendors, and this article provides more potential CASB options. Still, these solutions must be deployed and configured properly before they can support governance.
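Even without a full CASB deployment, a first-pass inventory of who is already using these tools can come from existing web proxy or DNS logs. Here is a minimal sketch in Python, assuming a small hand-maintained domain watchlist and a simplified `user,domain` log format -- both are assumptions for illustration; commercial CASB products ship and continuously update these app catalogs for you.

```python
from collections import Counter

# Hypothetical watchlist; CASB vendors maintain far larger, curated catalogs.
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

def flag_genai_usage(log_lines):
    """Count requests per user to known generative AI domains.

    Assumes each log line is 'user,domain' -- substitute your
    proxy's real export format.
    """
    hits = Counter()
    for line in log_lines:
        user, _, domain = line.strip().partition(",")
        if domain in GENAI_DOMAINS:
            hits[user] += 1
    return hits

logs = [
    "alice,chat.openai.com",
    "bob,intranet.example.com",
    "alice,bard.google.com",
]
print(flag_genai_usage(logs))  # Counter({'alice': 2})
```

A report like this won't block anything, but it turns an abstract governance worry into concrete numbers a CIO or CISO can act on.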

To be clear, CASB toolsets still do not solve all of your generative AI app issues. Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures, and more. There are also training considerations, product evaluations, and business workflow management to consider. Put simply: who is researching the options and deciding which generative AI approaches make the most sense for your public- or private-sector organization or particular business area?

The stakes have never been higher in security decision-making

Nevertheless, these CASB toolsets can provide the basis for enforcing policies and procedures governing permitted generative AI app use. My fear is that this time-consuming governance work is not being done in a thoughtful way, if at all, in many public- and private-sector organizations. In my view, we have entered a phase where end users are awestruck by generative AI and will be for a (perhaps lengthy) season. They are experimenting with various free generative AI tools that have real potential to dramatically change their business productivity, without much thought about the potential negative consequences for security, privacy, and more.

History is repeating itself. Just as previous security leaders dealt with new technologies like Wi-Fi networks, cloud computing, BYOD policies, and IoT devices, security pros must engage this challenge and attempt to enable the good and disable the bad in new generative AI technology. Who makes the decisions regarding what is and is not allowed is a business decision that each organization must work through. But one thing is clear: the stakes have never been higher.


Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist and author. During his distinguished career, Dan has served global organizations in the public and private sectors in a variety of executive leadership capacities, including enterprise-wide Chief Security Officer (CSO), Chief Technology Officer (CTO) and Chief Information Security Officer (CISO) roles in Michigan State Government. Dan was named: "CSO of the Year," "Public Official of the Year," and a Computerworld "Premier 100 IT Leader." Dan is the co-author of the Wiley book, “Cyber Mayday and the Day After: A Leader’s Guide to Preparing, Managing and Recovering From Inevitable Business Disruptions.” Dan Lohrmann joined Presidio in November 2021 as an advisory CISO supporting mainly public sector clients. He formerly served as the Chief Strategist and Chief Security Officer for Security Mentor, Inc. Dan started his career at the National Security Agency (NSA). He worked for three years in England as a senior network engineer for Lockheed Martin (formerly Loral Aerospace) and for four years as a technical director for ManTech International in a US / UK military facility. Lohrmann is on the advisory board for four university information assurance (IA) programs, including Norwich University, University of Detroit Mercy (UDM), Valparaiso University and Walsh College. Earlier in his career he authored two books - Virtual Integrity: Faithfully Navigating the Brave New Web and BYOD For You: The Guide to Bring Your Own Device to Work. Mr. Lohrmann holds a Master's Degree in Computer Science (CS) from Johns Hopkins University in Baltimore, Maryland, and a Bachelor's Degree in CS from Valparaiso University in Indiana.
