Three-quarters of global businesses are currently implementing or considering bans on ChatGPT and other generative AI applications in the workplace, with risks to data security, privacy, and corporate reputation driving the decisions to act. That's according to new research from BlackBerry, which found that 61% of companies deploying or considering generative AI bans view the measures as long-term or permanent. BlackBerry's findings draw from a survey of 2,000 IT decision makers across North America (USA and Canada), Europe (UK, France, Germany, and the Netherlands), Japan, and Australia.

The data comes a week after the publication of the OWASP Top 10 for LLMs, which details the key security and safety challenges associated with large language models (LLMs), the technology underpinning many generative AI chatbots. It also comes as organizations face up to the reality of needing to implement specific generative AI security policies amid the skyrocketing growth and adoption of the technology within businesses. One key question on many people's minds is the extent to which generative AI is ushering in a new era of shadow IT.

Security concerns driving generative AI bans

Despite the majority of the IT decision makers surveyed recognizing the opportunity for generative AI applications in the workplace to increase efficiency (55%) and innovation (52%) and to enhance creativity (51%), 83% voiced concerns that unsecured generative AI apps pose a cybersecurity threat to their corporate IT environment, driving the inclination towards complete bans, according to BlackBerry.
What's more, while 81% of respondents are in favor of using generative AI tools for cybersecurity defense to avoid being caught flat-footed by cyber criminals, 80% believe organizations are within their rights to control the applications that employees use for business purposes.

Organizations should take a cautious yet dynamic approach to generative AI applications in the workplace, said Shishir Singh, CTO of cybersecurity at BlackBerry. "Banning generative AI applications in the workplace can mean a wealth of potential business benefits are quashed. As platforms mature and regulations take effect, flexibility could be introduced into organizational policies. The key will be in having the right tools in place for visibility, monitoring, and management of applications used in the workplace."

CISOs must develop generative AI policies that tackle risk without stifling innovation

Appropriate, business-aligned security policies controlling the use of generative AI should be high on the CISO's agenda right now. The challenge for CISOs is to develop cybersecurity policies that not only embrace and support business adoption of this technology but also effectively address risk without stifling innovation. Any CISO who thinks they can put this off for a year or two to see how generative AI develops, hoping to retrofit an appropriate security policy once the technology is pervasive, should consider carefully what happened with shadow IT: businesses were slow off the mark, from a security policy perspective, in dealing with personal technology when it began being used for corporate activities.