The march of generative AI isn’t short on negative consequences, and CISOs are particularly concerned about the downsides of an AI-powered world, according to a study released this week by IBM.

Generative AI is expected to create a wide range of new cyberattacks over the next six to 12 months, IBM said, with sophisticated bad actors using the technology to improve the speed, precision, and scale of their attempted intrusions. Experts believe that the biggest threat is from autonomously generated attacks launched on a large scale, followed closely by AI-powered impersonations of trusted users and automated malware creation.

The IBM report included data from four different surveys related to AI, with 200 US-based business executives polled specifically about cybersecurity. Nearly half of those executives — 47% — worry that their companies’ own adoption of generative AI will lead to new security pitfalls, while virtually all say that it makes a security breach more likely. This has, at least, caused cybersecurity budgets devoted to AI to rise by an average of 51% over the past two years, with further growth expected over the next two, according to the report.

The contrast between the headlong rush to adopt generative AI and the strongly held concerns over security risks may not be as large an example of cognitive dissonance as some have argued, according to IBM general manager for cybersecurity services Chris McCurdy.

For one thing, he noted, this isn’t a new pattern — it’s reminiscent of the early days of cloud computing, which saw security concerns hold back adoption to some degree.

“I’d actually argue that there is a distinct difference that is currently getting overlooked when it comes to AI: with the exception perhaps of the internet itself, never before has a technology received this level of attention and scrutiny with regard to security,” McCurdy said.

Global think tanks have sprouted up to study the security implications of generative AI, he highlighted, and although there’s a great deal of education that needs to happen in C-suites, organizations are generally moving in the right direction.

“In other words, we’re seeing that security isn’t an afterthought, but a key consideration in these early days,” McCurdy said.

It’s important to recognize that the positive impact of generative AI on business operations has the potential to be transformative, he added. If security, to say nothing of governance and compliance, is part of the conversation from the beginning, cyber threats don’t need to stand in the way of progress.

“There is a lot of focus on how AI will impact organizations positively, but it’s our responsibility to also consider what guardrails we have to put in place to ensure the AI models we rely on are trustworthy and secure,” McCurdy said.