The United Kingdom's National Cyber Security Centre (NCSC) recently issued a warning to its constituents on the threat posed by artificial intelligence (AI) to the national security of the UK. This was followed shortly by a similar warning from NSA cybersecurity director Rob Joyce. It is clear there is great concern from many nations surrounding the challenges and threats posed by AI.

To get a more rounded view of the dangers of bad actors using AI to infiltrate or attack nation-states, I reached out to the industry and found thoughts and opinions, and frankly, some who opted out of the discussion, at least for now.

The NCSC warned that queries are archived and thus could become part of the underlying large language model (LLM) of AI chatbots such as ChatGPT. Such queries could reveal areas of interest to the user and, by extension, the organization to which they belong. Joyce at the NSA opined that ChatGPT and its ilk will make cybercriminals better at their jobs, especially given a chatbot's ability to improve phishing verbiage, making it sound more authentic and believable even to sophisticated targets.

Secret leakage through queries

As if on cue, Samsung revealed that it had admonished its workforce to use ChatGPT functionality with care. An employee wished to optimize a confidential and sensitive product design and let the AI engine do its thing. It worked, but it also left a trade secret behind on external servers, ultimately prompting Samsung to begin developing its own machine-learning software for internal use only.

Speaking about the Samsung incident, Code42 CISO Jadee Hanson observed that despite its promising advancements, the explosion of ChatGPT has ignited many new concerns regarding potential risks.
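One way organizations try to reduce this kind of exposure is to screen outbound prompts before they ever reach an external LLM. The following is a minimal, hypothetical sketch of such a pre-submission check; the patterns and function names are illustrative assumptions on my part, not Samsung's controls or any vendor's product, and a real data-loss-prevention system would use far richer detection.

```python
import re

# Illustrative patterns only -- a production DLP filter would rely on
# classifiers, document fingerprinting, and context, not a short regex list.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),            # PEM private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                                # AWS access key IDs
    re.compile(r"(?i)\b(confidential|trade secret|internal only)\b"),   # document markings
]

def flag_outbound_prompt(prompt: str) -> list[str]:
    """Return the matches found in a prompt destined for an external
    LLM API; an empty list means nothing was flagged."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(prompt))
    return hits

# Example: a snippet pasted from an internal design document
prompt = "Optimize this INTERNAL ONLY chip layout description."
if flag_outbound_prompt(prompt):
    print("blocked: prompt contains flagged content")
else:
    print("allowed")
```

Even a crude gate like this gives a security team the visibility Hanson describes: the decision to release text to a third-party service becomes an auditable event rather than a silent paste-and-enter.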
"For organizations, the risk intensifies as any employee feeds data into ChatGPT," she tells CSO.

"ChatGPT and AI tools can be incredibly useful and powerful, but employees need to understand what data is appropriate to be put into ChatGPT and what isn't, and security teams need to have proper visibility to what the organization is sending to ChatGPT. With all new powerful technology advances, there come risks that we need to understand to protect our organizations."

In short, once you hit "enter," the information is gone and no longer under your control. If the information was considered a trade secret, this action may be enough for it to be declared a secret no more. Samsung observed that "such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question."

It is not difficult to extrapolate how such queries originating from within a government, especially the classified information side of government, could put national security at risk.

AI changes everything

Earlier in 2023, Dr. Jason Matheny, president and chief executive officer of the RAND Corporation, outlined the four prime areas his organization saw as national security concerns in testimony before the Senate Homeland Security and Governmental Affairs Committee.

It is not hyperbole or exaggeration to state that AI will change everything.

The rising fear of AutoGPT

I had a wide-ranging discussion with Ron Reiter, CTO of Sentra, who previously served in Unit 8200 of the Israel Defense Forces. His primary fear, he said, lies with the advent of AutoGPT or AgentGPT: AI agents that could deploy with the GPT engine operating as a force multiplier, improving attack efficiency not a hundredfold but many thousandfold.
An adversary gives AutoGPT a task and internet connectivity, and the machine goes and goes (think the Energizer Bunny) until completion. In other words, the malware operates on its own. With AutoGPT, the adversary has a tool that can be both persistent and scaled.

Reiter is not alone. Patrick Harr, CEO of SlashNext, offered that hackers are using ChatGPT to deliver a higher volume of unique, targeted attacks faster, creating a higher likelihood of a successful compromise. "There are two areas where chatbots are successful today: malware and business email compromise (BEC) threats," Harr says. "Cyberattacks are most dangerous when delivered with speed and frequency to specific targets in an organization."

Creating infinite code variations

"ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. "BEC attacks are targeted attempts to social engineer a victim into giving valuable financial information or data. These attacks require personalized messages to be successful. ChatGPT can now create well-written, personal emails en masse with infinite variations. The speed and frequency of these attacks will increase and yield a higher success rate of user compromises and breaches, and there has already been a significant increase in the number of breaches reported in the first quarter of 2023."

Additionally, Reiter noted, the ability of chatbots to mimic humans is very real. One should expect entities such as the Internet Research Agency, long associated with Russian active measures, specifically misinformation and disinformation, to be working overtime to evolve capabilities that capture a specific individual's tone, tenor, and syntax. The target audience may know such mimicry is possible, but when confronted with content from the real individual alongside mimicked content, whom are they going to believe?
Trust is at stake.

Harr emphasized that it will take security powered by similar machine learning to mitigate the problem: "You have to fight AI with AI."

Should the world pause AI tool development?

Warnings from security agencies around the world would seem to align with an open letter, signed by many who have a dog in the hunt, calling for a pause on the development of AI tools. But it would seem to be too late for that, as evidenced by a recent US Senate Armed Services Committee hearing on the state of artificial intelligence and machine learning applications in improving Department of Defense operations, at which the consensus was that a pause by the United States would be deleterious to the country's national security.

Those testifying (RAND's Matheny, Palantir CTO Shyam Sankar, and Shift5 co-founder and CEO Josh Lospinoso) agreed that the United States currently enjoys an advantage and that a pause would give adversarial nations an opportunity to catch up and create AI models against which the US would be hard pressed to defend itself. That said, there was a universal call from those testifying for controls to be placed on AI technology, as well as bipartisan agreement within the subcommittee.

The subcommittee called for the three to collaborate with others of their choosing and return in 30 to 60 days with recommendations on how the government should approach regulating AI within the context of protecting national security. One senses, from the conversations during the April 19 hearing, that AI technologies may be designated as dual-use technologies and fall under the International Traffic in Arms Regulations (ITAR), which does not prohibit international collaboration or sharing but requires the government to have a say.