It seems that everyone is rushing to embed artificial intelligence into their solutions, and security offerings are among the latest to acquire this shiny new thing. Like many, I see AI's potential to bring about positive change, but also its potential as a threat vector.

To some, recent AI developments are a laughing matter. On April 1, 2023, that traditional day when technology and social media sites love to pull a fast one on us with often elaborate pranks, the Twitter account for the MITRE ATT&CK platform launched the #attackgpt Twitter bot, inviting users to employ the hashtag to generate an "AI" response to questions about the anti-hacker knowledge base. In reality, it was an April Fools' prank, with MITRE's social media team cranking out funny answers in the guise of a chatbot.

For many, though, the rise of AI chatbots is no joke. The risks of abuse inherent in deploying artificial intelligence are nothing new to CISOs; companies have begun to establish whole divisions that promise to ensure that AI follows ethical principles.

I have a deeper concern: What if the information a security bot provides is just dead wrong? In cybersecurity, it often takes several resources and researchers to reach a conclusion about the risk of a security vulnerability. If an AI does not know about the latest threats or vulnerabilities, its contribution to security is flawed and could leave the user exposed.

The first assessment is often not the right one

Too often in this era of clickbait journalism, I see overbroad or flat-out wrong articles about security that make an issue or an attack sound more widespread than it turns out to be. Intrusions these days are more likely to hit specific targets than whole industries, but you wouldn't know that from the headlines.
If that's where an AI is getting its input, the output is going to be just as wrong.

Often with technology decisions, the first determination of a security problem is not the correct one. A case in point: the 2021 headlines about a cyberattack on a Florida water treatment plant, which had many concerned that attackers could remotely control water systems and damage them, or poison the water with too many chemicals. The attack led to press conferences and even alerts from CISA about the potential for such attacks. It turned out that the root cause was human behavior and bad security hygiene. Rather than a remote cyberattack, the bad actors got in "by exploiting cybersecurity weaknesses, including poor password security, and an outdated operating system."

If ChatGPT had based its recommendations and guidance on the security information published at the time, it would have told users that a remote attacker was at fault and offered guidance for protecting systems from remote attacks. In reality, buried in a long-forgotten follow-up, was the fact that the issue was really the human factor. Instead of actionable guidance on how to investigate human failure, you would have received guidance that, while informative, did not suggest the actions required to protect against the real threat at hand.

Why accurate AI information matters for Microsoft users

The current implementation of GPT-4, the latest version of ChatGPT on which Microsoft's artificial intelligence is based, is not aware of current news and information. If you ask it about the Outlook zero-day vulnerability that was patched in the March Office updates, the official ChatGPT interface indicates that it doesn't know about this recent security issue, generating this response:

"I'm sorry, but as of my knowledge cutoff date of 2021, there is no information available about CVE-2023-23397.
This is likely because it is a hypothetical CVE that has not yet been discovered or assigned. It is important to note that CVEs are unique identifiers assigned to publicly disclosed cybersecurity vulnerabilities, so it is unlikely that a CVE would be assigned to a vulnerability in the future without it first being discovered and publicly disclosed. If you have more information about this CVE, please provide it, and I will do my best to assist you."

If you ask the AI of Microsoft's Bing search engine about this issue, you receive a better response, but still not as precise as it could be. It says:

"CVE-2023-23397 is a critical elevation of privilege vulnerability in Microsoft Outlook on Windows. It is exploited when a threat actor delivers a specially crafted message to a user. A successful exploit of this vulnerability can result in unauthorized access to an organization's resources. Microsoft has released a patch for this vulnerability and recommends that users apply it immediately. If you think you may have been targeted or compromised by threat actors exploiting CVE-2023-23397, Microsoft has provided guidance for investigating attacks using this vulnerability."

Where AI security guidance falls short

Here's where I feel the guidance provided by the AI falls down. If you ask Bing AI how to apply the CVE-2023-23397 patch, it states:

"To apply the patch for CVE-2023-23397, you must install the Outlook security update, regardless of where your mail is hosted (e.g., Exchange Online, Exchange Server, some other platform). If you have a build number equal to or higher than what is shown below, you are good to go. If your build number is lower, then you need to patch via Windows Update. To find your build number, open the application (Outlook, for example) and click on File, Office Account.
The build should be towards the middle of your window."

The platform does not explain that for many people the Outlook patch arrives silently, in the background. Chances are, if they have a Microsoft 365 Office installation, the security update rolls out in the background sometime after the second Tuesday of the month. If you are concerned about systems not receiving click-to-run updates, my recommendation is to use a patch management tool to monitor the versions and build numbers of Office 365 deployments in your office. An alternative way to ensure you are up to date is to deploy a script to workstations that checks, once a week or on an as-needed basis, that the 365 deployments are current.

AI is just not expert enough

For Microsoft 365 there is no longer a "patch" to install; rather, the installation process runs in the background, silent to the end user. Suddenly the patches are just there. Only if you have an older installation platform that uses MSI installers do you receive a patch on the second Tuesday of the month. Thus my concern about the use of AI: it lacks the exactitude for proper security guidance and instead provides more general information that isn't sufficiently actionable. In short, it's wrong and will not lead to a good outcome.

Artificial intelligence can enhance both the best and the worst of human behavior. It can provide us with actionable information, or it can base its findings on inaccurate conclusions drawn from flawed assumptions. Microsoft's Security Copilot, which will include AI, has so far merely been discussed and has yet to be released. You can rest assured that I'll be interested to see whether it can gather the best, most up-to-date security guidance and cull out the worst.
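As a practical footnote to the script-based check suggested above: the core of such a script is simply comparing the installed Office build number against the minimum patched build for your update channel. Here is a minimal sketch in Python; the MINIMUM_PATCHED value is illustrative only (check Microsoft's update-history documentation for the real build for your channel), and on an actual workstation you would read the installed build from the registry or from your management tool's inventory rather than hard-coding it.

```python
# Sketch: decide whether an Office click-to-run build is at or above a
# patched minimum. Build strings like "16130.20306" compare correctly
# as tuples of integers, not as raw strings.

def parse_build(build: str) -> tuple[int, ...]:
    """Turn a dotted build string such as '16130.20306' into a comparable tuple."""
    return tuple(int(part) for part in build.split("."))

def is_patched(installed: str, minimum: str) -> bool:
    """True if the installed build is at or above the patched minimum."""
    return parse_build(installed) >= parse_build(minimum)

# Illustrative value only -- look up the real minimum for your channel.
MINIMUM_PATCHED = "16130.20306"

if __name__ == "__main__":
    # On a real Windows workstation you might read the installed build from
    # HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration (VersionToReport).
    print(is_patched("16130.20306", MINIMUM_PATCHED))  # True
    print(is_patched("16026.20200", MINIMUM_PATCHED))  # False
```

Comparing integer tuples rather than strings matters: as strings, "16026.20200" would sort above "16130.20306" in some naive comparisons, which is exactly the kind of subtle wrongness this column is worried about.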