AI is coming to desktops everywhere. Is your security team ready for it? Beginning with its October security updates, Microsoft has begun a staged rollout of built-in artificial intelligence in the form of Copilot for Windows.

But before leaping to integrate Copilot into your systems, it's essential to review the policies and procedures your company or organization has governing the use of artificial intelligence. If you don't have such policies in place, now would be a good time to implement them.

When and how will Copilot be enabled?

If your Windows 11 22H2 desktops are in a managed setting (that is, their updates are controlled by Windows Server Update Services, Intune, or Windows Update for Business), Copilot will not be enabled. Nor will it be enabled if you are in the European Union.

If your desktops are patched through Windows Update and have the October security release installed, they may start to see the Copilot module. Based on the Bing Chat module, it allows the user to ask questions and obtain answers.

Enabling Copilot also requires that the user sign in with either a Microsoft account or a Microsoft Entra (formerly Azure Active Directory) account. Copilot will not be enabled for users who have only a local account or who sign in through an on-premises Active Directory domain.

How Microsoft's AI has changed

Small changes have already been observed in the behavior of Microsoft's AI component. In early testing, Microsoft included links to advertisements in the chat window, but recently I've noticed that ads no longer appear in responses. Clearly, changes are being made as feedback is received.

We've already seen studies reviewing the security of GitHub Copilot's code contributions.
A paper published on arXiv last August reviewed the impact of using AI in code, examining how secure or vulnerable that code is when developers rely on GitHub Copilot to augment their coding skills.

The paper indicates that "as the user adds lines of code to the program, Copilot continuously scans the program and periodically uploads some subset of lines, the position of the user's cursor, and metadata before generating some code options for the user to insert."

The AI generates code that is functionally relevant to the program as implied by comments, docstrings, and function names, the paper states. "Copilot also reports a numerical confidence score for each of its proposed code completions, with the top-scoring (highest-confidence) score presented as the default selection for the user. The user can choose any of Copilot's options."

Copilot-generated code can create vulnerabilities

The study tested 1,692 programs generated in 89 different code-completion scenarios and found 40% of them to be vulnerable. As the authors indicated, "while Copilot can rapidly generate prodigious amounts of code, our conclusions reveal that developers should remain vigilant ('awake') when using Copilot as a co-pilot. Ideally, Copilot should be paired with appropriate security-aware tooling during both training and generation to minimize the risk of introducing security vulnerabilities."

Ultimately, you need to start thinking about and planning for your firm's implementation of any and all AI modules that will arrive in your operating systems, your API implementations, or your code.
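To make the study's finding concrete, here is an illustrative (not from the paper itself) example of the most common class of flaw flagged in such research: SQL injection (CWE-89), where AI-suggested string formatting puts untrusted input directly into a query, alongside the parameterized version a security-aware reviewer should insist on.

```python
import sqlite3

def get_user_unsafe(db: sqlite3.Connection, username: str):
    # Vulnerable pattern (CWE-89): untrusted input is formatted
    # directly into the SQL statement.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return db.execute(query).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str):
    # Safe pattern: a parameterized query lets the driver handle quoting.
    return db.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    db.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")

    # A classic injection payload returns every row from the unsafe query...
    payload = "' OR '1'='1"
    print(len(get_user_unsafe(db, payload)))  # 2 -- both rows leak
    # ...while the parameterized query matches nothing.
    print(len(get_user_safe(db, payload)))    # 0
```

Both functions look equally plausible as code-completion suggestions, which is exactly why the authors recommend pairing Copilot with security-aware tooling rather than trusting the default completion.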
The use of AI doesn't mean that the application or code is vetted by default; rather, it's just a different type of input that you need to review and manage.

In the case of the Microsoft AI features coming to desktops and applications, some, like Copilot for Windows, are native to the platform, come at no additional cost, and can be managed with Group Policy, Intune, or other management tools. Once you have deployed the October security updates to a sample Windows 11 22H2 workstation, your IT department can proactively manage Copilot in Windows using Group Policy or Intune.

Copilot for Microsoft 365 is on the horizon

Note that what is rolling out in October is not the more impactful AI offering, Copilot for Microsoft 365. That offering will be included in Microsoft 365 and is expected to be priced at an additional $30 per user per month. With it, the AI will be embedded in email, Word documents, and Excel files.

Thus, you'll need to set boundaries and review who has permission to access which information and documents on your network. If the information Copilot relies on is outdated, its output will be less than ideal.

In addition to the normal end-user security training you provide to your staff now, add AI awareness training and ensure that private or sensitive information is not entered into chat or AI input windows.

For example, a financial investment firm that has fully implemented Copilot for Windows and Copilot for Microsoft 365 would need to add security awareness training to ensure that sensitive financial information does not get used (and potentially exposed) in those input windows.

EU workstations will not be able to implement Copilot

If you work in the EU, be aware that the Digital Markets Act (DMA) currently prevents Copilot from being implemented there. Thus, workstations with EU locales will not receive the Copilot rollout.
The DMA bars large companies from monopolizing the marketplace and requires them to give equal and fair chances to other local companies.

Microsoft is working on a specific version that will work in the EU and abide by its laws and regulations. If you are located in the EU and your IT staff want to test and review the upcoming release, you can launch Copilot by building a shortcut to this link.

Using the Copilot feature is not illegal or restricted if you are in the EU, so you may wish to test it, review the implications, and set your firm's policy now, before it's released in a future update. It's expected to be fully implemented in the 23H2 release later this year.

The bottom line is that you'll want to test and review these AI solutions being delivered to your doorstep, because they are coming to your network faster than you think. Ensure your security teams have written policies, reviewed the impact, and considered where AI will help your network and where it should not be implemented. Artificial intelligence, when implemented correctly, can be a help to your endeavors. When it's not, it can be a security nightmare. Plan now, before it arrives on your desktop.
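For organizations that want Copilot to stay off until those written policies exist, Microsoft exposes a "Turn off Windows Copilot" Group Policy (under User Configuration > Administrative Templates > Windows Components > Windows Copilot), which maps to a per-user policy registry value. A sketch of the equivalent .reg fragment follows; the policy and value names are as documented at the time of the October rollout, so verify them against your own build before deploying:

```reg
Windows Registry Editor Version 5.00

; Equivalent of the "Turn off Windows Copilot" Group Policy setting.
; Deploy per user (HKCU) via GPO preference, Intune, or a login script.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsCopilot]
"TurnOffWindowsCopilot"=dword:00000001
```

Setting the value back to 0 (or removing it) re-enables Copilot, which makes it easy to pilot the feature on a test group while keeping it off everywhere else.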