Artificial intelligence continues to command the technological limelight, and rightly so: as we move well into the final quarter of 2023, there is wide international interest in harnessing the power of AI. But with the excitement and anticipation come some appropriate notes of caution from governments around the world, concerned that all of AI's promise and potential has a dark flipside: It can be used as a tool by bad actors just as easily as it can by the good guys.

Thus, on October 30, 2023, US President Joe Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," while contemporaneously the G7 leaders issued a joint statement in support of the May 2023 "Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems." The US executive order also references the anticipated November UK Summit on AI Safety, which will bring together world leaders, technology companies, and AI experts to "facilitate a critical conversation on artificial intelligence."

Understanding how AI will affect the CISO's role is key

Amid the cacophony of international voices trying to bring order to what many see as chaos, it is important for CISOs to understand how AI and machine learning will affect their role and their ability to thwart, detect, and remediate threats. Knowing what the new policy moves entail is critical to gauging where responsibility for dealing with the threats will lie, and it provides insight into what these governmental bodies believe is the way forward.

CISOs will be well served to ensure they have visibility into the various working groups and advisory boards (e.g., AISSB) as they support their entity's evolution and adoption of AI/ML tools.
In addition, given the fluid nature of the global initiatives, the lack of harmonization across borders is a reality and could cause downstream compliance issues if guidance and regulations differ within regions or by country.

The US executive order on AI

The US executive order builds on prior White House engagement on AI and provides guidelines for both industry and government. Entities with a national security footprint should be especially attentive to the dual-use possibilities of AI technologies. The executive order points to seven important areas:

Government agencies on the front lines of AI regulation

The National Institute of Standards and Technology (NIST) has a herculean task, which it characterized as an "opportunity" on social media: "AI provides tremendous opportunity, but we also must manage the risks. The [executive order] directs NIST to develop guidelines & best practices to promote consensus industry standards that help ensure the development & deployment of safe, secure & trustworthy AI."

Meanwhile, the White House Office of the National Cyber Director characterized its understanding of the executive order on social media with precision: "Today's EO establishes new standards for AI safety and security, the protection of Americans' privacy, the advancement of equity and civil rights -- it stands up for consumers and workers, promotes innovation & competition, advances American leadership around the world."

The US Department of Homeland Security put out its own fact sheet explaining the executive order and its responsibilities, highlighting key areas:

Separately, the Cybersecurity and Infrastructure Security Agency emphasized in its own social media post that it will "assess possible risks related to the use of AI, provide guidance to the critical infrastructure sectors, capitalize on AI's potential to improve US cyber defenses, and develop recommendations for red-teaming
generative AI."

Assessing the AI threat to intellectual property

The threat to intellectual property is not hypothetical and is front and center within the executive order. To bolster the protection of AI-related intellectual property, DHS, through the National Intellectual Property Rights Coordination Center, "will create a program to help AI developers mitigate AI-related risk, leveraging Homeland Security Investigations, law enforcement, and industry partnerships."

Industry, in the form of IBM, chimed in with the admonishment that the "best way to address potential AI safety concerns is through open innovation. A robust open-source ecosystem with a diversity of voices -- including creators, developers, and academics -- will help rapidly advance the science of AI safety and foster competition in the marketplace."

It's now been a year since ChatGPT stormed into consumer hands, and the past 12 months have been nothing short of whirlwind adoption. CISOs must, as recommended previously, ask the hard questions and demand provenance and demonstrable test results from providers who espouse the inclusion of AI/ML in their products. While the global government initiatives are pointed in the right direction, it's clear that it will ultimately fall on the CISO's shoulders to determine if the arrows in their quiver are the right ones.