Artificial intelligence has zoomed to the forefront of public and professional discourse, as have expressions of fear that as AI advances, so does the likelihood that we will have created a variety of beasts that threaten our very existence. Within those fears also lie worries about whether those who create large language models (LLMs), and the engines that harvest the data that feed them, are doing so in an ethical manner.

To be frank, I hadn't given the matter much thought until a recent discussion around the need for "responsible and ethical AI," which occurred amid the constant blast that AI is either evil personified or some holy grail.

I went away, began digging, and found that the US Department of Defense (DoD) has a framework, used and shared publicly since early 2020, comprising five principles that lay out what artificial intelligence should look like:

Responsible: Exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.

Equitable: Take deliberate steps to minimize unintended bias in AI capabilities.

Traceable: AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.

Reliable: AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.

Governable: Design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that
demonstrate unintended behavior.

Demonstrating a significant amount of prescience, Air Force Lt. Gen. Jack Shanahan, then head of the Joint Artificial Intelligence Center (integrated in 2022 into the Chief Digital and Artificial Intelligence Office, led by chief digital and artificial intelligence officer Dr. Craig Martell), noted in the context of the military's use of AI in support of the warfighter that "whether it does so positively or negatively depends on our approach to adoption and use. The complexity and the speed of warfare will change as we build an AI-ready force of the future. We owe it to the American people and our men and women in uniform to adopt AI ethics principles that reflect our nation's values of a free and open society."

In late 2021, the DoD published Project Herald, which outlines the Defense Intelligence Digital Transformation Campaign Plan, 2022-2027. The plan embraces the aforementioned pillars of responsible AI and aligns neatly with what every CISO should be addressing within their remit: people, process, and technology.

So here we are in 2023, and the White House has joined in with a plethora of steps designed to foster the evolution of responsible AI, and not a moment too soon. In early May, the administration announced the creation of additional National AI Research Institutes, along with $140 million to make it happen. The seven new institutes will join the 18 existing entities, all focused on AI research.

The actions taken by the executive branch of the US government, coupled with its clear understanding that AI is a national security issue, should translate easily for the CISO: AI is also a priority corporate security issue.

CISOs should embrace the DoD framework on AI

How does this distill down to actionable elements that will assist the CISO who is looking at the ad copy being thrown over the transom by marketeers and trying to determine what is real and what is infamous vaporware?
I submit that the CISO should take this DoD framework and run with it in evaluating what is being considered for inclusion in their network.

Responsible: Ensure both training and playbooks exist that will assist personnel in implementing AI-based solutions in the technology stack.

Equitable: Determining bias in an AI "black-box" solution may be the most difficult challenge facing CISOs. Yet it may also be the most important, as a bias will (not may) bring unintended consequences.

Traceable: Black boxes are not the CISO's friend. If you are unable to provide provenance for the information revealed through the interrogation of your large language model, then you are merely hoping a bias isn't present or that the engine isn't just "best guessing" on your behalf. As the DoD emphasizes, transparency and auditable methodologies are your friends.

Reliable: Is there such a thing as 100% reliable? With AI and machine-speed decision-making, reliability is foundational.

Governable: There will be many large language models, some for general use and others specialized for specific functions. The DoD's recommendation to build in the ability to detect and avoid unintended consequences, and to disengage or deactivate deployed systems that demonstrate unintended behavior, is paramount.

No one wants a situation where AI-empowered tools move at machine speed, make decisions that on paper should protect the enterprise, yet end up creating consequences that may or may not be detectable. Embracing the ethical pillars of responsible AI as detailed by the DoD is not a heavy lift, though it may be an inconvenient one. All in the cybersecurity realm understand the threat that "convenience" poses to security, and thus investment in "the need to absorb the inconvenience" will be one more task put upon the CISO's already full plate.