If one were to solicit a list of the developments most often on the minds of CISOs, AI would certainly be near the top, and it will remain there for years to come. After all, there is clear evidence that CISOs, and cybersecurity professionals more broadly, simultaneously see immense risk, opportunity, and potential prosperity in the adoption of machine learning (ML) and other AI developments across every dimension of private enterprise.

Moreover, according to the 2022 IBM Global AI Adoption Index, AI is already deployed by more than a third of companies, and at least 40% of the rest are considering potential uses.

If AI is going to be a central pillar of cybersecurity developments for the foreseeable future, it's worth talking about an oddity found in the discourse about its utility. Specifically, much of what is written about AI and cybersecurity splits apart the roles of human operators and the machine systems that will ideally resolve many of the digital world's security and economic challenges.

The interaction between machines and humans is seen in quite dualistic terms. Simply put, machines are treated as tools that offer specialized advantages in diverse areas, while humans retain substantial operational control.

AI CISOs will be authorities on tactics, strategies, and resource priorities

To a degree, this tendency is understandable. Absent the unlikely near-term development of credible artificial general intelligence (AGI) that can more fully simulate human agency, it's true that AI systems will be nothing more than narrow-but-powerful exercises in task performance. Even generative AI applications, which increasingly seem likely to revolutionize certain areas of industry, are just pattern detectors that provide impressive predictive capacity given narrow inputs.

At the same time, however, support systems that are deployed broadly, and that operate on human judgment as operationalized in training data, end-user actions, and the structured inputs of developers, will inevitably come to act on humans' behalf and to operate with a degree of trust. After all, tools and models that demonstrate an ability to simulate the strategic, moral, and economic preferences of companies over time will find themselves given more responsibility vis-à-vis human operators.

The result is, in the simplest possible terms, the emergence of AI CISOs that will be de facto authorities on the tactics, strategies, and resource priorities of entire organizations. Today's human CISOs would do well to consider what this means for their business.

The AI CISO will arise out of the arms race between attackers and defenders

Imagine the following scenario. It is several years into the future, and AI-augmented cyber campaigns of all kinds (influence operations, espionage activities, missions against critical infrastructure, and so on) are increasingly common. The average compromise of private industry systems occurs several orders of magnitude faster than in 2023, and cybercriminals' return on attack per hour of access is two or three times better than it is today.

The exception is where defensive AI, whether developed internally or procured from cybersecurity firms, has been deployed as a countermeasure to thwart intrusions.
But such countermeasures are no silver bullet. Rather, they are effective tools that nevertheless seem stuck in perpetual beta, as the arms-race logic of adversarial AI learning means that good defense feeds improved offense.

The logical outcome of such a situation is the AI CISO. After all, what has been in human hands for so long will necessarily become the purview of AI response systems. This includes not only basic tasks governed by decision-making rulesets but also dynamic ones. In part, this might mean the selection of defensive (or active defense) tactics and the analysis of adversary strategy.

But it will also mean value judgments and moral considerations. What kinds of data or data access should be prioritized for protection, for instance, is a judgment call resting on inherently variable ethical foundations: shareholder interests, civic responsibilities, profit metrics, governance baselines, and compliance standards. At some point, human intelligence and machine intelligence converge in a meaningful fashion.

The upsides of an AI CISO/human alliance

There are potential advantages to this, as has already been alluded to. But there are concerning implications to this inevitable outcome too. For one, if the AI defender of tomorrow is best thought of as a sort of distributed machine-human interface, then today's planners need to recognize that human agency in the future is something that will be represented rather than actively employed.

We've seen this before in the history of disruptive technologies, and the outcomes aren't always stellar. When humans turn to new technology, they often lose control over social, political, or economic processes they once directly shaped, yet, alarmingly, the illusion of control often remains. So how can today's human CISOs plan for the AI CISOs of tomorrow?

The upsides of leaning into the construction of systems that have de facto authority over diverse facets of the cybersecurity enterprise are fairly clear. If an AI arms race centered on the evolution of malign threats to Western enterprise is inevitable, then AI CISOs are the key to allowing defenders to keep up. The tightening timeframe of incident response means that systems built around rapid threat detection and analysis will almost certainly excel where human responders could not.

Likewise, the metrics that stem from the use of such systems will almost certainly lead to iteratively better machine learning models with clear value structures amenable to prioritizing threat mitigation. Efficiency, in short, is the clear advantage of the AI CISO.

The upside for cybersecurity governance

Perhaps a less obvious advantage is the upside that might be found for cybersecurity governance as a whole. For some time, experts in cybersecurity have been particularly concerned about the threat of cascading negative outcomes that might stem from the augmentation of tools with AI. The 2010 stock market flash crash is often brought up as an example of this nightmare scenario, in which many things could go wrong faster than a human could act to prevent them.

In that case, the Dow Jones Industrial Average lost almost 1,000 points in roughly 36 minutes as automated trading algorithms reacted to unusual market conditions (triggered, it is often said, by a sell order several orders of magnitude outside normal parameters). While the market recovered, roughly $1 trillion in market value was temporarily erased, largely through the interaction of algorithms.
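The mechanism behind this fear is straightforward to sketch. The toy Python simulation below is purely illustrative (the agents, the reaction rule, and the threshold are invented for the example, not drawn from any real trading or defense system): two automated agents each over-correct slightly in response to the other, and their interaction escalates geometrically within a few iterations, far inside any human reaction time.

```python
# Toy simulation of two interacting automated agents. Hypothetical
# illustration only; not a model of any real trading or defense system.
# Each agent slightly over-corrects in response to the other's last
# action, so the feedback loop runs away within a handful of rounds
# unless a pre-set circuit breaker halts it for human review.

CIRCUIT_BREAKER = 1_000.0  # halt threshold, chosen by humans in advance


def react(own_level: float, observed_action: float) -> float:
    """Respond to the counterparty's action with a slight over-correction."""
    return own_level + 1.1 * observed_action


def run(rounds: int, use_breaker: bool) -> None:
    a = b = 1.0  # initially innocuous action sizes
    for i in range(rounds):
        a, b = react(a, b), react(b, a)  # both agents react at once
        print(f"round {i}: agent_a={a:,.1f} agent_b={b:,.1f}")
        if use_breaker and max(a, b) > CIRCUIT_BREAKER:
            print("circuit breaker tripped: halting for human review")
            return
    print("completed without intervention")


run(rounds=20, use_breaker=True)
```

Note that the only meaningful human contribution in the sketch happens before the run, in choosing the threshold. If AI CISOs interacting with adversarial AI follow a similar dynamic, pre-committed guardrails will matter far more than real-time oversight.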
It's worth noting that the same logic underlying this common fear might actually play in favor of more standardized norms of responsible practice and accepted threat response in a world where AI CISOs interact with a common set of evolving adversary machine capabilities. It's a fascinating idea for a space with relatively few norms around defender-attacker engagements.

Deploying AI products that learn best practices from a shared set of industry experiences means a standardization of knowledge about how cyber defense plays out in practice. For both the federal government and private governance initiatives, the spread of such practices as the new normal of cyber defense offers enticing touchpoints for coordinating shared rules, both formal and informal, around cybersecurity as a national security consideration.

The potential for missteps

As appealing as the idea may be of AI CISOs that can effectively take the priorities and security requirements of human operators and execute them against rising offensive AI threats, the potential for missteps is also substantial.

As any lay user of an LLM like ChatGPT will tell you, the opportunity for outright inaccuracy and misinterpretation in the use of any AI system is considerable. Even assuming defensive AI systems can be brought within acceptable margins of usability, there is a real danger that the humans in the loop will believe they control outcomes that are beyond their ability to shape. In part, this might stem from a willingness to accept AI systems for what they appear to be: powerful predictive tools. But research into machine-human interactions tells us there's more to consider.

Recent work has emphasized that businesses and organizational executives are prone to overusing systems when the paradigmatic transformation of an existing company function has been promised or large investment in a specific application has already occurred. In essence, the bounds of what might be possible for such procurements gradually expand beyond what is practical, largely because the positive associations stakeholders make with "good business practice" create tunnel vision and wishful thinking.

There is a tendency to assume AI has human qualities

And with AI, this tendency goes further still. As with any sufficiently novel technological development, humans are prone to over-assigning positive qualities to AI, casting it as a game-changer for almost any task. But psychological studies have also suggested that the customizability of AI systems (wherein a model might, for instance, be capable of building machine agents with distinct styles or personalities based on the breadth of its training data) pushes users towards anthropomorphizing them.

Assume that a cybersecurity team at a financial firm calls their new AI tool "Freya" because the real name of the application is the "Forensic Response and Early Alarm" system. In representing their AI system to executives, shareholders, and employees as Freya, the team communicates a human quality to their machine colleague.
In turn, as research tells us, this inclines the average human towards assumptions about trustworthiness and shared values that may have no basis in reality.

The possible negative externalities of such a development are numerous: company leaders might be dissuaded from hiring human talent by a false sense of capacity, or might be willing to discount discomfiting information about the failures of other companies' AI systems.

Will reliance on AI systems lead to loss of human expertise?

Beyond these possible downsides of the coming age of AI CISOs, there are operational realities to consider. As several researchers have noted, reliance on AI systems is likely to be associated with a loss of expertise at organizations that otherwise maintain the resources to hire human professionals and retain an interest in the skills they might bring.

After all, automating more elements of the cyber threat response lifecycle means minimizing or removing humans from the decision-making loop. This might occur directly, as companies see that a human professional just isn't often needed to oversee one or another area of AI system responsibility. More likely, however, expertise loss will occur as such individuals are given less to do, prompting their migration to other roles in the industry or even to other fields.

One may ask, of course, why this would universally be a bad thing if such expertise is not often needed. But there's an obvious answer: the loss of controls that prevent bias and emotion from impacting security decisions. And the flattening of a company's human workforce around novel AI capabilities also implies a poorer relationship between strategic planning and tactical realities.

After all, effective cyber defense and long-term planning around socio-economic priorities (business interests, reputational considerations, and the like), as opposed to merely technical ones, require robust intellectual (read: human) foundations.

Finally, as others have observed, the coming age of AI CISOs carries the potential for autonomous cyber conflicts that emerge less from deliberate human choices than from flaws in underlying models, bad data, or odd pathologies in the way algorithms interact. This prospect is particularly concerning when one considers that AI CISOs will inevitably be assemblages of baked-in moral, parochial, and socio-economic assumptions. While this suggests a normalization of defense postures, it also gives adversaries a basis for systematically leveraging the human qualities of AI systems to create vulnerability.

Human-machine symbiosis is coming

Recognizing that the logical outcome of the trajectory we find ourselves on today is a de facto symbiosis between human and machine systems is of paramount importance for security planners. The AI CISO is far less a "what might be" and far more something that inevitably will be: a real reduction in our control over the cybersecurity enterprise, driven by developments we will be incentivized to support. To best prepare for this future, companies must consider today the value of cyberpsychological research and the findings of work on technological innovation.

Specifically, companies across private industry would do well to avoid the situation where an AI CISO imbued with ethical and other sociological assumptions develops without prior planning. Any organization that envisions a robust AI capability as part of its future operational posture should engage in extensive internal exploration of what the practical and ethical priorities of defense look like.

That, in turn, should lead to a formal statement of priorities and a body charged with periodically updating those priorities to reflect changing conditions. Ensuring congruence between the practical outcomes of AI usage and these pre-determined assumptions will obviously be a goal of any organization, but waiting until AI systems are already operational risks outcomes shaped more by accumulated AI usage than by independent evaluation.
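What might such a statement of priorities look like in operation? One possibility, sketched below in Python purely as a hypothetical illustration (the class, field names, asset tiers, and action names are assumptions for the example, not an established standard or product API), is to express the priorities as machine-readable policy that any automated responder must consult before acting: priorities are authored and versioned by a human governance body, a review cadence is built in, and the AI system's autonomous latitude is derived from the policy rather than the reverse.

```python
# Minimal sketch of a formal statement of defensive priorities expressed
# as machine-readable policy. Field names, asset tiers, and actions are
# invented for illustration; this is not an established standard.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class DefensePolicy:
    # Asset classes ranked by protection priority, set by a human review body.
    priority_order: list[str]
    # Actions the AI responder may take autonomously; everything else escalates.
    autonomous_actions: set[str]
    last_reviewed: date
    review_interval: timedelta = timedelta(days=90)

    def review_overdue(self, today: date) -> bool:
        """Flag when the governance body's periodic re-evaluation is due."""
        return today - self.last_reviewed > self.review_interval

    def permits(self, action: str, asset: str) -> bool:
        """Allow autonomous execution only for known assets and approved actions."""
        return asset in self.priority_order and action in self.autonomous_actions


policy = DefensePolicy(
    priority_order=["customer_pii", "payment_systems", "internal_email"],
    autonomous_actions={"isolate_host", "revoke_session"},
    last_reviewed=date(2023, 1, 15),
)

# An AI-driven responder consults the policy before acting; anything not
# explicitly pre-authorized is escalated to a human operator.
if policy.review_overdue(date.today()):
    print("policy review overdue: flag to governance body")
if not policy.permits("delete_data", "customer_pii"):
    print("action not pre-authorized: escalating to human operator")
```

The specific schema matters far less than the ordering it enforces: human judgment is encoded first, and machine latitude is derived from it.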
Employ the tenth-person rule

Any organization that envisions extensive AI usage in the future would also do well to establish a workforce culture and structure oriented around the tenth-person rule. This rule, with which many industry professionals will already be familiar, dictates that any situation producing consensus among relevant stakeholders must be challenged and re-evaluated.

In other words, if nine of 10 professionals agree, it is the duty of the tenth to disagree. Anchoring such a principle of adversarial oversight at the heart of internal training and retraining procedures can help offset some of the expertise and control loss stemming from the rise of AI CISOs.

Finally, inter-industry learning around what works for AI cybersecurity and related tools is a must. Specifically, there are strong market incentives to try products that are convenient but that fall short in some other area, such as transparency about underlying model assumptions, training data, or system performance. Cybersecurity is a field ironically prone to path-dependent outcomes that see insecurity generated by the ghosts of stinginess past. Perhaps more than with any other technological evolution in this space over the last three decades, cybersecurity firms must avoid choosing the convenient over the best. If they do not, the coming age of AI CISOs may be fraught with more pitfalls than promise.