Congressional hearings focus on AI, machine learning challenges in cybersecurity

Talent shortages and ensuring that AI and machine learning systems are trustworthy are among the biggest concerns raised before the U.S. Congress.


Congressional hearings on artificial intelligence and machine learning in cyberspace quietly took place in the U.S. Senate Armed Services Committee’s Subcommittee on Cybersecurity in early May 2022. The subcommittee discussed the topic with representatives from Google, Microsoft and the Center for Security and Emerging Technology at Georgetown University. While work has begun in earnest within industry and government, it is clear that much still needs to be done.

The hearing chair, Senator Joe Manchin (D-WV), articulated the importance of AI and machine learning to the armed forces of the United States. Additionally, the committee highlighted the “shortfall of technically trained cybersecurity personnel across the country in government and industry alike.” This perspective aligns with the Cyberspace Solarium Commission report, which was subsequently released in early June 2022.

Google: 3 reasons machine learning, AI matter to cybersecurity

Within the context of the Department of Defense, Dr. Andrew Moore, director of Google Cloud Artificial Intelligence, noted the importance of AI in three ways: defending against adversary attacks, organizing data, and organizing people. He continued that AI can process millions of attack signals every second while watching for attacks as they happen, something that far outstrips the capacity of any human analyst.

With respect to the human side of the equation, Moore emphasized that with “emerging attacks, people ingeniously coming up with new methods, and AI’s coming up with new methods, so you have to learn new patterns or detecting whole new kinds of attacks in real-time.” He then shifted to the insider threat issue and highlighted the importance of AI in the implementation of zero trust, where, with AI, human patterns become discernible. Moore clarified that AI without data is “pretty worthless.” He called siloed data the nemesis of AI: full interchange of disparate data sets is required for a more complete picture to evolve.
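The pattern-detection idea Moore describes can be sketched in miniature. The following is an illustrative example only, not Google's approach: it scores a user's current activity against their own historical baseline, the kind of per-user behavioral signal a zero-trust system might feed into an anomaly detector. The data and threshold are invented for illustration.

```python
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Z-score of an observed value against a user's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

# Hypothetical per-user daily file-download counts (normal behavior)
baseline = [12, 9, 14, 11, 10, 13, 12]

print(anomaly_score(baseline, 11))   # an ordinary day: low score
print(anomaly_score(baseline, 240))  # a sudden spike: high score, flag for review
```

A production system would learn far richer patterns across many signals, but the principle is the same: deviation from an established baseline, not a fixed signature, triggers scrutiny.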

Microsoft: Cybersecurity personnel shortage troublesome

Eric Horvitz, Microsoft’s chief scientific officer, shared information from the company’s October 2021 Digital Defense Report and highlighted its efforts to engage in accordance with President Biden’s Improving the Nation’s Cybersecurity executive order, EO 14028. In his opening statement, he noted, “The value of harnessing AI in cybersecurity applications is becoming increasingly clear. Amongst many capabilities, AI technologies can provide automated interpretation of signals generated during attacks, effective threat incident prioritization, and adaptive responses to address the speed and scale of adversarial actions. The methods show great promise for swiftly analyzing and correlating patterns across billions of data points to track down a wide variety of cyber threats of the order of seconds.”

Horvitz emphasized that the shortage of cybersecurity personnel is troublesome, citing the 2021 Cybersecurity Workforce Study, which puts the global shortfall at 2.72 million unfilled cybersecurity positions. Even when operations teams run 24/7, there are far more alerts than personnel to handle them, creating the very real threat of teams being overwhelmed. AI, according to Horvitz, “enables defenders to effectively scale their protection capabilities, orchestrate and automate time-consuming, repetitive, and complicated response actions.”
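The triage problem Horvitz describes — more alerts than analysts — is often attacked by ranking alerts by risk before a human ever sees them. A minimal sketch, with field names and weights that are assumptions rather than any vendor's schema:

```python
# Weighted risk scoring to surface the riskiest alerts first.
# The weights and 0-10 field scales are illustrative assumptions.
WEIGHTS = {"severity": 0.5, "asset_criticality": 0.3, "confidence": 0.2}

def risk_score(alert):
    return sum(WEIGHTS[field] * alert[field] for field in WEIGHTS)

alerts = [
    {"id": "a1", "severity": 3, "asset_criticality": 9, "confidence": 4},
    {"id": "a2", "severity": 9, "asset_criticality": 8, "confidence": 9},
    {"id": "a3", "severity": 5, "asset_criticality": 2, "confidence": 7},
]

triaged = sorted(alerts, key=risk_score, reverse=True)
print([a["id"] for a in triaged])  # highest-risk alert first
```

Real systems replace the hand-set weights with models trained on analyst outcomes, but the effect is the same: scarce human attention goes to the alerts most likely to matter.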

The utility of AI on the security side of the equation can, according to Horvitz, be divided into four groupings: prevention, detection, investigation and remediation, and threat intelligence. AI-powered cyberattacks are also a reality, with criminal and nation-state adversaries using basic automation, authentication-based attacks, and AI-powered social engineering. His discussion of “adversarial AI” served to highlight the need for continued R&D investment that raises the robustness of systems. He continued, with emphasis, on the importance of red-team exercises.
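Why adversarial robustness needs red-team exercises can be shown with a toy example (invented here for illustration, not drawn from the testimony): a naive keyword-based phishing detector is evaded by a trivial homoglyph swap that a human reader would never notice.

```python
# Toy "adversarial AI" demonstration: a brittle keyword detector is
# defeated by substituting the Latin letter 'a' with the visually
# identical Cyrillic 'а' (U+0430). Word list and messages are invented.
SUSPICIOUS = {"invoice", "password", "urgent"}

def naive_detector(message):
    words = message.lower().split()
    return sum(w in SUSPICIOUS for w in words) >= 2

phish = "urgent please confirm your password"
evasive = phish.replace("a", "\u0430")  # same text to a human eye

print(naive_detector(phish))    # True: flagged
print(naive_detector(evasive))  # False: evades the detector
```

Red-teaming exists to find exactly this class of gap before an adversary does; robust systems normalize inputs and learn features an attacker cannot cheaply perturb.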

Center for Security and Emerging Technology: Focus on AI system trustworthiness

Georgetown’s CSET was represented by Dr. Andrew Lohn, a senior fellow with CSET’s CyberAI Project. He touched on three areas of AI importance:

  • AI promises to improve cyber defenses.
  • AI may improve offensive cyber operations.
  • AI itself is vulnerable.

Within his opening statement, Lohn touched on the trustworthiness of systems with, “The United States is among those deploying autonomously capable systems, but our adversaries may not wait to subvert them. There are plenty of opportunities for interference throughout the design process. AI can be very expensive to train, so rather than starting from scratch, a system is often adapted from existing systems that may or may not be trustworthy. And the data used to train or adapt the systems may or may not be trustworthy, too.”
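One basic control for the supply-chain concern Lohn raises — adapting a system from an existing model that "may or may not be trustworthy" — is verifying the integrity of a pretrained artifact before building on it. A minimal sketch; the file path and the idea of an out-of-band published digest are assumptions, and checksums address tampering in transit, not a maliciously trained model:

```python
import hashlib

def sha256_of(path):
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """Compare a downloaded model file against a digest published
    out-of-band by the provider (hypothetical workflow)."""
    return sha256_of(path) == expected_digest
```

Verifying provenance of training data is harder still, which is Lohn's larger point: trust must be established at every step of the design process, not assumed.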

Advice to industry/government

Horvitz’s advice is to “double-down with our attention and investments on threats and opportunities at the convergence of AI and cybersecurity. Significant investments in workforce training, monitoring, engineering, and core R&D will be needed to understand, develop and operationalize defenses for the breadth of risks we can expect with AI-powered attacks.”

Moore, for his part, highlighted the need for continued investments in “training, technology and management.” He called out how “We all have a role to play to prevent and detect threats online. Being transparent with governments, customers, and government entities when it comes to cyberattacks is one of our key principles and is critically important when responding to incidents at scale.”

Lohn noted, “Cyber operations are still human-intensive both on offense and on defense. And there are few openly reported cases outside of a laboratory environment where AI algorithms were attacked directly.” He added that the potential for direct attacks on AI systems is no secret, and that the reality may be just over the horizon.

In sum, CSOs, CISOs, and CIOs who are not already engaged in AI cybersecurity discussions, at least at the level of understanding, should adjust course and become engaged. For those already engaged, the advice and highlights from this hearing mark where to confirm that knowledge and capabilities are aligned, and that the pipeline for new techniques, experiences, and, above all, compromises stays wide open in receive mode.

Copyright © 2022 IDG Communications, Inc.
