Elon Musk has stated that whatever organization manages to ‘master’ AI will gain access to global control. That’s not just hyperbole. Without proper safeguards in place, a truly autonomous, self-learning artificial intelligence could move freely through our hyperconnected digital world, adapt to new environments, marshal unprecedented resources, and access virtually any digital system.
It sounds like something from a science fiction movie, but a growing number of experts see real risks posed by AI, especially in a world where connectivity has outpaced security, and they are calling for regulations and controls. The concern is sharpest when you view the situation as an arms race between white hat and black hat combatants.
Part of the risk is related to the way AI systems learn. In highly supervised incubators, developers spend years carefully cultivating an artificial intelligence, using structured learning tasks designed to teach it to perform specific functions in a predictable way. Once that training is complete, these systems are usually eased into a “Centaur Model,” where humans work alongside automation and AI. That way, systems can be monitored and corrections can be made as the AI becomes more sophisticated.
Cybercriminals, however, are not as concerned with careful training. The unsupervised learning models they are likely to use in developing AI-based attacks, where speed of development matters more than predictability, are especially dangerous and potentially devastating. Today, many cybercriminals regularly release iterations of attacks as part of their development cycle, and we regularly watch those attacks become more refined and effective as a result of such “real world” testing. When rapidly developed, self-learning, AI-based attack systems are released unsupervised into the wild, however, things are likely to go sideways quickly.
Over the past year, for example, we have seen cybercriminals repeatedly weaponize millions of unsecured IoT devices and use them as a blunt instrument to take out systems and networks. Over the next couple of years, even with just basic AI in place, we expect cybercrime tools to begin leveraging automated vulnerability detection and complex data analysis to develop customized exploits, written entirely by a machine, based on the unique characteristics of a targeted system. It’s a natural evolution of technologies that already exist. As these technologies mature, and attack methodologies become more intelligent and autonomous, there is the looming potential for significant, unstoppable damage to organizations or even nation states.
Polymorphic malware, for example, has been around for decades. It uses pre-coded algorithms to take on new forms to evade security controls, and can produce more than a million virus variations per day. But so far, this process is driven by a static algorithm, with little sophistication or control over the output. Next-generation polymorphic malware built around AI, however, will be able to spontaneously create entirely new, customized attacks rather than mere variations on a fixed recipe, employing automation and machine learning to design attacks tailored to quickly compromise a targeted system. The big difference is the combination of discipline and initiative.
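To see why signature-based detection struggles against even simple polymorphism, consider a minimal, harmless sketch in Python. The payload here is an inert placeholder string, and xor_encode is a hypothetical toy encoder invented for illustration; the point is only that re-encoding identical content with a fresh random key changes every byte, and with it the hash signature a scanner would match on.

```python
import hashlib
import os

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with a repeating key, changing every byte."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# Inert placeholder standing in for a malware body; no real code involved.
payload = b"EXAMPLE-PAYLOAD-BYTES"

# Each "generation" re-encodes the same content with a fresh random key,
# so the resulting bytes, and therefore the hash signature, differ every
# time, even though the decoded content is identical.
for generation in range(3):
    key = os.urandom(8)
    variant = key + xor_encode(payload, key)
    print(f"generation {generation}: {hashlib.sha256(variant).hexdigest()[:16]}...")
```

Each run prints three different digests for what is logically the same content, which is why defenders have had to move from signature matching toward behavioral analysis.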
We’re not talking about future visions or something far down the road. We are already seeing attacks with automated front ends mining for information and vulnerabilities, combined with AI-based analysis to correlate vast amounts of pilfered structured and unstructured data. Of course, these sorts of strategies require massive amounts of computing power, which is why cybercriminals are also targeting cloud services and public infrastructure to launch and manage their attack campaigns. Many cybercriminal organizations now use high-performance computing (HPC) for CPU-intensive attacks such as bitcoin mining or cloud password cracking. They also use distributed computing and processing models to autonomously discover and learn about weak spots in security systems.
As the world of cybercrime evolves, so does the darkweb. We expect to see new service offerings as Crime-as-a-Service organizations apply automation technology to their wares. Advanced services leveraging machine learning are already being offered on darkweb marketplaces. For example, a service known as FUD (fully undetectable) is already part of several offerings. For a fee, criminal developers can upload attack code and malware to an analysis service and receive a report on whether security tools from different vendors are able to detect it. To shorten this cycle, we will see more machine learning used to modify code on the fly, based on how and what has been detected in the lab, making these cybercrime and penetration tools harder to detect. This allows criminals to quickly refine their technology to better circumvent the security devices used by a targeted company or government agency.
To perform such sophisticated scanning and analysis, however, criminal service providers have had to create computing clusters from hijacked compute resources. A recent example is Coinhive: browser plugins infect end-user machines and hijack their CPU cycles to mine virtual currency. This approach rapidly accelerates the time from concept to delivery of new malware that is both more malicious and more difficult to detect and stop. Once true AI is integrated into this process, the offense-versus-defense window (time to breach versus time to detect and protect) will shrink to a matter of milliseconds rather than the hours or days it takes today.
The best defense against such threats is the development of “expert systems”: collections of integrated software and devices that use artificial intelligence techniques to solve complex problems. One example of such an expert system is the security fabric. Highly aware, tightly integrated, and proactive security defense systems are really the only way to keep pace with, or get in front of, the intelligent attacks headed our way. And just as with AI, whoever gets the fabric-based security system right will be in the best position to help organizations survive the next generation of threats. The goal is not only to develop networks that can withstand serious and sustained attacks, but networks that can also anticipate and thwart attacks before they happen. This then becomes the foundation for Intent-Based Security.
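As a rough illustration of the expert-system idea, here is a minimal Python sketch of a rule engine that correlates security events and recommends responses. Everything in it (the Event type, the RULES list, the thresholds, the actions) is hypothetical and chosen for illustration; a real fabric-based system would share such verdicts across firewalls, endpoints, and sandboxes rather than printing them.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    kind: str  # e.g. "port_scan", "failed_login", "exfil_attempt"

# Hypothetical rules: each pairs a condition over the event stream with a
# recommended response. Thresholds and actions are illustrative only.
RULES = [
    (lambda events: sum(e.kind == "failed_login" for e in events) >= 5,
     "lock account and alert analysts"),
    (lambda events: any(e.kind == "port_scan" for e in events)
                    and any(e.kind == "exfil_attempt" for e in events),
     "quarantine host"),
]

def evaluate(events: list[Event]) -> list[str]:
    """Fire every rule whose condition matches the observed events."""
    return [action for condition, action in RULES if condition(events)]

observed = [
    Event("10.0.0.7", "port_scan"),
    Event("10.0.0.7", "failed_login"),
    Event("10.0.0.7", "exfil_attempt"),
]
print(evaluate(observed))  # -> ['quarantine host']
```

The design point is that correlation across events, not any single alert, drives the response; AI techniques extend this by learning such rules instead of having analysts hand-write them.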
Like it or not, this is a winner-takes-all arms race. Organizations that fail to prepare now may not be able to catch up once this race moves to the next level of sophistication.
Read more on Fortinet’s FortiGuard 2018 Threat Landscape Predictions.