Phishing and social engineering schemes have been making headlines more than ever. From spear phishing to whaling to business email compromise (BEC), these tried-and-true tactics continue to succeed.
How many times have we all almost clicked on that email from “Microsoft” asking us to re-enter our credit card details for our Office 365 account? Or the “Apple” message asking us to click a link to confirm that we didn’t make an iTunes purchase? Or the “DocuSign” document link from a financial institution? All of these looked real at first blush, particularly when viewed on a small smartphone screen. But none of them were. And all could have had drastic consequences if the links were followed.
Wouldn’t it be great if there were some way to help end users check those potentially malicious links and documents, and scan the content to make sure that opening them won’t unleash devastation? Actually, there is: artificial intelligence (AI). AI solutions are just getting started, but as the technology becomes more widely adopted, we will begin to mitigate the impact of phishing emails more effectively, just as machine learning has helped to eliminate spam.
How to fight phishing
There’s no question that phishing still presents a major headache. Some 91% of cyberattacks start with a phishing email. Such emails usually play on the reader’s curiosity, fear or sense of urgency. Even highly educated, web-savvy users can fall victim. Most famously, an aide to Hillary Clinton’s campaign chairman, John Podesta, recognized a suspicious email as a phishing attempt but told colleagues it was “legitimate” when he meant “illegitimate.” The typo led to a massive email hack that played a big part in Clinton’s loss, after a subordinate changed Podesta’s password as the malicious email had requested.
As such attacks illustrate, hackers can learn a bit about us and craft an email that’s plausible enough to fool us. Phishing takes advantage of the fact that we’re often harried at work and not thinking clearly.
Security awareness training company KnowBe4 has stepped up to the plate and started talking about its Artificial Intelligence Driven Agent, or AIDA. The company calls it a “smart sidekick that trains your employees to make smarter security decisions.” Using AI, the goal is to dynamically create integrated campaigns that send emails, text messages and voicemail to an employee, simulating a multi-vector social engineering attack.
But knowing that employees will continue to make mistakes, machines also need to assist in other ways. Google uses machine learning to block spam. This year, it also started using machine learning for phishing detection. Google’s machine learning model delays about 0.05% of messages to perform rigorous phishing analyses. As a result, it blocks 99.9% of spam and phishing messages.
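To make that concrete, here’s a minimal sketch of the idea behind such text-based detection, not Google’s actual system. It assumes scikit-learn and uses a tiny, made-up set of labeled emails purely for illustration:

```python
# A minimal sketch of text-based phishing classification (illustration only,
# not Google's system). Assumes scikit-learn; the tiny dataset is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: real systems learn from millions of labeled messages.
emails = [
    "Your Office 365 payment failed. Re-enter your credit card details now.",
    "Confirm the iTunes purchase you did not make by clicking this link.",
    "Please review the attached agenda before Thursday's team meeting.",
    "Here are the quarterly numbers we discussed on the call.",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each email into weighted word and phrase counts;
# logistic regression learns which of them signal phishing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Urgent: verify your payment details or your account will be suspended"]
print(model.predict_proba(suspect))  # [probability legitimate, probability phishing]
```

Trained on millions of real messages rather than four toy examples, the same approach picks up on phrasing and word combinations that a hand-written rule list would never enumerate.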
Not everyone has Gmail, though. And most don’t have solutions like those from Proofpoint, which verifies links in emails. Most of phishing’s damage will be done at the corporate level, and businesses aren’t always interested in paying more for enhanced email security. Even when they do, many of these existing security solutions are simplistic and ineffective, merely scanning links for patterns that indicate potential malicious intent.
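Here’s a hypothetical example of what that kind of static link check can look like; the blocklist and patterns below are invented for illustration and aren’t taken from any real product:

```python
# A hypothetical sketch of a simple, rule-based link check.
# The blocklist and patterns below are invented for illustration.
import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"evil-login.example", "paypa1-secure.example"}
SUSPICIOUS_PATTERNS = [
    re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"),  # raw IP in place of a hostname
    re.compile(r"@"),  # userinfo tricks such as http://bank.com@attacker.example
    re.compile(r"login|verify|update|secure", re.IGNORECASE),  # scare words in the URL
]

def looks_malicious(url: str) -> bool:
    """Flag a URL using only static rules: a known-bad list plus keyword patterns."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return True
    return any(pattern.search(url) for pattern in SUSPICIOUS_PATTERNS)

print(looks_malicious("http://paypa1-secure.example/account"))   # True: on the blocklist
print(looks_malicious("http://docs-share-portal.example/view"))  # False: new domain slips through
```

Anything that isn’t already on the list, or that doesn’t match a known pattern, sails through, which is exactly the gap attackers exploit with freshly registered lookalike domains.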
These solutions are only as good as their existing knowledge. AI can automatically expand the consideration set, looking across vast datasets of emails at combinations of forwarding addresses, payload data and even discrepancies in text or graphics. While we are still in the early stages, some current vendors already have this kind of intelligence under the hood.
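As a rough, hypothetical sketch of what expanding that consideration set could look like, the example below pulls several signals out of a single message: a brand name in the display name that doesn’t match the sending domain, a reply-to address pointing elsewhere, the hosts behind the links and urgency wording in the body. A trained model could then weigh these together. The field names, features and example data are assumptions for illustration only:

```python
# A hypothetical sketch of extracting richer signals from one email for an ML model.
# Field names, features and example data are illustrative, not a real vendor's schema.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Email:
    display_name: str
    from_address: str
    reply_to: str
    links: list[str]
    body: str

def extract_features(msg: Email) -> dict[str, float]:
    """Turn one message into numeric features a trained classifier could score."""
    from_domain = msg.from_address.split("@")[-1].lower()
    reply_domain = msg.reply_to.split("@")[-1].lower()
    link_hosts = [urlparse(u).hostname or "" for u in msg.links]
    return {
        # Brand name in the display name but a different sending domain.
        "brand_domain_mismatch": float("microsoft" in msg.display_name.lower()
                                       and "microsoft.com" not in from_domain),
        # Replies silently routed to a different domain than the sender.
        "reply_to_mismatch": float(from_domain != reply_domain),
        # Share of links whose host differs from the sending domain.
        "foreign_link_ratio": (sum(from_domain not in h for h in link_hosts)
                               / max(len(link_hosts), 1)),
        # Urgency wording in the body text.
        "urgency_words": float(any(w in msg.body.lower()
                                   for w in ("immediately", "suspended", "verify now"))),
    }

msg = Email(
    display_name="Microsoft Support",
    from_address="billing@micros0ft-support.example",
    reply_to="helpdesk@collect-creds.example",
    links=["http://micros0ft-support.example/verify"],
    body="Your account will be suspended. Verify now to keep access.",
)
print(extract_features(msg))  # these features could feed a model like the one sketched above
```

The point isn’t these particular features; it’s that a model trained across vast numbers of emails can learn which combinations of such signals matter, instead of relying on whatever rules a human thought to write down.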
A generation ago, antivirus software was based on rules, blacklists and whitelists. Now we’ve got providers like Cylance using systems that learn, and that can determine whether a file is malicious before the first human detects it, helping prevent zero-day attacks. I see the same enormous potential for combating phishing with AI.
Of course, as security companies perfect their phishing defenses using AI, hackers will try to find another way in. In the meantime, AI might make phishing attacks less common. And we all might finally be able to open those jokes and YouTube video links that our friends and relatives send us.