7 guidelines for identifying and mitigating AI-enabled phishing campaigns

Phishing has always been a thorn in the side of enterprise cybersecurity, and recent AI developments such as ChatGPT are making things even worse. Here are some guidelines for dealing with the increasingly sophisticated phishing threat.


The emergence of effective natural language processing tools such as ChatGPT means it's time to begin understanding how to harden against AI-enabled cyberattacks. The natural language generation capabilities of large language models (LLMs) are a natural fit for one of cybercrime’s most important attack vectors: phishing. Phishing relies on fooling people, and the ability to generate effective language and other content at scale is a major tool in the hacker’s kit.

Fortunately, there are several good ways to mitigate this growing threat. Here are seven guidelines for readiness in the age of AI-enabled phishing:

Understand the threat

A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool. At present, the area of AI most relevant to cybersecurity is content generation. This is where machine learning is making its greatest strides, and it dovetails neatly with attack vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is now in the hands of anyone with access to ChatGPT, which is basically anyone with an internet connection.

“Looking for bad grammar and incorrect spelling is a thing of the past — even pre-ChatGPT phishing emails have been getting more sophisticated,” says Conal Gallagher, CIO and CISO at IT management firm Flexera. “We must ask: ‘Is the email expected? Is the from address legit? Is the email enticing you to click on a link?’ Security awareness training still has a place to play here.”

Gallagher highlights research from cybersecurity company WithSecure that demonstrates a series of interactions with ChatGPT in which the AI generates effective phishing emails. This and other research confirms that the safety rails intended to stop AI tools from being used for illegal purposes are unreliable, and that custom tools are already being built for these purposes.

We must recognize that AI can be used now to generate effective content and that it is going to get better at it. LLM tools will improve, they will become more available to hackers, and custom tooling will be created for them. Now is a good moment to start thinking about and taking steps to strengthen security policies.

We must also expect phishing content to become not just more compelling but better targeted, able to incorporate specifics of time, place, and events. Employees can no longer rely on obvious signs that an email is malicious. Images, even audio and video, can be faked with content generation techniques. It must be continually reiterated that any unexpected email is suspect.

Mindset and culture are the main defenses

“Ninety percent of cybercrime victimizations easily could be prevented if end users were armed with a few key pieces of knowledge,” Scott Augenbaum, a retired supervisory special agent of the FBI Cyber Division, tells CSO. “Why don't we start there? Unfortunately, everything else costs money and does not appear to be working. I wish someone would tell me I was wrong so I can really retire.”

“Your first line of defense is becoming your own human firewall,” Augenbaum says. That is to say, the human mindset is the centerpiece of cybersecurity. Therefore, the cultivation of that mindset within an enterprise is key.

“Culture eats strategy for breakfast and is always top-down,” says KnowBe4 CEO Stu Sjouwerman. The day-to-day thinking and behavior of employees is the baseline immune system for the enterprise, so consistent security-awareness training is key. With AI-enabled phishing, the important message is that email and other communication should not be given weight based on the polish and sophistication of its language. Phishers no longer fail the laugh test, and a higher degree of vigilance is now demanded of employees.

Emphasize taking action properly

Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action. This is where we can install a tripwire in our mindsets: we should be hyperaware of what it is we are acting upon when we act upon it. Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: “Is the content I’m looking at legit, not just based on its internal aspects, but given the entire context?” The second ring of defense in our mentality then has to be, “Wait! I’m being asked to do something here.”

When users go a step further after receiving a phishing attempt, that’s a big win for bad actors: only with that element in place can an attack proceed. Security professionals should train themselves, employees, and anyone else who’ll listen to hear alarm bells when prompted to enter information or run an unfamiliar application.

Of course, when doing something like wiring money, the sense of caution should be elevated. With deepfakes, there have even been instances of employees believing their superiors have sent them legitimate directions to send money. Communication of high importance should be verified in a second, un-phishable channel.

“Everyone’s first response should be to visit the organization directly and look for a message versus clicking on a link,” says Bob Kelly, director of product management at Flexera.

Run phishing simulations

The only way to see how well a business is combating phishing is to run tests, and phishing simulations using AI-generated content are an important part of countering the threat. Running an effective campaign is a topic of its own, but a good one begins with setting concrete goals: measurable metrics should guide the testing. A good example is to measure how frequently phishing emails are reported and then work to move the needle on that indicator.
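As a sketch of what goal-driven measurement looks like, the report rate (and its commonly tracked companion, the click rate) can be computed from per-employee simulation outcomes. The data model here is hypothetical and not tied to any particular simulation platform:

```python
from dataclasses import dataclass

@dataclass
class SimulationResult:
    """Outcome for one employee in a simulated phishing campaign (illustrative model)."""
    clicked: bool    # followed the link in the test email
    reported: bool   # flagged the email to security

def campaign_metrics(results: list[SimulationResult]) -> dict[str, float]:
    """Compute the concrete indicators a campaign can track over time."""
    total = len(results)
    clicks = sum(r.clicked for r in results)
    reports = sum(r.reported for r in results)
    return {
        "click_rate": clicks / total,    # lower is better
        "report_rate": reports / total,  # higher is better
    }
```

Tracking these two numbers across successive campaigns is what "moving the needle" means in practice.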

Building an anti-phishing campaign will also hammer home how useful AI tools can be for generating effective content, which reinforces the need to take the problem seriously. “While AI is persistent, it’s possible for your security to be resilient by frequently reinforcing security best practices, and putting them to the test,” JumpCloud security engineer Trevor Duncan tells CSO. “If you aren’t currently engaging your employees in simulated social engineering attacks, that’s a great item to add for a 2023 plan to improve your security posture and bring resilience to your security program.”

Incorporate tools that automate AI detection

OpenAI (the company behind ChatGPT) and others have released tools to detect AI-generated text. Such tools will continue to improve alongside NLP generators, and they can be integrated and automated to help detect malicious content. Many vendors of email scanning tools are starting to leverage AI to help fine-tune how they understand contexts such as metadata and location when assessing what is legitimate content. Fighting fire with fire — in this case, using AI to fight AI — is an important part of the future of cybersecurity.
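The integration point is usually a triage hook in the mail pipeline: a detector scores each message, and the score is combined with context (sender reputation, metadata, and so on) before a delivery decision. The sketch below is purely illustrative; `score_ai_generated` is a hypothetical stand-in for whatever detector or vendor API is actually deployed, not a real service:

```python
def score_ai_generated(body: str) -> float:
    """Hypothetical detector: 0.0-1.0 likelihood the text is AI-generated.
    A real deployment would call a trained classifier or vendor service here;
    this keyword check exists only so the sketch runs end to end."""
    return 0.9 if "verify your account" in body.lower() else 0.1

def triage(body: str, sender_known: bool, threshold: float = 0.8) -> str:
    """Combine the detector score with one piece of context (sender reputation)."""
    score = score_ai_generated(body)
    if score >= threshold and not sender_known:
        return "quarantine"
    if score >= threshold:
        return "flag-for-review"
    return "deliver"
```

The design point is that the AI-detection score is one signal among several, never a verdict on its own.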

Phishing detection is a key part of an overall network and infrastructure strategy, and it’s particularly effective when AI-assisted infrastructure reconnaissance and infiltration is met with AI-assisted detection and prevention. Many top security firms, such as Okta and Darktrace, are moving to incorporate such tools in their offerings.

“Bots are an effective tool for attackers, because they leverage AI and machine learning to rapidly adapt to, and overcome changing security posture,” Jameeka Green Aaron, CISO, customer identity at Okta, tells CSO. “If we want to stay ahead, we should be leveraging automation that’s built to ingest real-time threat intelligence, and adaptive authentication, which is a method for verifying a user’s identity based on factors, such as location, device status, and end-user behavior.”

AI detection is an active frontier in machine learning research. That research will continue to be brought into the enterprise as a tool to fight AI-enabled phishing, and should be a space we watch closely in the coming months.

Provide an easy mechanism to report phishing

Alerting security about phishing is essential in dealing with AI-enabled attacks. Because AI campaigns can be more efficiently mass-produced, recognizing them as they unfold is important; it allows you to inform employees quickly and provides critical input for anti-phishing tools and AI detection models.

In addition to making it easy to file a report, make sure the mechanism captures as much information as possible to improve its value and make it actionable. Forwarding an email to a reporting address captures all the headers and metadata in the message, while a portal with a simple form works well for reporting phishing websites and the like. Government bodies, including CISA, increasingly encourage organizations to adopt DMARC (Domain-based Message Authentication, Reporting, and Conformance) policies, and CISA provides a number of recommendations.
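A DMARC policy is published as a DNS TXT record on the `_dmarc` subdomain. A minimal example follows (the domain and report address are placeholders, and the right policy level — `none`, `quarantine`, or `reject` — depends on the organization's rollout stage):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The `rua` tag is what ties DMARC to reporting: receiving mail servers send aggregate reports about authentication failures to that address.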

The phishing report is a vital part of any robust security infrastructure, and effective reporting becomes especially important with AI campaigns because attackers can more readily scale spear-phishing-style attacks (attacks that incorporate specifics from within the organization) by automatically gathering and incorporating such information. This is a good aspect to focus on when running tests of phishing detection and reporting systems.
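Reports forwarded to a dedicated mailbox can be processed automatically. A minimal sketch using Python's standard `email` library pulls out the headers most useful for triage; the particular field selection is illustrative:

```python
import email
from email import policy

def extract_report_metadata(raw_message: bytes) -> dict[str, str]:
    """Parse a reported message and keep the headers most useful for triage.
    Missing headers come back as empty strings so downstream tooling
    can rely on the keys being present."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    return {
        "from": msg.get("From", ""),
        "reply_to": msg.get("Reply-To", ""),
        "return_path": msg.get("Return-Path", ""),
        "subject": msg.get("Subject", ""),
        "auth_results": msg.get("Authentication-Results", ""),
    }
```

Structured capture like this is what makes a report actionable: the extracted fields can feed blocklists, detection rules, and the AI-detection models mentioned earlier.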

Incorporate phishing-resistant authentication

Password-based authentication is inherently susceptible to phishing, and supplementary challenges such as CAPTCHAs are themselves increasingly vulnerable to AI. On the other hand, some authentication approaches resist phishing by design. Passkeys are probably the most phishing-resistant mode of authentication: they are still being developed and deployed but are becoming more mainstream, and once adopted they are basically unphishable, because the credential is bound to the legitimate site.
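The reason passkeys resist phishing is origin binding: the client signs the server's challenge together with the origin it is actually talking to, so a response captured on a lookalike domain never verifies for the real one. The sketch below illustrates only this concept — it is not the real WebAuthn protocol, and HMAC stands in for the public-key signature a passkey would actually use:

```python
import hashlib
import hmac
import os

def sign_assertion(key: bytes, challenge: bytes, origin: str) -> bytes:
    """Client side: the origin is bound into the signed message."""
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

def verify_assertion(key: bytes, challenge: bytes, expected_origin: str, sig: bytes) -> bool:
    """Server side: verification succeeds only for the server's own origin."""
    expected = hmac.new(key, challenge + expected_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

A credential tricked out of a user on `bank-login.example` produces a signature over the wrong origin, so the real `bank.example` rejects it; nothing the phishing site captures is replayable.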

Multifactor authentication (MFA) also helps, because exposing a username/password combination on a phishing site or interaction isn’t enough for a hacker to gain access to a resource if a secondary authenticator is required. CISA has published an overview of phishing-resistant MFA.

Copyright © 2023 IDG Communications, Inc.
