Mary K. Pratt
Contributing writer

How Code42 automates insider risk response

Jul 11, 2022 | 6 mins
CSO50 | Incident Response | Risk Management

When insiders exhibit risky behaviors, good-natured bots reach out to provide support in this CSO50 award-winning project.


Jadee Hanson’s security analysts are always on the lookout for risky behaviors, so it’s not surprising that they spot their business-unit colleagues sometimes acting in concerning ways, such as publicly sharing a document that might contain sensitive data.

When that happens, an analyst reaches out to the colleague to determine whether any security rules were violated and to confirm the colleague understands the company’s cybersecurity best practices.

Hanson, the CISO and CIO of Code42, a cybersecurity software company, sees value in that outreach.

She says it can catch and correct problematic behavior, provide an opportunity for security awareness training and identify a potential policy breach at an early stage.

Yet she says she also saw that the number of incidents requiring security’s attention was impacting the security team’s workflow.

For example, in 2021 the security team handled 235 incidents in which sensitive, administrator-level credentials were used. Each incident took approximately 10 minutes, sometimes more, to address, totaling roughly 40 hours of FTE time that year; moreover, each incident interrupted an analyst’s existing work and drew their attention away from other tasks.

At the same time, employees sometimes felt uneasy, as though security was watching and monitoring their every move; that in turn led to some tense interactions and reinforced the negative stereotype of security as the department of “no.”

Determined to find a better way, Hanson and her team turned to automation.

They designed and deployed chatbots for insider risk security incident response. And they trained the bots using their empathetic investigation framework to ensure the bots (and by extension security) had positive engagements with their fellow employees.

Jadee Hanson, CISO and CIO, Code42

The goal: to use automation and chatbots to more efficiently handle incidents while also delivering positive touchpoints between the security team and employees and allowing security analysts to focus on more strategic cybersecurity work. The project earned Hanson and her team a 2022 CSO50 award for security innovation.

“In security, I think we struggle quite a bit just keeping up, and that leads to a lot of us getting burned out as we try to keep up with alerts and all the other daily stuff,” Hanson says. “So my team created robots for something that we do on a very consistent basis while keeping the same tone and culture that we want to carry into the organization.”

Advancing the use of automation

Code42’s security department was already using automation to streamline its operations work before deploying the bots for insider incident response, so Hanson says using the technology to connect with the company’s own employees was a natural extension of that effort.

Yet Hanson wanted the chatbots to retain a human touch. She explains that she and her team wanted the bots to take on the same collegial tone her security workers use to build goodwill with their business-side colleagues.

“You want to make sure those interactions between the bots and the employees have the same tone and culture that security is trying to bring to the organization. So I’m super picky about the messaging, because I want to make sure it comes off with the right tone,” Hanson says.

“The words that a security team uses when they talk to employees are critical, and sometimes that is overlooked. But they’re really, really important. I want my security team perceived as helpful, so we have to think about how we want the company to think about us, especially when we reach out in communications. We want them to say, ‘What a nice message.’”

Hanson says attention to this topic pays dividends for security.

“For a long time, security organizations have sort of blamed the end user, and we say ‘no’ a lot. I think we’re well intended—our job is to protect the organization, so we have to say no and we get angry when end users do something that they shouldn’t do. But I don’t know if we have taken on enough of the accountability. It’s our responsibility to educate the rest of the organization on the right way to do something. It’s also our responsibility to figure out how to say ‘yes,’” she says.

‘First assume positive intent’

To ensure positive interactions between employees and the bots, Hanson’s security team leaned on what they call an empathetic investigation framework, built on insights from the company’s own Incydr product.

“When approaching users during an insider risk investigation, security analysts should ensure interactions exercise tact, empathy, and caution,” she explains. “Investigating this way is a shift from how traditional malware security investigations are performed. This new approach was developed over time by insider risk management analysts using Code42 Incydr. It is founded on the principle that employees are trusted experts who are enabled to do their jobs. You wouldn’t treat a colleague the same way you’d treat an external attacker. For a coworker, you’d first assume they acted with positive intent—perhaps they accidentally shared that document too broadly or maybe they didn’t know our data ownership policy because they’re still new.”

Hanson says the security team has found that their business-unit colleagues are, in fact, usually unaware that they have taken a risky action.

“A friendly message or phone call asking if the person meant to take that action often cleared things up and the action was corrected quickly,” Hanson says. “We’d then take the opportunity to remind them about a protocol, offer training, or share resource materials in the moment, which went a long way in making our company culture more security-aware in the long run.”

Given how well that approach works, Hanson says she wanted the bots to act in the same empathetic way.

“The [automated] message looks like it’s coming from my insider risk analyst; even though the bot is doing the work, it uses his tone and people have this sense that he sent the note and he’s really nice about it,” she says.

Hanson says automating insider risk incident response saves time for both security analysts and employees performing high-security activities and, through open and consistent dialogue, fosters positive relationships between security and other employees.

And, she adds, the automation work so far is fueling further automation within the security program.

It’s a success that Hanson and her team are eager to share: They’re publishing information about the empathetic investigation framework as well as their technical playbook, built for Palo Alto Cortex XSOAR and Slack, to encourage open-source development and contributions back to the security community.
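The article doesn’t reproduce the playbook itself, but the pattern it describes — an automated first touch that assumes positive intent and mirrors a friendly analyst’s voice — can be sketched in a few lines. The function below is purely illustrative (the names, wording, and structure are this writer’s assumptions, not Code42’s actual XSOAR playbook):

```python
# Hypothetical sketch of an "assume positive intent" outreach message that an
# insider-risk response bot might post to Slack. Not Code42's actual playbook;
# all names and phrasing here are illustrative assumptions.

def build_outreach_message(employee: str, event: str, analyst: str) -> str:
    """Compose a friendly check-in that assumes the employee acted in good faith."""
    return (
        f"Hi {employee}! This is the security team's assistant, reaching out on "
        f"behalf of {analyst}. We noticed that {event}. No worries if that was "
        "intentional. Could you confirm, or let us know if you'd like a hand "
        "correcting it? Happy to share our data-handling guide either way. Thanks!"
    )

if __name__ == "__main__":
    # Example: a document that may contain sensitive data was shared publicly.
    message = build_outreach_message(
        employee="Sam",
        event="a document with possible customer data was shared publicly",
        analyst="our insider risk analyst",
    )
    print(message)
```

In a real deployment, a SOAR playbook would trigger on the detection alert, fill in the event details, and deliver a message like this through a Slack integration, escalating to a human analyst only if the reply raises concerns.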