Corporate pre-crime: The ethics of using AI to identify future insider threats

Aug 20, 2018 | 13 mins
Analytics, Legal, Machine Learning

Remember “Minority Report”? Artificial intelligence can spot employee behavior that suggests a future risk. Here’s how to use that data ethically and effectively.

Credit: Vijay Patel / Getty Images

To protect corporate networks against malware, data exfiltration and other threats, security departments have systems in place to monitor email traffic, URLs and employee behaviors. With artificial intelligence (AI) and machine learning, this data can also be used to make predictions. Is an employee planning to steal data? To defraud the company? To engage in insider trading? To sexually harass another employee?

As AI gets better, companies will need to make ethical decisions about how they use this new ability to monitor employees, particularly around what behaviors to watch out for and what interventions are appropriate. Information security teams will be on the front lines.

In fact, some types of predictions about employee behaviors are already possible. “The reality is that it’s really easy to determine if someone is going to leave their job before they announce it,” says one top information security professional at a Fortune 500 company, who did not want to be named. “I started doing it ten years ago, and it’s actually highly reliable.”

For example, an employee about to leave the company will send more emails with attachments to their personal address than usual, he says. This is important for security teams to keep an eye on, since departing employees might want to take sensitive information with them when they go, and they will try to download everything early, before they tell their managers about their plans.
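The detection he describes is essentially baseline-and-outlier analysis on a per-user metric. A minimal sketch of that idea, in Python, might flag a user whose weekly count of attachments sent to a personal address spikes far above their own history (the function name, the z-score method, and the threshold are illustrative assumptions, not the unnamed company's actual system):

```python
from statistics import mean, stdev

def flag_exfiltration_risk(weekly_counts, current_week, threshold=3.0):
    """Flag a user whose outbound-attachment volume to personal addresses
    this week is an outlier versus their own historical baseline.
    weekly_counts: past weekly counts for this user (hypothetical schema).
    """
    if len(weekly_counts) < 4:
        return False  # not enough history to form a baseline
    mu, sigma = mean(weekly_counts), stdev(weekly_counts)
    if sigma == 0:
        return current_week > mu  # any increase over a perfectly flat baseline
    z = (current_week - mu) / sigma  # standard deviations above the user's norm
    return z >= threshold

# A steady baseline of 1-3 attachments per week, then a sudden spike of 25
history = [2, 1, 3, 2, 2, 1]
print(flag_exfiltration_risk(history, 25))  # True: far above baseline
print(flag_exfiltration_risk(history, 3))   # False: within normal variation
```

Keying the baseline to each individual user, rather than a company-wide average, is what makes this "highly reliable" in the sense the interviewee describes: the signal is a change in that person's behavior, not an absolute volume.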

This is a valid security concern, and employees are notified ahead of time that the company monitors their work emails. “Most of the time, if we know the person is leaving, we put them on a high-risk list of users that have additional controls in place,” he says.

He wouldn’t tell the employee’s manager that the employee was planning to go, he added. “We’ve never done that and I don’t see a situation where we would do that,” he says. “And we’ve had dozens of those situations.”

If the employee is caught stealing information, then that’s a different story. “We will alert the manager, and we talk to the employee about it,” he says.

When is it appropriate to read employees’ email?

Most companies notify their employees that they monitor their email communications and internet use. Few companies look closely at employees’ personal communications. “We’ve made the decision not to read the contents of someone’s email,” says Laura Norén, director of research at Obsidian Security, which helps companies use AI and machine learning to spot cyberthreats.

For example, companies can tell if someone is looking for other jobs from their day-to-day activities, Norén says, but it’s not always perfect, since the employee might not get that other job or might turn down the job offer.

Data scientists have a more complete understanding of an individual than ever before, Norén says. “If someone is concerned that they might have cancer, they’re probably typing that stuff into the search field, and you might know about it before they even confirm the diagnosis,” she says.

“There are companies that want to predict much more. Are [employees] using drugs, hiring sex workers, having in-office affairs?” Norén adds. “We don’t go there, but we know that other companies are doing it.”

One potential risk of monitoring employee behavior too closely is that it hurts staff morale. “Employees do give their consent, either explicitly or implicitly, to be surveilled,” Norén says. “So, it’s legal. It’s not a problem; they signed away that right. But nobody ever reads those documents, and we tend to continue to remind employees that we’re watching.”

In some fields, like financial services, employees get regular reminders that their communications are monitored. “We’d like to reinsert that into most workplaces,” Norén says.

It’s not just email and browsing history that are potential fodder for AI systems. “It is a short jump to apply the same intelligence and analytic tools to softer data gathered about employees from a variety of other sources,” says Thomas Weithman, managing director at CIT GAP Funds, a family of investment funds focusing on early-stage tech startups. These range from reported interactions with other employees to information drawn from security cameras and building access control systems, he says.

When do you act on predictions?

Once a company has collected and analyzed data, and has a prediction about some potentially harmful behavior, it has a number of steps it could take, anything from ignoring the prediction to firing the employee. In some cases, there’s a legal line that companies shouldn’t cross, says Obsidian’s Norén. “Something like pregnancy is a protected status,” she says. “It would be a bad idea to fire someone if they’re searching around for fertility firms.”

In other cases, it’s more of a gray area. Say, for example, an employee is making plans to exfiltrate sensitive data or set up an unapproved cryptomining rig on company servers. That could be of interest to the security team.

One way to look at potential interventions is whether they are likely to cause harm, Norén suggests. “If you fire them, or you cause them to be stigmatized in some way. For example, talking to their manager might be stigmatizing.”

One approach for dealing with potentially harmful behavior is to check if it is a symptom of a larger problem. “If they’re planning on setting up a cryptocurrency operation on the company servers, maybe they’re not the only ones thinking about doing it,” Norén says.

In that case, a company might consider a wider response. “Maybe we should set up rate limits company wide, so it doesn’t single that employee out in any way, and they’re not punished for something they haven’t done,” Norén says.

Similarly, if an employee is about to leave a company, there might be other people who are also about to leave but are more subtle about it. “You might want to take action helpful to that employee and similar employees,” Norén says. “That might be a positive intervention, so they’re not being punished.”

Some decisions will be extremely difficult. “We can predict sexual harassment, probably with a fair degree of accuracy,” says Norén. “In the ‘MeToo’ era, that could certainly affect companies and it’s an ethically fraught domain.”

Stepping in and punishing the would-be harasser, or reassigning the targeted employees, could be disruptive to the company. “We would try to work with social scientists who are experienced in these kinds of non-consensual relationships to come up with an intervention that doesn’t punish someone for something he hasn’t done but removes some of the other factors that would need to be in place for the sexual harassment to occur,” Norén says. “You might have fewer events where there is drinking, for example.”

This is an issue that’s very much top of mind for the Los Angeles County Department of Human Resources. LA County has 111,000 employees and is currently updating policies and procedures around interpersonal relationships to protect employees from harm.

“LA County has been recognized as a pioneer, not just in government but in the private sector, when it comes to equity issues,” says Murtaza Masood, the department’s director. He was previously the CIO for the department and is leading many of the digital transformation initiatives for the county, including using AI for HR investigations.

“Now we want to deploy AI and process automation and gain insights in behavior patterns and issues,” Masood says. “But the goal isn’t individualized tracking and profiling — that’s where it crosses an ethical line and a legal line.” The technology isn’t there yet to predict individual behavior, he adds.

Instead, the goal is to look for factors that contribute to certain behavior patterns, to look for clusters of behaviors, and to create training policies and interventions that enhance good behaviors and minimize the bad, Masood says.

The county is using technology from OpenText to track what people are doing on their desktops and emails. “Then we can see where trends are forming as they are forming,” Masood says. “It’s about being proactive and responding before the fact, not after the fact.”

But there are certain situations where immediate and individual intervention is warranted, and that’s in the area of cybersecurity. If an employee is acting in a way that indicates a potential security problem, then a conversation with that employee or their manager might be appropriate, Masood says. “Is there a business reason causing them to do that, or is there something untoward happening?”

For example, if an employee is using someone else’s computer, logging in from unusual locations, and trying to export massive amounts of data, it could be benign, he says, or it could be an indicator. “But there’s a difference between that and using AI and machine learning for deterministic reasons,” Masood says. “We’re not there yet. Maybe in my lifetime I’ll see it, but for machines to be able to determine intent, that bridge hasn’t been crossed yet.”
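Masood's example combines several individually weak signals before anyone intervenes. One common way to operationalize that is a weighted risk score with a human-review threshold; the sketch below is an illustrative assumption (the signal names come from his example, but the weights, threshold, and scoring scheme are invented for clarity, not LA County's actual system):

```python
# Weights are hypothetical: no single signal justifies action on its own,
# but several together warrant a human conversation.
RISK_WEIGHTS = {
    "shared_workstation_login": 2,  # using someone else's computer
    "unusual_login_location": 2,    # logging in from unexpected places
    "bulk_data_export": 4,          # trying to export massive amounts of data
}

def risk_score(observed_signals):
    """Sum the weights of the signals observed for one user."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in observed_signals)

def needs_review(observed_signals, threshold=6):
    """A score at or above threshold triggers a conversation with the
    employee or their manager -- a human decision, not an automated one."""
    return risk_score(observed_signals) >= threshold

print(needs_review(["bulk_data_export"]))  # False: one signal could be benign
print(needs_review(["shared_workstation_login",
                    "unusual_login_location",
                    "bulk_data_export"]))   # True: the combination is an indicator
```

Note that the output of `needs_review` is a prompt for a conversation, consistent with Masood's "assumption of neutrality": the score asks whether there is a business reason, it does not conclude intent.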

The proper starting point is an assumption of neutrality, Masood says. “That’s the key difference with the ‘Minority Report’, where you’re judge and jury.”

Seek the help of privacy experts

Three years ago, Aetna, the country’s third-largest health insurance company, began looking for better authentication technology. “Passwords are becoming obsolete,” says Aetna CSO Jim Routh.

To help with authentication, Aetna is collecting behavioral data, both for customers and for employees. “Technology allows us to capture behavioral information, but some attributes are not benign and could cause damage to an individual’s privacy,” Routh says.

That behavioral information, combined with machine learning analytics, allows a company to potentially know a great deal about a person. Maybe even too much. To avoid a “Minority Report”-type of situation, Aetna imposed limits on what information it would collect and how it would use it.

First, it looked toward the company’s core values and brought in some experts, to decide what data points should and shouldn’t be collected. “We took our chief privacy officer and her team, and we locked them in a room to help us choose the ones that would not cause privacy concerns,” he says. “We eliminated about 20 to 25 percent of the attributes that we could have captured.”

For example, Aetna could collect browser history information on consumers and employees, but that was an easy one to exclude, Routh says. “That’s one of the attributes that we will never use across any of our platforms, either for employees or customers.”

Next, behavioral information collected by, say, the mobile app was processed so that the behavioral patterns stored for each user, and the new behavior patterns being evaluated, were never in the clear. “If any of that information was exposed, there’s no privacy sensitivity. It’s a bunch of numbers and formulas,” Routh says. For example, geolocation is used sparingly, and only for comparison purposes. It is never stored.

Even if a hacker got access to the risk engine and was also able to decode the formulas, those formulas are constantly changing, Routh says. “Our values help us determine the actual control design,” he says. “And we designed a risk engine that would accept algorithmic formulas rather than the actual attributes and characteristics.”
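One standard way to achieve the property Routh describes, comparing a behavioral pattern without ever storing the raw attributes, is to persist only a keyed digest of the features. The sketch below is an assumption about the general technique, not Aetna's actual risk engine; the key name, feature schema, and use of HMAC are all illustrative:

```python
import hashlib
import hmac

# Hypothetical secret; rotating it regularly means leaked digests go stale,
# echoing Routh's point that the stored formulas are constantly changing.
SECRET_KEY = b"rotate-me-regularly"

def behavioral_digest(features, key=SECRET_KEY):
    """Reduce raw behavioral attributes (e.g., a coarse geolocation bucket)
    to a keyed digest. If the stored value leaks, it reveals nothing about
    the person, but it still supports comparison against a new observation."""
    canonical = "|".join(f"{k}={features[k]}" for k in sorted(features))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def matches_stored_pattern(new_features, stored_digest, key=SECRET_KEY):
    """Compare a fresh observation to the stored digest. The raw attributes
    exist only transiently here and are never persisted in the clear."""
    return hmac.compare_digest(behavioral_digest(new_features, key),
                               stored_digest)
```

The design choice matches the quote: what sits in the risk engine is “a bunch of numbers and formulas,” and only a live login attempt ever carries the real attributes.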

Values and transparency — and human judgment — should be at the heart of any corporate policy about how AI should be used in predicting employee behavior, says Kurt Long, CEO and founder at FairWarning. “Always, there should be a human in the loop,” he says. “AI must be seen as supervised learning systems, with humans making the final decisions. So, you set up an ethics policy based on transparency and values alignment, and you enforce the policy by keeping a human in the loop.”

Companies that are using AI to predict and stop dangerous employee behaviors need to balance their need to protect their employees, their customers, and the company itself against the rights of the employees. “There are indications that U.S. state and federal law may be moving in the direction of stronger protections for individual privacy, possibly including employees,” says attorney Jaime Tuite, an employee issues expert and shareholder at Pittsburgh-based law firm Buchanan Ingersoll and Rooney.

In addition, the AI technology itself may be inaccurate — or unfair. “If the test found that people who got up more from their workstation are more likely to be harassers, and a group of women got up more frequently because they were pregnant or pumping milk, the test could be unfairly biased against pregnant women,” Tuite says. “If the test is validated — not biased — and reliable, the employer could use it, just as it can use other personality tests or random allegedly objective measures to terminate whoever it wants,” she says.
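Tuite's hypothetical is testable: before trusting such a tool, an employer can audit whether it flags one group of employees at a disproportionate rate. A minimal sketch of that audit (the record schema and group labels are hypothetical; the per-group rate comparison is one common fairness check, not a legal standard):

```python
def flag_rates(records):
    """records: iterable of (group, flagged) pairs -- hypothetical schema.
    Returns the fraction of each group the tool flagged, so an auditor can
    spot a group being flagged disproportionately before acting on output."""
    totals, flagged = {}, {}
    for group, is_flagged in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(is_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical audit data: group B is flagged three times as often as group A,
# which should prompt a validation review before any employment action.
audit = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", True), ("B", False)]
print(flag_rates(audit))  # {'A': 0.25, 'B': 0.75}
```

In Tuite's example, the "group" dimension would be something like pregnancy status, and a large rate gap would be exactly the evidence that the workstation-movement proxy is biased rather than validated.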

The key is to apply the rules consistently, she says. “If an employer is going to take action based on something it learns from AI (for example, AI flags an inappropriate online posting), it has to decide at what point it is going to take action, and then make sure that it does so in all future cases.”

The action also has to be appropriate. For example, if someone tells HR that a colleague is misusing medical leave to go on vacation, the appropriate response would be to investigate, even if there isn’t yet any actual evidence of wrongdoing. Disciplinary action would follow after the investigation.

“The situation would be no different if an employer learns about potential wrongdoing via an AI system,” Tuite says. “It is all about risk and consistency in treatment.”

If the AI prediction is accurate and the company fails to act to protect employees, that could potentially lead to legal liability, Tuite adds. “The company could be exposed to a claim for negligent retention of the employee if it knew or should have known that the employee was going to sexually harass or otherwise act inappropriately toward fellow employees,” she says. “While these claims are typically fact-intensive and difficult to prove, employers are already facing this issue to some degree when it comes to monitoring employees’ social media usage and posts online.”

Companies should be careful about rushing to use AI to monitor employees for potential misconduct, Tuite says, until there is greater clarity about the legal rules governing the use of AI in the workplace — and the AI systems are proven to be reliable. “Otherwise, it could be exposed to liability for relying on a system that may disparately impact a group of people,” she says.