Most organizations are pretty good at vetting job applicants up front. They interview candidates, contact references, and in many cases conduct at least rudimentary background checks to surface any issues of concern before making a hiring decision.
Government security agencies go several steps further; just ask anyone who's filled out an SF-86 and then waited while investigators delved into youthful indiscretions, overseas trips and contacts with foreigners.
But it's also true that most government and private-sector organizations operate on the principle of "Once you're in, you're in." Few of them have anything remotely resembling a continuous monitoring program for current managers and staff, let alone for contractors and vendors. And yet virtually every day brings fresh news of a data breach, intellectual property theft, or other adverse event either instigated or abetted by a supposedly trusted insider.
Continuous trustworthiness
Years ago, I started writing about a concept I call continuous trustworthiness: the idea that in a world of growing asymmetric threats, an organization has not just a right but an obligation to regularly and systematically re-evaluate whether the individuals involved in its most sensitive operations pose any kind of risk to its systems, data, facilities or people. It's something I continue to believe is desperately needed today.
What if your financial advisor were going through a personal bankruptcy or had multiple DUIs? Would you entrust him with your life savings? This is not an idle hypothetical: In 2014, The Wall Street Journal identified 1,600 brokers with bankruptcy filings or criminal charges that weren't publicly reported. Their clients had no way of knowing.
Continuous trustworthiness is, to my mind, a data-informed, analytical way to dynamically prioritize (and reprioritize) the risk a person's actions pose to an enterprise. It requires that we first build a mathematical model with predetermined thresholds for what trustworthy behaviors and characteristics (and threatening ones) look like. Then relevant data can be identified and applied to the model so that significant issues, such as a felony arrest, become known, or so that deviations from a person's normal life patterns can be detected early, perhaps even enabling a manager to offer help to an employee going through difficulties.
The data can take many forms. Various kinds of risk-indicating behavior can be discerned in financial, criminal and other public records, as well as in internal repositories such as performance reviews, travel records and badge scans. The problem is that as more data comes in, more humans are needed to analyze it. Everyone quickly gets overwhelmed, and paralysis (or at least corner-cutting) sets in. What's needed is a mechanism to automate the process and run it continuously at machine scale, not human scale.
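To make the idea concrete, here's a minimal sketch, in Python, of what such a threshold-based model might look like. The indicators, weights and thresholds below are entirely hypothetical placeholders, not a validated model:

```python
# Hypothetical risk indicators and weights; a real model would be
# built and validated per role, with legal and privacy review.
INDICATOR_WEIGHTS = {
    "felony_arrest": 50,
    "personal_bankruptcy": 20,
    "unreported_foreign_travel": 25,
    "after_hours_badge_anomaly": 10,
    "negative_performance_review": 5,
}

REVIEW_THRESHOLD = 30   # flag for a human analyst
ALERT_THRESHOLD = 60    # escalate immediately

def risk_score(indicators: set[str]) -> int:
    """Sum the weights of all indicators observed for one person."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in indicators)

def evaluate(person_id: str, indicators: set[str]) -> str:
    """Map a score to an action tier; re-runs whenever new data arrives."""
    score = risk_score(indicators)
    if score >= ALERT_THRESHOLD:
        return f"{person_id}: ALERT (score {score}), escalate to security"
    if score >= REVIEW_THRESHOLD:
        return f"{person_id}: REVIEW (score {score}), route to an analyst"
    return f"{person_id}: OK (score {score})"

# Example: a new record arrives and the evaluation re-runs automatically.
print(evaluate("analyst-042", {"unreported_foreign_travel",
                               "after_hours_badge_anomaly"}))
```

A production system would combine indicators statistically rather than additively, and each role would get its own model, but the key point is that the evaluation re-runs automatically whenever new data arrives, with humans pulled in only for the flagged cases.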
The technology for this exists today. A financial services company can automatically alert customers to possible fraudulent activity on their credit cards. So why don't government agencies have a tool that instantly detects when a person with a Top Secret clearance has purchased a one-way ticket to China without advance notice? And Uber's remarkable app not only automates hailing a car and paying for the ride, it also continuously refreshes each driver and customer rating. Which raises the question: Why do conventional taxi companies evaluate their drivers only once (if that), and why don't drivers have a way of knowing whether the latest passenger might pose a threat?
In practical applications of continuous trustworthiness, doctors would be analyzed using different models than, say, financial advisors. And both would be different from the Top Secret-cleared analysts working in sensitive positions. But what these models would have in common is encoding the characteristics of trust and allowing users to apply data to continuously evaluate an individual’s risk.
The value of continuously evaluating employees
The real value of continuous trustworthiness is in using it every day: to prevent an analyst from exploring network drives that contain sensitive data she shouldn't be accessing when her recent activities indicate a trend toward higher risk, or to curtail the system access of a trader who has received three speeding tickets in the past two months, a sign of risk-seeking behavior and poor judgment in his personal life that may spill over into workplace decisions.
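As a sketch of how that day-to-day gating might work, again with hypothetical names, scores and limits, an access check could consult a person's recent risk scores and deny access when the trend points upward:

```python
def risk_trend(scores: list[int]) -> float:
    """Average change between consecutive scores; positive means rising risk."""
    if len(scores) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(scores, scores[1:])]
    return sum(deltas) / len(deltas)

def may_access_sensitive_share(recent_scores: list[int],
                               trend_limit: float = 5.0,
                               score_limit: int = 60) -> bool:
    """Deny access when the current score is high or risk is trending upward."""
    current = recent_scores[-1] if recent_scores else 0
    return current < score_limit and risk_trend(recent_scores) < trend_limit

# A trader whose weekly scores have been climbing gets curtailed access.
print(may_access_sensitive_share([10, 25, 45]))  # False: rising trend
print(may_access_sensitive_share([12, 11, 13]))  # True: stable and low
```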
It seems that every time there's a major insider event, we later learn it was possible to conclude from readily available evidence that the individuals behind it were high risk, or at least were deviating from their normal patterns of life. I get that forensics is always easier than detection, but the time has come to pay serious attention to new ways of identifying and preventing such threats early on; in other words, to be more predictive.
I certainly am not suggesting that we need a surveillance state in which everything a person does is subject to persistent collection and analysis. In fact, quite the opposite: We must do this while protecting civil rights and liberties and avoiding indiscriminate surveillance. But I do believe we can strike a balance, using analytics to continuously verify that people in trusted jobs don't pose a financial risk, a risk to the safety of those around them, or a risk to national security. We should not rely solely on initial vetting, infrequent personnel reviews or manual analysis, which are manifestly inadequate to the task.