The thorny issue of verifying humans

How a more probability-based approach to verification makes for better customer engagement.


Peter Steiner's old New Yorker adage, “on the Internet, nobody knows you’re a dog,” has never been truer. As an organization building an identity system for external users, you need to know who you are dealing with. But proving that you are who you say you are, in an online age, is proving to be the conundrum of the century. Identity assurance has become the digital equivalent of picking at a sore. We keep trying different ways of doing it, scratching away at the surface of what makes me, me, and not you. But in the end, it still seems to come down to onerous checks of a few artifacts, then passing or failing that person. This binary approach to measuring identity is not natural, and it isn’t working. This is exemplified by the latest figures from the UK Government’s Verify identity scheme, which requires assurance at a high level and showed a failure rate of 54% in July.

We need to up our game on verification to build successful, engaging, and useful Customer Identity Access Management (CIAM) systems.

Online identity systems do not need to reinvent the wheel; they need to replicate it. And the first stone that needs to be overturned is the thorny issue of verification.

The probability of pulling the thorn

At present, verification works like an on/off switch: you either pass or fail to reach the required ‘level.’ The concept of levels is the issue. A level of assurance (LOA) is the idea that an individual can achieve a set level (an integer) by providing proof of their identity. But in its latest advisory, Special Publication 800-63-3, Digital Identity Guidelines, NIST finally retires the concept of the LOA. Instead, NIST has broken the LOA down into three separate, but still integer-based, properties:

For non-federated systems:

  • Identity Assurance Level (IAL) - Identity proofing (verification)

PLUS

  • Authenticator Assurance Level (AAL) - the strength of the credentials associated with the identity

Federated systems will also need an additional factor:

  • Federation Assurance Level (FAL) which hinges upon the strength of an assertion
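To see why this decomposition matters, the three levels can be modeled as independent dimensions rather than one combined number. The sketch below is illustrative only: the class name, fields, and level values are assumptions for the example, not anything defined by NIST's documents, which specify requirements in prose.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AssuranceProfile:
    """SP 800-63-3 style: separate levels instead of a single LOA integer."""
    ial: int                  # Identity Assurance Level: strength of identity proofing (1-3)
    aal: int                  # Authenticator Assurance Level: strength of the credential (1-3)
    fal: Optional[int] = None # Federation Assurance Level, only for federated systems

    def meets(self, required: "AssuranceProfile") -> bool:
        """Check each dimension independently rather than one combined level."""
        if self.ial < required.ial or self.aal < required.aal:
            return False
        if required.fal is not None:
            return self.fal is not None and self.fal >= required.fal
        return True

# A user proofed at IAL2 holding an AAL2 credential, non-federated:
user = AssuranceProfile(ial=2, aal=2)
print(user.meets(AssuranceProfile(ial=2, aal=2)))  # True
print(user.meets(AssuranceProfile(ial=3, aal=2)))  # False: proofing too weak
```

The point of the structure is that strong identity proofing cannot compensate for a weak credential, or vice versa; each dimension is checked on its own.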

NIST, however, has not gone far enough. The problem is this: identity, and many of the attributes that make up who you say you are, have fuzzy properties and can be pliable. Identity isn’t made up only of static attributes, like your name and social security number. It is also made up of things that change over time. In fact, it is these fickle attributes that can add some of the greatest value to your relationships with customers: value such as people’s preferences and interests.

Fuzzy properties and changeable attributes, as it turns out, are a useful way of looking at how to verify a user. This is about probability. To improve verification, we need to take a leaf out of fraud detection’s book. Fraud detection is, for the most part, based on rules and odds: a probability that you are a fraudster. The value of machine learning as a technique in this area is demonstrated by Visa’s investment in it.

Online identity verification can use the same type of approach. Verification needs to be viewed as this:

  • An ongoing process – make it easier for a customer to get initially verified. They may not have access to all of your resources, but they are now engaged with your brand. As time goes on and you build up customer trust, you can request further details or look at user analytics; the assurance builds further and, in turn, you can open up new areas to the customer.
  • Multiple-choice systems – use multiple data sources for verification and give people a choice. Mass-adopted consumer identity systems need options. Not everyone is the same. Not everyone has a mobile phone. Not everyone has a passport.
  • Smart verification – probability is a stronger basis for verifying someone online than binary pass/fail checks.
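The three points above can be sketched together as a running confidence score: each verification signal shifts the odds that the person is genuine, and access tiers open up as confidence grows, instead of a single pass/fail gate. Everything here is hypothetical for illustration; the signal names, likelihood values, and thresholds are made up, and real systems would calibrate them against fraud data.

```python
# Hypothetical signals, each mapped to an illustrative likelihood ratio in
# favour of "this person is who they claim to be". Values are invented.
SIGNAL_STRENGTH = {
    "email_confirmed": 2.0,
    "phone_confirmed": 3.0,
    "passport_checked": 20.0,
    "address_matched": 4.0,
    "consistent_usage_6mo": 5.0,  # behavioural signal built up over time
}

def confidence(signals, prior_odds=0.1):
    """Naively combine independent signals into P(genuine) via odds multiplication."""
    odds = prior_odds
    for s in signals:
        odds *= SIGNAL_STRENGTH.get(s, 1.0)
    return odds / (1.0 + odds)

def access_tier(p):
    """Open up resources gradually instead of a one-shot pass/fail check."""
    if p >= 0.99:
        return "full"
    if p >= 0.9:
        return "standard"
    if p >= 0.5:
        return "basic"
    return "browse-only"

# Day one: a confirmed email is enough to engage, with limited access.
print(access_tier(confidence(["email_confirmed"])))  # browse-only
# Months later, more signals have accumulated and more of the service opens up.
print(access_tier(confidence(
    ["email_confirmed", "phone_confirmed", "consistent_usage_6mo"])))  # basic
```

The design choice worth noting is that no single check is decisive: a customer who never supplies a passport can still climb tiers through accumulated behavioural signals, which is exactly the "multiple choice" property the list above asks for.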

What’s in it for the verified me?

Going back to the figures from the UK Government, imagine a commercial company turning away over half of its potential customers. The reason for this level of failure is likely the onerous nature of the checks done during registration, coupled with the fact that those checks have to be done in one fell swoop before any access is allowed. We need to change the UX of verification, use machine learning and multiple data sources, and be smart about how we co-opt our customers into our brand.

Humans have spent the past 100,000+ years designing a system whereby you build up trust (assurance) over time. It hasn’t stopped us communicating with new people, but it is a pretty good way to spot fraudsters and know who you are dealing with. And an added benefit: the dog you’ve been talking to for the past six months may well turn out to be a dog after all.

