6 reasons biometrics are bad authenticators (and 1 acceptable use)

Biometrics-only authentication is inaccurate, hackable and far from foolproof.


For reasons I don’t completely understand, much of the world seems to be in love with biometrics: not only users of expensive smartphones and laptops, but even seasoned security professionals in charge of guiding the world’s future authentication solutions. I was recently in a consortium meeting dedicated to establishing security standards for future authentication, and I was surprised by how many attendees felt that biometrics were the be-all and end-all of secure authentication. That is simply not the case.

I’ve never been a fan of biometrics, but now I all-out hate them. Why? Because they are horrible at authenticating people. Here is what I mean:

Biometrics are horribly inaccurate

Most people think that biometrics are incredibly accurate, because that’s the way they are sold. You’re told, “No one else has your fingerprints, retina, hand print” or whatever. While that might be nearly true, the stored representation of your biometric attribute is nowhere near as detailed or unique as the real biometric factor being measured.

While your fingerprint might be (nearly) unique in the world, what is stored and subsequently measured during authentication is not. Your fingerprint (or iris, retina, face, etc.) is not stored and measured as a highly detailed picture. What is stored and evaluated is a measurement of various defining characteristics (“points”) of that biometric identity.

For example, your fingerprints are turned into a series of points noting where major ridges, valleys and sharp changes occur. These big deviations are marked with points, so the fingerprint that is stored and evaluated looks much more like a star constellation than a real fingerprint.
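The constellation analogy can be sketched as approximate point-set matching. Everything below is illustrative: real minutiae matchers also account for rotation, translation and ridge direction, which this toy version deliberately ignores.

```python
# Toy sketch (assumption, not a real matcher): a fingerprint "template"
# as a list of minutiae points, matched approximately within a tolerance,
# much like comparing star constellations.
from math import hypot

Point = tuple[float, float]

def match_score(stored: list[Point], probe: list[Point], tol: float = 2.0) -> float:
    """Fraction of stored minutiae that have a probe point within `tol`."""
    if not stored:
        return 0.0
    hits = 0
    for (sx, sy) in stored:
        if any(hypot(sx - px, sy - py) <= tol for (px, py) in probe):
            hits += 1
    return hits / len(stored)

stored = [(10, 10), (25, 40), (60, 15), (45, 70)]
# Same finger, slightly shifted by scan noise -- still a perfect score:
probe = [(11, 9), (24, 41), (61, 16), (44, 69)]
print(match_score(stored, probe))  # 1.0
```

Note that matching is a score against a tolerance, not an exact comparison: this is why two scans of the same finger can both pass while never being bit-for-bit identical.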

The device and software that record the original biometric attribute can only collect so much detail. At some point the reader/scanner cannot see the fine detail without it becoming blurred, but in most cases it can see far more detail than would ever be useful. Everyone’s fingerprint has very tiny “micro-changes”: some are part of the actual fingerprint, but most are temporary, such as cuts, abrasions and wear patterns.

If a fingerprint reader recorded every bit of fine detail that it could, it’s very likely that the fingerprint stored today would not match your fingerprint tomorrow. The same goes for your face, iris, retina or any other biometric attribute. Instead, biometric readers and verifiers are forced to “de-tune” themselves to be less accurate than they otherwise could be. In fact, this de-tuning is usually so aggressive that a supposedly unique biometric factor ends up matching far more unrelated stored values.

At my full-time company, we have only about 700 employees. Already, we have several fingerprint “matches” that require the later-enrolled employee to use different fingers than the original person in an attempt to find a “unique” fingerprint.
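Collisions in a company of only 700 people are exactly what the birthday problem predicts once templates are de-tuned. The figure of 100,000 effectively distinguishable templates below is an invented assumption for illustration, not a measured property of any real product.

```python
# Birthday-problem sketch: if de-tuned templates can effectively
# distinguish only `patterns` distinct values (an assumed number),
# how likely is at least one collision among `people` employees?
def p_collision(people: int, patterns: int) -> float:
    """Probability that at least two people share an effective template."""
    p_unique = 1.0
    for i in range(people):
        p_unique *= (patterns - i) / patterns
    return 1.0 - p_unique

# With the assumed 100,000 distinguishable patterns, 700 employees
# are already overwhelmingly likely to produce a "match":
print(round(p_collision(700, 100_000), 2))  # 0.91
```

The exact number of distinguishable patterns is a guess, but the shape of the math is the point: collision probability grows with the square of the population, so even modest de-tuning produces matches long before the population gets large.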

If the fingerprint reader were tuned finely enough to see the true differences that are certainly there, it would cause far too many false rejections. There are already far too many false rejections even with the systems de-tuned on purpose. Have you ever sat behind someone trying to use their fingerprint (or eyeball) to get past a biometrically protected door? The person presents their finger (or eyeball) over and over, in different ways, trying to get the reader to finally accept it. The reader, for its part, scans the presented biometric identity over and over, as fast as it can, before it has to issue an approval or denial. Heck, if people knew how many times their submitted biometric identity was evaluated and denied during each submission, they would truly understand how inaccurate biometric systems really are.
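The de-tuning trade-off can be made concrete with a toy threshold calculation. The similarity scores below are invented for illustration; in real systems these rates are reported as the false reject rate (FRR) and false accept rate (FAR).

```python
# Toy sketch (assumed scores, not real data): a reader compares a similarity
# score against an acceptance threshold. Moving the threshold trades false
# rejections of legitimate users against false accepts of impostors.
def frr(genuine_scores, threshold):
    """False reject rate: genuine attempts scoring below the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(impostor_scores, threshold):
    """False accept rate: impostor attempts scoring at/above the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

genuine = [0.91, 0.85, 0.78, 0.60, 0.95, 0.88]   # same finger, noisy scans
impostor = [0.20, 0.35, 0.55, 0.10, 0.42, 0.65]  # different fingers

# A strict threshold rejects legitimate users (high FRR, zero FAR):
print(frr(genuine, 0.80), far(impostor, 0.80))
# A "de-tuned" looser threshold lets impostors through (zero FRR, higher FAR):
print(frr(genuine, 0.50), far(impostor, 0.50))
```

There is no threshold that makes both rates zero when the score distributions overlap, which is why vendors must pick a point on the curve and live with the failures it implies.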

Biometrics can’t be done by everyone

If you have ever deployed a large-scale biometric system (tens of thousands to hundreds of thousands of users), then you learn that, for a variety of reasons, some people will never be able to use a particular biometric attribute to authenticate. I’m not talking about people with glass eyeballs or no fingerprints, but people who, for reasons we don’t always understand, can never get a subsequent submission to match their enrolled biometric entry.

There is something unique about their bodies that makes their biometric attribute change so much, possibly minute to minute, that they can never be successfully authenticated using that attribute. These people are made exceptions to the system and allowed to authenticate using some other method.

Biometrics are not secret

Biometrics are not secrets like a password or private encryption key. Your biometrics are often all over the place (e.g., fingerprints or face) or can be captured by anyone following you around. There is no other type of authentication that is so readily visible and accessible. This leads to the other issues.

Biometric data is easy to copy

The biggest problem with a non-secret authentication factor is that it is easy to copy for malicious reuse. Your fingerprints and face are literally everywhere and can easily be captured, copied and reused. Once they have been captured by someone else, how can any system relying on those biometric attributes trust that you are who you say you are?

For example, in June 2015 more than 5.6 million U.S. fingerprint records were stolen by a Chinese advanced persistent threat (APT). Anyone who had ever applied for a U.S. security clearance had their fingerprints stolen. Mine were stolen. My wife, who as a teenager in 1984 had applied to work in a shipyard, had her fingerprints stolen. Everyone working in the FBI, CIA and NSA had their fingerprints stolen. Our spies could now be identified by their fingerprints.

This was a big deal at the time. Even President Obama got involved, and after some negotiations the Chinese government arrested some hackers and purportedly gave us back our fingerprint databases. I’m sure they didn’t copy them before handing them back…yeah, right!

I’ve given my fingerprints many times to different entities, including the U.S. government, the state sheriff (as part of my concealed weapons permit), and my employer when I was applying to work on a government project. In each of those scenarios, I had to sign a document acknowledging that my fingerprints would be shared with other agencies. Even though I’ve committed no crime, I’m sure my fingerprints are in the U.S. national law enforcement fingerprint database, the Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is accessible by hundreds of thousands of organizations and agencies. If any of them are compromised (I’m sure plenty have been), my fingerprints can be stolen from there.

A picture of my face can be captured not only anywhere I go, but also from any of the facial recognition databases that agencies around the world are creating. The FBI alone can search over 400 million faces across the various databases it has access to.

My iris and retina images can be captured and stored by any system that has ever scanned them, and they can be stolen from any device where I use a biometric to unlock it. It’s even worse than that. Several groups are developing “inclusive” biometric databases, intended to hold a biometric sample for every possible biometric entry in the entire population. The idea is to let these organizations present a biometric entry that will be accepted by biometric systems as if the legitimate user had presented it.

When I think about this, I picture a James Bond-type spy who, when faced with a biometric logon prompt, holds up his USB-key-sized biometric-presenting device and logs on as the intended user. An “iPod of biometrics”: one device to be anyone.

Biometrics are easy to fool

I’m tired of hearing how hard a particular biometric system is to fool. Vendors tout these supposedly impossible-to-hack systems, often requiring 3D or temperature sensors, to make it “impossible” for a copied or stolen biometric attribute to be reused by another person. Then, within 24 hours, it seems some kid has a video on YouTube showing how they easily fooled the biometric reader with a cheap fake. I’ve been involved in a few biometric hacking projects, and no biometric product I have ever evaluated was nearly as good as advertised. Old attacks using Silly Putty or cardboard pictures often worked. I once blew hot air across a fingerprint reader, reactivating the previous person’s fingerprint oils and letting me successfully log on as them.

Vendors will tout very tough-to-fool biometric scanners, but these are also usually very tough to use, even for legitimate users. They are slower and produce far more false rejections than other vendors’ more usable products. The accuracy claims of biometric products are a carnival sideshow sold to unsuspecting admins who don’t really understand how they work, or who do understand but accept that they aren’t all that accurate.

Biometric vulnerabilities are worse in remote scenarios

Let’s suppose we can live with inaccurate biometric identities. In today’s world, they seem to work. Tens of millions of people use them to successfully authenticate each day, but only because they work OK in two forgiving scenarios.

The first is on devices like your cell phone, where you really don’t need the very best security. Sure, you want to protect the information on your cell phone, but it’s not like your organization’s most precious information is stored on it. If an attacker gets it, they are far more interested in wiping your phone and reselling it than in the information stored on it.

The second scenario is biometrics used within a physical building where you work. This works because an attacker is unlikely to show up in person, posing as you, at a place where you work every day; showing up and trying to authenticate means they might be arrested. Because of that, biometric authentication there has a reasonable security/usability trade-off.

If the world went largely biometric, you can bet that hackers would start stealing, storing and selling biometric identities. Much of the authentication we do today is for remote logons: I don’t physically show up at each web server I log onto. If I’m allowed to log onto a resource remotely using biometrics, you can bet hackers will be all over that. They would use your stolen biometric just as they use a stolen or guessed logon name and password against many websites, without any real chance of being caught and punished.

Once they have your real biometric identity, what’s to stop them from using it for eternity? You can change your password. You can get a new multi-factor authentication token if it gets compromised. What are you to do if your fingerprints are stolen? There is nothing you can do. You have to live with the fact that your biometric attribute is compromised for the rest of your life and, I hope, let the systems that rely on that same biometric attribute know that it has been compromised.

What biometric authentication I accept

I feel better about biometrics when they are paired with at least one other authentication factor that is a real secret, for example, fingerprint readers that also require a PIN or a plugged-in smartcard. I’ll accept a biometric as an identifier, like typing in your logon name or email address, as an act of convenience, but not as the sole authentication secret proving that the person logging on is really you.
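That acceptable pattern can be sketched in a few lines. The function names and the PIN check below are illustrative assumptions, not any product’s API; the point is simply that the biometric narrows down who is claiming to log on, while a real secret does the actual proving.

```python
# Hypothetical sketch: biometric as identifier, secret factor as authenticator.
import hmac

def authenticate(biometric_match: bool, supplied_pin: str, stored_pin: str) -> bool:
    """Accept only when BOTH the biometric and the secret factor check out."""
    # The biometric alone only identifies the claimed user...
    # ...the secret PIN is what proves it (constant-time compare).
    pin_ok = hmac.compare_digest(supplied_pin, stored_pin)
    return biometric_match and pin_ok

# The biometric alone is not enough:
print(authenticate(True, "0000", "4821"))   # False
# Both factors together succeed:
print(authenticate(True, "4821", "4821"))   # True
```

A stolen fingerprint is useless to an attacker in this scheme unless they also steal the secret, and the secret, unlike the fingerprint, can be changed after a breach.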

Copyright © 2019 IDG Communications, Inc.
