Why you shouldn’t use your face as your password

Opinion
Aug 28, 2018 | 5 mins
iPhone | Mobile | Mobile Security

You can now use your face to lock your smartphone. But just because you can doesn’t mean you should.


Companies like Apple and Samsung are replacing fingerprint scanners on smartphones and tablets with facial recognition systems. While that makes design sense, does it also make security sense?

What’s driving this change is the desire to make premium phones with what are known as ‘edge-to-edge’ displays. This means the front of the phone is all screen, free of the surrounding frame (known as the bezel). Without a bezel, however, there’s no place for a fingerprint sensor on the front of the phone. Samsung and others have tried moving it to the back; on the Galaxy S8 it sits right next to the camera lens, which frequently gets smudged as a result. It’s also simply not as convenient as a front-mounted sensor, and with consumer tech security, convenience is everything.

The other possible solution is integrating the sensor into the screen itself. That has turned out to be no simple thing: sensing the fingerprint beneath the glass of the display makes it significantly harder to capture an image of the necessary quality. Until that problem is solved, companies are turning to facial recognition to get the job done. Does it?

Unfortunately, there are several problems inherent in both the technology and faces themselves that suggest the answer is no.

The first is that, unlike fingerprints, faces change. Whether the cause is age, facial hair, illness, or weight gain, the result is the same: all of them make facial recognition less reliable. There’s also the issue of how the face is seen. While your facial features are intrinsic properties, the appearance of your face is subject to several factors, including pose (or camera viewpoint), illumination, facial expression, and occlusions (sunglasses or other coverings). In unconstrained scenarios where face image acquisition is not well controlled, or where subjects may be uncooperative, these factors confound the performance of face recognition.

Moreover, there may be similarities between the face images of different people, especially if they are genetically related. Such similarities further compound the difficulty of recognizing people based on their faces.

And this is before you get into the well-documented problems facial recognition has with race and gender. As Joy Buolamwini of MIT and Timnit Gebru of Microsoft found in their research on three commercial software systems: “Darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.” The problem is so persistent that Microsoft is calling for government regulation to deal with it. When was the last time you heard of a tech company doing that?

Then there’s the issue of lighting and smartphone facial recognition.

Cameras on the screen side of phones aren’t as powerful as those on the back, which makes them more reliant on good lighting to produce a quality image. Backlighting in particular poses a big problem. Apple’s iPhone X uses special illuminators in its FaceID system to counter this, with varying degrees of success: some reviewers reported problems using it in direct sunlight but noted that overall it performed better than expected.

Samsung is hoping to improve facial recognition by pairing it with a type of iris scanner on its latest devices. The combined system is called “Intelligent Scan” and includes what the company calls Eyeprint Verification. It works by first scanning your face, then moving on to the iris if that initial authentication fails. If conditions aren’t good for either method on its own, it combines the two to unlock your device. It isn’t clear from the company’s literature whether this system uses true iris scanning, which is very secure. It is telling, however, that the company chose to include a second biometric element rather than rely solely on facial recognition.
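The fallback behavior described above can be sketched in a few lines. This is a hypothetical illustration of the flow as the article describes it (face first, then iris, then a combined attempt); the function and stage names are assumptions for clarity, not Samsung’s actual API.

```python
# Hypothetical sketch of the "Intelligent Scan" fallback flow:
# try face, then iris, then a fused attempt. Names are illustrative.

def intelligent_scan_unlock(scan_face, scan_iris, scan_combined):
    """Try each biometric stage in order; unlock on the first success."""
    if scan_face():        # fast, but sensitive to lighting
        return True
    if scan_iris():        # slower, but works better in low light
        return True
    # Conditions poor for either method alone: combine both signals.
    return scan_combined()

# Example: the face scan fails (say, the user is backlit), the iris succeeds.
unlocked = intelligent_scan_unlock(
    scan_face=lambda: False,
    scan_iris=lambda: True,
    scan_combined=lambda: False,
)
print(unlocked)  # True
```

The point of the chain is that each stage only runs when the previous one fails, so the common case stays fast while harder conditions still get a second and third chance.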

Facial recognition is likely the easiest biometric to spoof. Early versions on phones were fooled by a photograph. Apple’s FaceID now uses 3D depth maps to register and verify the physical features of the device holder, which makes it considerably harder to fool, as it requires hackers to reproduce a physical representation of a target’s face. It also uses machine learning to analyze your expression whenever it sees your face, allowing it to judge whether an unlock attempt is authentic. Further, it doesn’t work if you’re not awake. Even with all that, Apple still provides another security check, requiring a good, old-fashioned PIN code to prevent someone from siphoning data from a phone unlocked with FaceID.
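The layered checks described here can be sketched as a simple gate: a 3D depth match, an attention (awake-and-looking) check, and a PIN fallback. This is a minimal sketch under stated assumptions; the threshold, attempt limit, and parameter names are illustrative, not Apple’s implementation.

```python
# Hypothetical sketch of layered FaceID-style checks: 3D depth match
# AND a liveness/attention check must both pass, with a PIN fallback.
# Threshold and attempt limit are assumed values for illustration.

MAX_FACE_ATTEMPTS = 5  # assumed retry limit before the PIN is required

def unlock(depth_match_score, attention_detected, attempts, enter_pin):
    """Return True if the device unlocks."""
    if attempts < MAX_FACE_ATTEMPTS:
        # Both the 3D geometry match and the liveness check must pass;
        # a flat photo fails the first, a sleeping user fails the second.
        if depth_match_score > 0.9 and attention_detected:
            return True
    # Otherwise fall back to the old-fashioned PIN.
    return enter_pin()

print(unlock(0.95, True, attempts=0, enter_pin=lambda: False))   # True
print(unlock(0.95, False, attempts=0, enter_pin=lambda: False))  # False: asleep, no PIN entered
```

Note that the two biometric signals are combined with AND, not OR: adding the attention check can only make the system stricter, which is exactly the point of layering.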

The ubiquity of photographs means that, as likely as not, there’s a photo of you on the internet, accessible to anyone who cares to look for it. Because phone cameras keep improving, it’s likely those photos are high-resolution, which makes it much easier for a stranger to develop a spoof that can fool a facial recognition system. By contrast, few people have fingerprint images available online, and far, far fewer (possibly none) have iris or retinal scans.

All of this is why people should hesitate before moving to any system that relies solely on facial recognition. Facial recognition works best as part of a multi-factor authentication approach, and even then it is a far weaker factor than either fingerprints or iris and retinal scanning.

Contributor

John Callahan, Chief Technology Officer at Veridium, is responsible for the development of the company’s world class enterprise-ready biometric solutions, leading a global team of software developers, computer vision scientists and sales engineers. He has previously served as the Associate Director for Information Dominance at the U.S. Navy’s Office of Naval Research Global, London UK office, via an Intergovernmental Personnel Act assignment from the Johns Hopkins University Applied Physics Laboratory. John completed his PhD in Computer Science at the University of Maryland, College Park.

The opinions expressed in this blog are those of John Callahan and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.