Deepfakes and synthetic identity: More reasons to worry about identity theft

Opinion
Oct 02, 2019 | 6 mins
Authentication, Fraud, Identity Management Solutions

How can we maintain control over digital identity in a world where it is being blurred and abused by fraudsters?

An insurance company recently reported a successful scam against one of its clients, a new and improved version of CEO fraud. A British CEO was tricked into transferring $240,000 to a fraudster. The scammers used a technique known as a deepfake, in this case an AI-mimicked voice, to make the CEO believe he was dealing with a legitimate person.

Some doubt the insurance company’s account, but deepfake technology is perfect for scams because it leverages the trust we have in our relationships. Trust is a crucial aspect of any transaction both off- and online, which arguably makes deepfakes the most dangerous addition to the cybercriminal’s toolkit when it comes to identity theft.

Identity theft storm a-brewing

Digital identity is a market driver and, according to McKinsey, can increase a country’s overall economic value by up to 13%. However, Javelin Strategy reported that the identities of 16.7 million US adults were stolen in 2017. In 2018, imposter scams were the most frequent complaint made to the Federal Trade Commission (FTC).

Digital identity works both ways. One way is legitimate use, allowing consumers and staff to carry out online and offline tasks. The converse is fraudulent use, where digital identity is turned against us as individuals and against our businesses.

This dichotomy arises from the power invested in digital identity as it becomes ubiquitous. We now perform more verification and anti-fraud checks on individuals to reduce fraud. Yet at the same time, cybercriminals are developing other ways of skinning the identity cat. As new technologies like facial recognition emerge to augment our identity offerings, new security challenges will emerge, too.

Synthetic vs. real identity theft

The two distinct cases where our digital identity services facilitate fraud are verified identity and synthetic identity.

What is verified identity?

A verified identity is increasingly tied to verification and authentication technologies that create the assurance levels needed to perform valuable transactions. This is a costly exercise for the service performing the checks: a Thomson Reuters Know Your Customer (KYC) survey estimated that KYC processes cost some financial institutions as much as $500 million per year. The KYC process is also time consuming for the consumer, and getting online verification right is a tricky business. However, a checked identity can be a valuable asset for both consumer and service, if the identity is reusable across federated services.

A verified (real) identity is, by the same token, a valuable commodity to a cybercriminal. This is evidenced by the fact that ID fraud plagues financial transactions, with online fraud rising 55% in 2018.

What is synthetic identity?

Synthetic identity is the technique of creating “blended” identities. Synthetic IDs are cobbled together from fragments of identifying data picked up from data breaches; the first half of 2019 saw a 54% increase in data breaches, so there is plenty of data to choose from. Cybercriminals also use a process of “channel separation,” whereby they take multiple pieces of ID data (e.g., names, birth dates, Social Security numbers) and mix them up to avoid detection.
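To make “channel separation” concrete, here is a purely illustrative Python sketch. All of the data below is fabricated, and the point is only to show how attributes mixed across breached sources stop matching any single real person, which is what makes synthetic IDs hard to detect:

    import itertools
    import random

    # Fragments as they might appear in separate breach dumps (all fabricated)
    names = ["Alice Smith", "Bob Jones", "Carol White"]
    birth_dates = ["1984-03-12", "1990-07-30", "1975-11-02"]
    ssns = ["000-12-3456", "000-98-7654", "000-55-1212"]  # placeholder values

    def synthesize(count=3, seed=0):
        """Recombine attributes across sources so that no assembled record
        corresponds to any one real person: "channel separation"."""
        rng = random.Random(seed)
        combos = list(itertools.product(names, birth_dates, ssns))
        rng.shuffle(combos)
        return [{"name": n, "dob": d, "ssn": s} for n, d, s in combos[:count]]

    for identity in synthesize():
        print(identity)

Each printed record looks plausible in isolation, yet no single breach victim’s file matches it, which is exactly what defeats naive lookup-based fraud checks.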

From a fraud perspective, which is better: verified or synthetic? The answer, of course, is that it depends on the scam. If a cybercriminal can use a verified identity, they are more likely to succeed in the crime. However, verified identities are hard to take over if created in a robust manner, so fraudsters have instead found ways to create their own verified but fraudulent identities. This is a challenge for services offering high-assurance IDs.

Synthetic identities are successful, but a real, verified identity is a far more useful and powerful piece of kit. Better still, from the fraudster’s point of view, is using the very system that is supposed to assure an identity to create verified but fake ones.

Deepfakes and verified identity

Identity and biometric data like facial and voice recognition are increasingly used in the creation and use of digital identities. Facial recognition appears across many digital identity use cases, including identity checks during account registration: mobile app vendor Yoti, for example, uses facial recognition at registration to verify a user’s identity.

Facial recognition is also used for transaction authentication, as in the case of Apple’s Face ID. Voice is being used in digital assistants like Amazon Echo to share identity data, through services from companies such as Avoco Secure.

Voice and face are a natural fit for consumer digital identity use cases. Biometrics can make the user experience easier and even fun. However, biometrics is also a natural fit for fraudsters. Deepfake technology turns biometrics on its head, using the inherent trust we place in a face or a voice to build better and more effective phishing campaigns. Unless we put safeguards in place, deepfakes will wreak havoc on the digital identity industry.

The time is not far off when deepfakes will be used to create a verified credential that is then used to build a high-assurance identity. This will take synthetic IDs to new heights, crossing the chasm into verified-identity land. Once that happens, online transactions could be even more open to abuse: in effect, such identities will be fraudulently legitimate.

Leaving identity fraud behind

Digital identity, by its very nature, is a central point of failure in any online transaction. Ensuring high levels of assurance within the complex structural requirements of consumer systems is a serious challenge; the requirement for delegated account access and sharing alone poses difficulties in this area.

If we continue to incorporate face and voice biometrics into our digital identity platforms and services, we must do so with deepfake fraud in mind, and we must continue to harden our digital identity application design against all manner of attacks. Deepfakes will create challenges both for systems that use biometrics to verify identity and perform identity checks and for those that use biometric-assisted transaction authentication.

Face and voice offer digital identity system designers a user-accessible way to verify and transact. They offer a much-needed option for some disabled users too, removing the need for a keyboard. However, we need to ensure that “biometrically secure by design” is part of our remit.
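What might “biometrically secure by design” look like in practice? One common countermeasure is a challenge-response liveness check, which asks the user to perform an unpredictable action so that a pre-recorded or synthesized deepfake cannot simply be replayed. The Python sketch below is a minimal illustration, not a production design; capture_response and passes_liveness are hypothetical stubs standing in for a real biometric SDK:

    import random
    import secrets

    CHALLENGES = ["blink twice", "turn your head to the left", "read these digits aloud"]

    def issue_challenge():
        """Pick an unpredictable action plus a one-time nonce that binds the
        response to this session, so a recorded response cannot be reused."""
        return random.choice(CHALLENGES), secrets.token_hex(8)

    def capture_response(challenge, nonce):
        """Hypothetical stub: a real system would record live video/audio of
        the user performing the challenge and attach the session nonce."""
        return {"challenge": challenge, "nonce": nonce, "frames": b"..."}

    def passes_liveness(response, challenge, nonce):
        """Hypothetical stub: a real implementation would run liveness-detection
        and face/voice-matching models over the captured frames."""
        return response["challenge"] == challenge and response["nonce"] == nonce

    challenge, nonce = issue_challenge()
    response = capture_response(challenge, nonce)
    print("accept" if passes_liveness(response, challenge, nonce) else "reject")

The design point is that the secret is not the biometric itself but the unpredictability of the challenge: even a convincing deepfake of a face or voice fails if it cannot respond to an action chosen at verification time.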

Contributor

Formerly a scientist working in the field of chemistry, Susan Morrow moved into the tech sector, co-founding an information security company in the early 1990s. She has worked in cybersecurity and digital identity ever since and has helped to create award-winning security solutions used by enterprises across the world.

Susan currently works on large-scale citizen and consumer identity systems. Her focus is on balancing usability with security. She has helped to build cutting-edge identity solutions that expand the boundaries of how identity ecosystems are designed. She has worked on a number of government projects in the EU and UK. She is also interested in the human side of cybersecurity and in how our own behavior influences the cybercriminal.

The opinions expressed in this blog are those of Susan Morrow and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
