Has fraud met its match?

New and dynamic authentication factors can help prevent identity theft.


Many prognosticators have pronounced privacy a pipe dream. With the mountains of personal information on social networks and the lack of security awareness by many users, cybercriminals have more than a snowball’s chance to grab anyone’s identity.

However, there are new ideas for counteracting identity theft that would take into account a person’s physical attributes to add another layer of security. The idea of using a fingerprint reader to log on to a smartphone isn't new, but the latest wrinkle is to incorporate the pressure with which that finger types on the phone.

More than 41 million Americans have had their identities stolen, and millions more have had their personally identifiable information (PII) placed at risk through a data breach, according to a Bankrate.com survey of 1,000 adults conducted last month.

Keir Breitenfeld, senior business consultant at Experian, said that the continued use of “shared secrets” or static data points, such as Social Security numbers, usernames and passwords, to verify identities and authenticate consumers creates a clear problem for users and companies alike – the perpetuation of fraud. “These pieces of PII are highly valuable, making them a top target for cybercriminals. A solution to this problem is the use of dynamic data, either on its own, or in combination with static factors,” he said.

Currently, 1.9 million records containing PII are compromised every day, leaving millions of people vulnerable to fraud. Additionally, according to Javelin’s 2017 Identity Fraud Study, identity fraud impacted 15.4 million victims in the United States in 2016, with the incidence rate increasing by 16 percent from 2015.

Breitenfeld said many companies use a form of authentication called identity element verification and validation. This traditional approach to authenticating individuals uses identity elements (for example Social Security number, date of birth, name, address) provided by an applicant and then compares these data points to data from trusted sources, such as credit bureaus. “Problematically, most of this data has already been stolen, making this form of authentication unreliable,” he said.

Ryan Zlockie, global vice president of authentication at Entrust Datacard, noted that one example of continuous authentication is the amount of pressure applied when typing, scrolling and swiping, which can be matched against the user’s typical behavior. Another pattern is the time spent on a session or transaction: the timing of the session contrasted with the actions completed can indicate whether answers are being cut and pasted or typed out by hand. Typing cadence can also serve as a behavioral authentication tool, recording exactly when each key is pressed and released as a person types at a computer keyboard. This cadence can be captured continuously, not just when a user first logs into a system or service.
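The cadence idea described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: it assumes an enrollment profile of typical dwell times (key press to release) and flight times (release to next press), and a made-up deviation threshold.

```python
# Hypothetical keystroke-cadence check. Profile values, event data and the
# threshold are illustrative assumptions, not a real product's parameters.

def extract_features(events):
    """events: list of (key, press_ms, release_ms) tuples in typing order."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def cadence_score(events, profile):
    """Mean absolute deviation from the user's profile; lower = closer match."""
    dwells, flights = extract_features(events)
    dwell_dev = sum(abs(d - profile["dwell_ms"]) for d in dwells) / len(dwells)
    flight_dev = sum(abs(f - profile["flight_ms"]) for f in flights) / max(len(flights), 1)
    return dwell_dev + flight_dev

profile = {"dwell_ms": 95, "flight_ms": 140}   # learned at enrollment
session = [("p", 0, 90), ("a", 230, 325), ("s", 470, 560)]
risky = cadence_score(session, profile) > 50   # step up authentication if too far off
```

A real system would score a rolling window of keystrokes throughout the session, which is what makes this factor continuous rather than a one-time login check.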

By layering in additional dynamic data that has little to no monetary value for cyber criminals, as opposed to relying solely on static information, companies have the potential to stop fraud, Breitenfeld added. Some of the new dynamic factors include:

  1. Biometrics – Authentication factors such as fingerprints and retina scans can be used to securely verify consumer identities, as these factors are more difficult for fraudsters to steal or replicate.
  2. IP address – Detecting if an account is being accessed from a new/unrecognized IP address can help stop fraud by challenging the user with additional authentication factors. Additionally, users can be notified if someone attempts to access their account from a new device.
  3. Location – Location is another way to verify users, and several companies already use this as an authentication factor for purchases. For example, if you live in Kentucky, but an item is purchased using your credentials in China, the transaction will either be blocked completely or flagged to the appropriate people.
  4. Selfies – Facial recognition software can be used to authenticate someone making transactions on his or her mobile device.
  5. Velocity checks - Checking the historical shopping patterns of an individual and matching that record against his or her current purchases for irregularities.
  6. Social media profiles – Analyzing a person’s social media and online accounts helps identify whether they are real. For instance, someone whose Facebook profile has been established for years with a high number of friends and consistent profile information is more likely to be authentic than someone with a profile that lacks breadth and depth, which can signify a false or newly created identity.
  7. Authorized user activity – Monitoring identities that are being added as “authorized users” to accounts is often predictive of fraud, specifically account takeover and the creation of synthetic identities. If the same “authorized user” is added to accounts belonging to several different people, it is likely a fraudulent identity.
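The first few factors in this list can be layered into a single risk decision. The sketch below is an assumption-laden illustration of that layering (the weights, thresholds and attribute names are invented), combining IP novelty, geolocation mismatch and a purchase-velocity check into one score.

```python
# Illustrative layering of dynamic factors into a risk decision.
# All weights and thresholds are invented for demonstration.

KNOWN_IPS = {"203.0.113.7"}      # IPs previously seen for this account
HOME_COUNTRY = "US"              # expected location, per the Kentucky example
TYPICAL_DAILY_SPEND = 120.0      # learned from the user's shopping history

def risk_score(ip, country, todays_spend):
    score = 0
    if ip not in KNOWN_IPS:
        score += 1               # new/unrecognized IP: factor 2 in the list
    if country != HOME_COUNTRY:
        score += 2               # geolocation mismatch: factor 3
    if todays_spend > 3 * TYPICAL_DAILY_SPEND:
        score += 1               # velocity check: factor 5
    return score

def decide(score):
    if score >= 3:
        return "block"
    if score >= 1:
        return "step-up"         # challenge with another factor, e.g. a selfie
    return "allow"

decide(risk_score("198.51.100.9", "CN", 900.0))  # unfamiliar IP, country and spend
```

The point of the layering is that no single dynamic factor blocks a user outright; only an accumulation of anomalies does.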

Zlockie added that another factor to examine is hack attack pattern matching, which can reveal an account takeover attempt by monitoring whether a user is rushing through the process and matching the speed of the attempt with similar attacks. He said mobile push and transaction signing are not new authentication tactics, but they are more secure than dated approaches that rely on passwords or static credit card CVV codes. They are more than just a way to authenticate to an application, as they can be positioned and applied to a variety of workflow automation use cases.

Besides facial biometrics, voice and iris recognition can also authenticate individuals based on their inherent physical traits. “Biometric authentication has expanded beyond the fingerprint for good reason: biological traits are non-transferable and provide a high level of protection against fraud. Voice and facial biometrics are flexible in that they can continually authenticate users throughout a session without alerting them that they’re being monitored,” Zlockie said.

He took the physical aspect a bit further in citing the use of an electrocardiogram (ECG), heartbeat or BioStamp that can turn a user’s heartbeat into a unique differentiator that authenticates his or her digital identity. Whichever system or service a person uses could gain real-time access to their vital signs in order to verify the user throughout the entirety of a session or transaction.

Zlockie said cognitive authentication is still in the research stages, but it collects multiple parameters to create a unique user profile. When a person is presented with a stimulus, such as a familiar photograph or song, the system measures his or her response using a variety of techniques, including EEG, ECG, blood volume pulse, electrodermal response, eye tracking and pupillometry. Cognitive authentication would then validate the user by matching the response to pre-recorded metrics.

What’s next?

Looking further into the future, to truly devalue data the industry needs to consider a more comprehensive approach to identity authentication – a hub of identities, Breitenfeld said. This centralization of information would combine dynamic factors with PII to create a centralized “consumer identity.” Companies would then request authentication of that specific identity, rather than requesting, sharing and ultimately storing consumer PII. “This removes the burden from a company collecting and being responsible for consumer PII that is unnecessary to the transaction, and having to risk the potential of being hacked and dealing with the consequences,” he said.

Don’t credit card companies already block irregular purchases though?

Breitenfeld admits that is the case, but what he emphasizes is checking the consistency of the identity elements used to open an account. “At Experian, we analyze more than 3 million identity transactions (not solely financial) a day, and over time we can start to see if elements like names, addresses, SSNs and dates of birth are being used consistently or not,” he said. “For example, if we see that one specific person’s information is used at a relatively normal velocity and consistency, and we can verify their identity, that’s a low risk of fraudulent activity. If, however, we begin to see that person’s name with five different addresses and SSNs, or we’re seeing high velocity of any one of these elements, that’s a bad sign. Overall, we’re looking for the consistent use of identities; deconstructing them down to the element-level enables us to see if they’re being used to perpetrate fraud.”
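The element-level consistency check Breitenfeld describes can be approximated by counting how many distinct addresses and SSNs appear alongside one name across recent applications. This is a toy sketch under that assumption; the field names, data and threshold are invented, not Experian's actual logic.

```python
# Toy element-level consistency check: one name seen with many distinct
# addresses or SSNs is treated as a bad sign. All data is fabricated.
from collections import defaultdict

def element_spread(applications):
    """applications: list of dicts with 'name', 'address' and 'ssn' keys."""
    spread = defaultdict(lambda: {"addresses": set(), "ssns": set()})
    for app in applications:
        spread[app["name"]]["addresses"].add(app["address"])
        spread[app["name"]]["ssns"].add(app["ssn"])
    return spread

apps = [
    {"name": "J. Doe", "address": "1 Elm St", "ssn": "xxx-xx-1111"},
    {"name": "J. Doe", "address": "9 Oak Ave", "ssn": "xxx-xx-2222"},
    {"name": "J. Doe", "address": "4 Pine Rd", "ssn": "xxx-xx-3333"},
]
s = element_spread(apps)["J. Doe"]
high_risk = len(s["addresses"]) >= 3 or len(s["ssns"]) >= 3  # inconsistent use
```

Velocity (how quickly these combinations appear) would be an additional dimension on top of the raw spread counted here.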

Experian also uses device risk assessments, a combination of specific device attributes, habits and associated identity elements, to verify the identity of the person making a purchase or logging into an account. For example, geolocation (a device attribute) helps ensure that the person is conducting a transaction (making a purchase or logging in) from an expected and/or regular location.

Another example of information that could be part of an identity hub is the attributes of operating systems. What if the language of the device’s operating system does not match what is expected for that specific identity? If one unique device is associated with different locations and languages, in addition to different personally identifiable information, there’s a clear problem, he said.

“So, this information, combined with their regular habits, creates a baseline of how people typically use their devices, and then compares that data to identify deviations. Everything from the resolution of the screen to the exact version of an operating system is a device attribute that can help identify if an account is being used fraudulently,” Breitenfeld noted.
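The baseline-and-deviation idea in the quote above can be sketched as a simple attribute comparison. This is a hypothetical illustration, assuming a stored baseline per identity; the attribute names are invented, not Experian's actual schema.

```python
# Hypothetical device-risk check: compare a session's device attributes
# against the baseline recorded for an identity and collect deviations.

BASELINE = {
    "os_version": "iOS 16.5",
    "os_language": "en-US",
    "screen_resolution": "1170x2532",
    "country": "US",
}

def device_deviations(session_attrs):
    """Return the attributes that differ from the identity's baseline."""
    return {k: v for k, v in session_attrs.items()
            if BASELINE.get(k) != v}

session = {"os_version": "iOS 16.5", "os_language": "ru-RU",
           "screen_resolution": "1170x2532", "country": "RO"}
flags = device_deviations(session)
suspicious = len(flags) >= 2   # language + location mismatch: a clear problem
```

This mirrors the article's operating-system-language example: one mismatched attribute may be benign, but several together (language plus location plus differing PII) signal trouble.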

Multi-factor authentication is a common method of verification that uses step-up authentication treatments. These include knowledge-based authentication questions (such as security questions), one-time passwords and document verification such as selfies, e-signatures or application form fills to certify that a user is authorized to conduct a transaction.

While this method has been used for years, traditional multi-factor authentication is not as secure as some might think, Breitenfeld said. For example, when a code is sent via text, there is no way to know if the correct user is seeing the text. The phone may have been stolen or a criminal could be using a technique called mirroring to receive texts sent to a cell phone. This authentication method can be improved by adding more dynamic data, such as a selfie, to the process.

The selfie example would work like this: when someone fills out an application for a product or service, he or she submits a picture of his or her driver’s license, displaying the driver's name, date of birth and address. This information is scraped from the photo, used to populate the application form, and verified against the static ID data. Later, if someone has trouble logging in and fails to answer security questions, the system could ask for a selfie to compare to the user's photo already on file.

Experian also uses fraud models that enable verification processes to run a user through a variety of known fraud patterns and determine if there should be additional verification prior to confirming the person’s identity. For example, the model takes into account multiple factors that are common among cybercriminals to create profiles that help identify potential fraudsters. A consumer’s identity elements are compared to this model to determine risk for companies. A fraud model can also be adjusted to meet a company’s desired risk threshold; this frequently occurs during the holidays, when consumers are making more purchases and companies do not want to be a barrier to transactions, even though the purchase behavior may be abnormal for the individual.
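The adjustable threshold mentioned above is the simplest knob in such a model. A minimal sketch, assuming a fraud score in the range 0 to 1 and invented threshold values that mirror the holiday example:

```python
# Minimal sketch of an adjustable fraud-model threshold. The score range
# and threshold values are illustrative assumptions.

def should_review(fraud_score, holiday_season=False):
    """Flag a transaction for extra verification before confirming identity."""
    threshold = 0.8 if holiday_season else 0.6   # relaxed at peak shopping times
    return fraud_score > threshold

should_review(0.7)                       # above the normal threshold: review
should_review(0.7, holiday_season=True)  # tolerated during the holidays
```

The trade-off is explicit: raising the threshold reduces friction for legitimate holiday shoppers at the cost of letting more borderline-abnormal behavior through.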

Experian also uses consortium files that are shared records with verified and updated fraud lists. They are collected by various entities, including banks, credit card companies, telecommunications providers and other lenders, and used to support participating organizations in stopping regular fraud offenders. Information shared could include high-velocity SSNs, addresses that are established as fraud mail-drops or risky locations, recycled phone numbers, and repeat physical addresses and email addresses that are associated or connected to existing fraud records.

Generally, these files would be managed or housed by a trusted third party, such as a credit reporting agency. Data would be collected from multiple sources, such as banks, and the credit agency would allow access to these files in real-time.

How would you defeat fraud? Go to our Facebook page to share your thoughts.

To comment on this article and other CSO content, visit our Facebook page or our Twitter stream.