



3 ways device fingerprinting must evolve to prevent fraud

Jul 22, 2016 | 4 mins
Endpoint Protection, Internet Security, Security

Advanced techniques such as fuzzy matching, reverse engineering and predictive modeling will make device fingerprinting more effective at fighting fraud online

Fraud is a $1 trillion annual problem worldwide. With rapid growth in ecommerce and online banking over the past decade, fraudsters are increasingly shifting to using computers and smartphones to commit fraud. One technology that helps companies and governments spot fraud—and sometimes stop it before it starts—is device fingerprinting.

Device fingerprinting works by uniquely identifying computers, tablets and mobile phones based on various attributes (e.g., browser version, screen dimensions, list of installed fonts, etc.). So, if a fraudster committed fraud from a particular mobile phone, was caught, and that phone was fingerprinted, it would be difficult for that fraudster to complete another transaction from the same device. However, the fingerprint changes every time the user updates the device, which also makes it incredibly easy to fake a new device fingerprint.
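To see why fingerprints are so brittle, consider a minimal sketch of how one might be computed. The attribute names here are illustrative, not a standard; real systems collect far more signals.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash observable device attributes into a single identifier.

    A minimal sketch: real fingerprinting gathers many more signals,
    and these attribute names are hypothetical.
    """
    # Sort keys so the same attribute set always yields the same hash.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()

before = device_fingerprint({"browser": "Chrome/51.0", "screen": "1920x1080",
                             "fonts": "Arial,Times"})
after = device_fingerprint({"browser": "Chrome/52.0", "screen": "1920x1080",
                            "fonts": "Arial,Times"})
```

Because every attribute feeds a single hash, a routine browser update (Chrome/51.0 to Chrome/52.0 above) produces a completely different fingerprint, and a spoofing tool needs to change only one signal to do the same.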


On top of that, the whole concept of finding fraudsters using device fingerprints is totally reactive. Even if a device is effectively fingerprinted, it must first be blacklisted for bad behavior at least once before being blocked from future access.

With those limitations in mind, it’s important for fraud fighters to identify ways to improve fraud detection, in part by extending device fingerprinting capabilities into the following three realms.

3 things to include in future device fingerprinting

The future of device fingerprinting should include the following:

1. Fuzzy matching

Most users’ device fingerprints will change over time, so the next step is to figure out which changes to the components, applications and configurations used to compute the fingerprint are safe to ignore. Changes on the same device often generate different fingerprints without indicating fraud. If two distinct fingerprints differ by only one low-signal component, such as the fonts installed in the browser, fraud data scientists should be able to reliably assume that both fingerprints come from the same device. If two fingerprints differ by the device’s operating system, they should be able to predict that the fingerprints come from different devices.
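One way to sketch this is to weight each component by how rarely it changes on a single device. The weights below are assumptions for illustration, not measured values: a low weight marks a component that commonly changes (safe to ignore), a high weight one that almost never does.

```python
# Illustrative weights, not measured values: low = the component often
# changes on the same device, high = it almost never does.
COMPONENT_WEIGHTS = {
    "fonts": 0.1,            # font lists churn as apps are installed
    "browser_version": 0.2,  # browsers auto-update frequently
    "screen": 0.6,
    "os": 0.9,               # a device's operating system rarely changes
}

def same_device_score(fp_a: dict, fp_b: dict) -> float:
    """Return a 0-to-1 score that two attribute sets describe one device."""
    penalty = sum(
        COMPONENT_WEIGHTS.get(key, 0.5)  # unknown components get a middling weight
        for key in fp_a
        if fp_a.get(key) != fp_b.get(key)
    )
    return max(0.0, 1.0 - penalty)
```

With this scheme, two fingerprints that differ only in fonts score high (likely the same device), while a differing operating system drives the score toward zero, matching the two cases described above.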

2. Reverse engineering

A huge limitation of device fingerprinting is how easy it is to fake a new fingerprint. For example, FraudFox is a deterministic program that spoofs the signals of its users according to certain rules, defeating static fingerprinting. Fraud detection data scientists should be able to detect patterns in how FraudFox alters signals and effectively reverse engineer its algorithms to detect when a device’s signals have been artificially changed.

Ultimately this will turn into an arms race, with FraudFox tuning its algorithms to mimic good users and fraud detection data scientists revising their detection models to differentiate between artificial and organic changes. But thankfully fraud fighters have greater resources.

3. Predictive modeling

As mentioned previously, standard device fingerprinting alone won’t stop fraudsters the first time around because the device has yet to be blacklisted. In the next evolution of device fingerprinting technology, a centralized list of blacklisted devices becomes moot. Device fingerprinting of the future will predict whether a device will be used to commit fraud even if it has never committed fraud before. More impressive, the new technology will be able to identify a suspicious device even if it is brand new and has never connected to the internet before.

How? Fraudulent devices often share patterns in their set of signals. For example, they are five times more likely to have flushed their browser referrer history or to have null values in browser settings. A device’s set of signals isn’t just a passive dataset to be matched against another set to decide whether two devices accessing an app are in fact the same device. That set of signals tells a story about the device and the user behind it.
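Such patterns can feed a predictive risk score. The sketch below uses a logistic-style weighted sum; the feature names, weights and bias are illustrative assumptions, where in production they would be learned from labeled fraud data.

```python
import math

# Illustrative weights and bias, not fitted values: a real model would
# learn these from labeled fraud transactions.
FEATURE_WEIGHTS = {
    "referrer_history_flushed": 1.6,
    "null_browser_settings": 1.2,
    "never_seen_before": 0.4,
}
BIAS = -2.0  # assumed base-rate term

def fraud_risk(features: dict) -> float:
    """Logistic-style risk score in (0, 1) for a device's signal set."""
    z = BIAS + sum(weight for name, weight in FEATURE_WEIGHTS.items()
                   if features.get(name))
    return 1 / (1 + math.exp(-z))
```

A device that has flushed its referrer history and carries null browser settings scores well above one with clean signals, so it can be flagged before it ever appears on a blacklist.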

The device fingerprinting of the future will detect these suspicious devices as soon as they open an app—before they have a chance to begin any fraudulent activity.

By evolving to include fuzzy matching, reverse engineering of fraud tools and predictive modeling, device fingerprinting can keep pace with an increasingly broad definition of fraud. And it will create a need for increasingly specialized tools to keep users and businesses safe.


Rahul Pangam is co-founder and CEO of fraud-detection startup Simility, which has $7.2 million in seed funding led by Accel Partners and Trinity Ventures and dual headquarters in Palo Alto, Calif., and Hyderabad, India.

Founded in 2014, Simility is already analyzing millions of transactions per week for customers on four continents as part of a limited beta release of its online fraud-detection platform.

Prior to Simility, Rahul was a director at Google, where he led a global team of 200 that reduced fraud in ads by 90 percent. He is a fraud-detection industry veteran, having spent more than six years at Google building teams responsible for fighting fraud and abuse in Google’s ads and its local and social products.

Prior to Google, Rahul was a lead engineer at General Electric, working on GE’s smart grid software products.

Rahul holds an MBA from the University of Michigan and an M.S. in electrical engineering from Clemson University.

The opinions expressed in this blog are those of Rahul Pangam and do not necessarily represent those of IDG Communications Inc., or its parent, subsidiary or affiliated companies.