Abuse: a new category of threat

Opinion
Dec 15, 2017 | 5 mins
Authentication, Data and Information Security, Identity Management Solutions

There are many different ways for users to abuse accounts, the solutions to which rack up costs and create drag on internal resources. And some fixes may cause more problems than they solve.


You have a problem. There is no way to confirm the legitimacy of the entities passing through your login page—and yet they have unfettered access to the account, payments, and/or community. You can’t lock everyone out. Eventually, abusers find their way in, and start causing serious harm to your real users, brand, and bottom line. As a result, your whole enterprise is made vulnerable to a spectrum of threats collectively called “abuse.”

Abuse is a wide-ranging category that resists easy taxonomy. Defined as using technology to cause harm, it can include:

  • Account takeovers
  • Fake account creation
  • Publishing content that is offensive, inappropriate or illegal
  • Online harassment, bullying or threats
  • Serial terms of service violations
  • Business logic abuse
  • Using your site to spam others
  • Using your service for phishing attacks

Unlike other well-defined threat categories, abuse can’t be mitigated with familiar solutions like firewalls, antivirus software, and patching.

How abuse relates to your business

Unless you operate completely offline, you deal with abuse in some shape or form. There are three categories of abuse that could apply to your company: account abuse, payment abuse, and community abuse.

Account abuse

Perhaps the best-known form of account abuse is the account takeover (ATO), and for good reason: bad actors with valid/stolen credentials can do a world of harm. They can access sensitive information, wire themselves your money, and take your photos (or even the title to your house). Another kind of account abuse involves “fake accounts,” where people create false logins and sell them as users to businesses looking to buy leads or affiliates.

Payment abuse

In one common scam, abusers break into accounts to make purchases with stored credit card information. Most people are familiar with this form of payment abuse, which also overlaps with account abuse.

There are other forms of payment abuse. For example, credit card thieves may set up nominally legitimate accounts to test stolen credit cards in bulk, sorting valid cards from any that have been canceled.

They might also take batches of stolen credit cards with partial information and use automated programs to “guess” the missing fields by attempting transactions with each card until a valid combination succeeds.
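One common defense against this kind of card testing is velocity monitoring: counting payment declines per account in a sliding time window and flagging accounts that rack up declines faster than any legitimate user would. The sketch below is illustrative; the class name and thresholds are assumptions, not any particular vendor's API.

```python
import time
from collections import deque


class DeclineVelocityMonitor:
    """Hypothetical sketch: flag accounts whose card-decline rate
    looks like automated card testing. Thresholds are illustrative."""

    def __init__(self, max_declines=5, window_seconds=300):
        self.max_declines = max_declines
        self.window_seconds = window_seconds
        self.declines = {}  # account_id -> deque of decline timestamps

    def record_decline(self, account_id, now=None):
        now = time.time() if now is None else now
        q = self.declines.setdefault(account_id, deque())
        q.append(now)
        # Drop declines that have aged out of the sliding window.
        while q and now - q[0] > self.window_seconds:
            q.popleft()

    def looks_like_card_testing(self, account_id):
        # A legitimate shopper rarely triggers this many declines this fast.
        return len(self.declines.get(account_id, ())) >= self.max_declines
```

A flagged account might then be rate-limited or routed to manual review rather than blocked outright, since a burst of declines can occasionally have an innocent cause.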

Community abuse

Community abuse ranges from hate speech and cyberbullying on social networks, to romance scams committed on dating sites, to spam campaigns delivered through messaging services and peer-to-peer collusion to commit fraud on buyer and seller marketplaces like eBay, Craigslist, and Airbnb.

Old information security threats, like phishing and malware dropping, have found a fruitful new vector in social posts and direct messages. Any area online where users congregate provides opportunities for abusers to commit a litany of offenses, causing harm to users, the business, and the brand.

Why care about abuse?

Arguably the most damaging example of account abuse (so far) happened to Apple. When Apple released iCloud, it never imagined the cloud storage product would generate some of the brand’s most embarrassing headlines. But that’s exactly what happened in August 2014, when nude images of some of the world’s most recognizable celebrities began spreading across the internet.

Abusers seeking private images of famous people stole iCloud login credentials from celebrities via phishing emails. As authenticated users, they had access to iPhone images taken over the years—including, in some cases, nude photos.

The incident dominated news cycles for weeks and was covered in the same manner as the data breaches suffered by Target, Yahoo, and Equifax—even though in those cases the attackers spent months infiltrating the organizations and made off with millions (or, in Yahoo’s case, billions) of personal records. Guess how many images it took for iCloud to get its own embarrassing Wikipedia page? Only 500!

The drawbacks and common mistakes of fighting abuse

Most companies deal with abuse using a handful of methods, each with its own set of drawbacks. The most low-tech and intuitive method is having manual review teams examine everything, which works until caseloads become overwhelming.

Some take the engineering approach, building their own machine learning system or custom rules engine so they don’t have to manually review everything. While unsupervised learning does manage to catch things human moderators haven’t seen, the price is too many false positives. Although supervised learning promises precision, to build the models and train them properly you need a lot of time and very specialized resources—including someone who has mastered both machine learning and InfoSec.
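The custom rules engine mentioned above is often little more than a list of named predicates evaluated against each event. A minimal sketch, assuming hypothetical rule names, thresholds, and event fields chosen purely for illustration:

```python
# Illustrative sketch of a homegrown rules engine. The rule names,
# thresholds, and event fields are assumptions, not a real product's API.
RULES = [
    ("too_many_signups_per_ip", lambda e: e.get("signups_from_ip", 0) > 20),
    ("disposable_email",        lambda e: e.get("email", "").endswith("@mailinator.com")),
    ("link_heavy_message",      lambda e: e.get("message", "").count("http") >= 3),
]


def evaluate(event):
    """Return the names of every rule this event trips."""
    return [name for name, predicate in RULES if predicate(event)]
```

The appeal is transparency: every match is explainable. The drawback, as the article notes, is that rules only catch patterns someone has already seen and written down, and the list grows without bound as abusers adapt.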

In hopes of stemming the flow of abusers entering the front door, some companies add extra steps to their signup flow, like SMS verification, and make authentication harder to spoof with two-factor authentication. In the end, though, those blanket measures can hurt the business by suppressing signups and user engagement.
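One way to soften that tradeoff is risk-based step-up authentication: rather than challenging every user, challenge only when risk signals accumulate. A minimal sketch, assuming illustrative signal names and weights (any real deployment would tune these against its own data):

```python
# Hedged sketch of risk-based step-up authentication. Signal names and
# weights are illustrative assumptions, not a standard scoring scheme.
RISK_WEIGHTS = {
    "new_device": 1,
    "ip_country_mismatch": 2,
    "credential_stuffing_ip": 3,
}
CHALLENGE_THRESHOLD = 3


def requires_step_up(signals):
    """Return True when the summed risk of observed signals crosses the bar."""
    score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
    return score >= CHALLENGE_THRESHOLD
```

Low-risk users sail through with no extra friction, while a login from a new device in an unexpected country would be asked for a second factor—preserving most of the signup funnel that blanket SMS verification sacrifices.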

Many who deal with payment abuse bring in anti-fraud vendors that use machine learning to produce an opaque fraud score. However, organizations of any scale quickly realize that those technologies address only a small part of their overall problem. In the real world, abuse isn’t always black and white—it also comes from “grey users” who may not be threat actors but cause harm in other ways and violate terms of service. Because grey users don’t create the signals that machine learning models designed to catch payment fraud recognize, their behavior tends to slip through.

As many enterprises have found out, each of these solutions racks up costs and escalates drag on internal resources. Some may even cause more problems than they solve. In a future article, we will talk about what CSOs can do to fight abuse. Examples include becoming an expert in abuse, and learning how to stop abuse without sacrificing growth.

Contributor

Pete Hunt is co-founder and CEO of Smyte, a cybersecurity startup based in San Francisco. Prior to founding Smyte, Hunt led the Instagram web team at Facebook and built Instagram’s suite of business analytics products. Before that, he was one of the original members of React.js, Facebook's largest open source project, and was key to taking it from an internal tool to a massive open source library.

Hunt earned a B.A. in Information Science and Masters in Computer Science from Cornell University, where he was also Sigma Phi Epsilon Vice President of Recruiting, Varsity Heavyweight in Rowing, and WVBR Radio DJ.

The opinions expressed in this blog are those of Pete Hunt and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.