You have a problem. There is no way to confirm the legitimacy of the entities passing through your login page, and yet they have unfettered access to the account, payments, and/or community. You can't lock everyone out. Eventually, abusers find their way in and start causing serious harm to your real users, brand, and bottom line. As a result, your whole enterprise is made vulnerable to a spectrum of threats collectively called "abuse."

Abuse is a wide-ranging category that resists easy taxonomy. Defined as using technology to cause harm, it can include:

- Account takeovers
- Fake account creation
- Publishing content that is offensive, inappropriate, or illegal
- Online harassment, bullying, or threats
- Serial terms of service violations
- Business logic abuse
- Using your site to spam others
- Using your service for phishing attacks

Unlike other well-defined threat categories, abuse can't be mitigated with familiar controls like firewalls, antivirus software, and patching.

How abuse relates to your business

Unless you operate completely offline, you deal with abuse in some shape or form. There are three categories of abuse that could apply to your company: account abuse, payment abuse, and community abuse.

Account abuse

Perhaps the best-known form of account abuse is the account takeover (ATO), and for good reason: bad actors with valid or stolen credentials can do a world of harm. They can access sensitive information, wire themselves your money, and take your photos (or even the title to your house). Another kind of account abuse involves "fake accounts," where people create false logins and sell them as users to businesses looking to buy leads or affiliates.

Payment abuse

In one common scam, abusers break into accounts to make purchases with stored credit card information. Most are familiar with this form of payment abuse, which overlaps with account abuse as well. There are other forms of payment abuse.
For example, credit card thieves may set up accounts as nominally legitimate users to test credit cards in bulk, sorting valid ones from any that may have been cancelled. They might also take batches of stolen credit cards with partial information and use automated programs to "guess" the missing details by attempting transactions with each card until the correct combination occurs.

Community abuse

Community abuse ranges from hate speech and cyberbullying on social networks, to romance scams on dating sites, to spam campaigns delivered through messaging services, to peer-to-peer collusion to commit fraud on buyer-and-seller marketplaces like eBay, Craigslist, and Airbnb. Old information security threats, like phishing and malware dropping, have found a fruitful new vector in social posts and direct messages. Any area online where users congregate gives abusers the opportunity to commit a litany of offenses, causing harm to users, the business, and the brand.

Why care about abuse?

Arguably the most damaging example of account abuse (so far) happened to Apple in 2014. When Apple released iCloud, it never imagined the cloud storage product would bring about some of the brand's most embarrassing headlines. But that's exactly what happened in August 2014, when nude images of the world's most recognizable celebrities began spreading across the internet.

Abusers seeking private images of famous people stole iCloud login credentials from celebrities via phishing emails. As authenticated users, they had access to iPhone images taken over the years, including, in some cases, nude photos.

The incident dominated news cycles for weeks. It was covered in the same manner as the data breaches suffered by Target, Yahoo, and Equifax, even though in those cases the attackers spent months infiltrating the organizations and made off with millions (or, in Yahoo's case, billions) of personal records.
Guess how many images it took for iCloud to get its own embarrassing Wikipedia page? Only 500!

The drawbacks and common mistakes of fighting abuse

Most companies deal with abuse using a handful of methods, each with its own set of drawbacks. The most low-tech and intuitive method is having manual review teams review everything, which works until caseloads become overwhelming.

Some take the engineering approach, building their own machine learning system or custom rules engine so they don't have to manually review everything. While unsupervised learning does manage to catch things human moderators haven't seen, the price is too many false positives. Supervised learning promises precision, but building and training the models properly takes a lot of time and very specialized resources, including someone who has mastered both machine learning and information security.

In hopes of stemming the flow of abusers entering the front door, some companies add extra steps to their signup flow, like SMS verification. They make authentication harder to spoof with two-factor authentication, but in the end those measures harm the business by choking off signups and user engagement.

Many who deal with payment abuse bring in anti-fraud vendors that use machine learning to produce an opaque fraud score. However, organizations of any scale quickly realize that those technologies address only a small part of their overall problem. In the real world, abuse isn't always black and white; it also comes from "grey users" who may not be threat actors but cause harm in other ways and violate terms of service. Because grey users don't create the signals that machine learning models designed to catch payment fraud recognize, their behavior tends to slip through.

As many enterprises have found, each of these solutions racks up costs and escalates drag on internal resources. Some may even cause more problems than they solve.
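To make one of these patterns concrete, consider the bulk card testing described earlier: a single account trying many distinct cards, most of which decline. That behavior can be caught with a simple velocity rule of the kind a custom rules engine would implement. The sketch below is illustrative only; the class names, field names, and thresholds are assumptions for the example, not any vendor's API, and a production system would need tuning to avoid the false positives discussed above.

```python
# Minimal sketch of a card-testing velocity rule: flag accounts that attempt
# many distinct cards with a high decline rate inside a recent time window.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Attempt:
    account_id: str
    card_fingerprint: str  # e.g. a hash of the card number, never the raw PAN
    approved: bool
    timestamp: float       # seconds since epoch

def flag_card_testers(attempts, window=3600, min_cards=10, max_approval_rate=0.2):
    """Return account IDs whose recent behavior looks like bulk card testing."""
    if not attempts:
        return set()
    cutoff = max(a.timestamp for a in attempts) - window
    by_account = defaultdict(list)
    for a in attempts:
        if a.timestamp >= cutoff:
            by_account[a.account_id].append(a)

    flagged = set()
    for account, recent in by_account.items():
        distinct_cards = {a.card_fingerprint for a in recent}
        approvals = sum(a.approved for a in recent)
        # Many distinct cards plus mostly declines is the classic testing signature.
        if len(distinct_cards) >= min_cards and approvals / len(recent) <= max_approval_rate:
            flagged.add(account)
    return flagged
```

A rule like this is cheap to run, but it illustrates the limits described above: thresholds must be hand-tuned, and grey users whose behavior doesn't match the signature slip through untouched.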
In a future article, we will talk about what CSOs can do to fight abuse, including becoming an expert in abuse and learning how to stop it without sacrificing growth.