Reducing the impact of AI-powered bot attacks

Opinion
Apr 02, 2018 | 5 mins
Artificial Intelligence, Cyberattacks, Hacking

Fraudsters are harnessing AI to behave like humans, trick users and scale-up attacks.

[Image: artificially intelligent, robotic worker. Credit: Thinkstock]

Bot attacks are drawing more and more headlines with tales of identity theft. The wealth of consumer data available on the dark web through breaches, social media and more is sold to hackers, who compile online consumer profiles and take over accounts for money, products or services.

The question of who is real, who can be trusted, and how companies should defend against this threat remains unanswered. For next-generation bot detection solutions to be effective, user behavioral analytics must be implemented with a much higher level of precision.

Automated versus AI-powered bots

Most people are familiar with automated bots – chatbots and the like – which are software applications that can use AI to interact with human users to accomplish a task (i.e. book a hotel, answer customer service questions, etc.), though some are simply rules-based. However, advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning and computer vision algorithms are paving the way for the rise of AI-powered bots that are faster, better at understanding human interaction and even able to mimic human behavior.

Companies like Amazon have been investing in AI and machine learning techniques for a number of years, from fulfillment centers to Echo powered by Alexa, to its new Amazon Go. Amazon’s AWS offers machine learning services and tools to developers and all who use the cloud platform. But malicious bots can now leverage these exact capabilities for fraudulent purposes, making it difficult to tell the difference between bots and true human users.

Hackers and fraudsters are harnessing the latest tools available and constantly changing their techniques to make their attacks more effective, faster and adaptable to safeguards. This makes it almost impossible for security teams to model attacks and attack behaviors. Mobile bot farms, where bots are deployed across thousands of devices to appear more human-like, are just one example. Malicious algorithms can be introduced to AI-powered bots with the aim of replicating real-world audiovisual signatures to impersonate true users and unlock security systems. These bots find various ways to extract money from websites and accounts – including account takeover, where large bot collectives crack passwords and test stolen credentials (passwords, social security numbers, etc.) as quickly as possible in order to break into user accounts. Those same stolen credentials from various accounts can also be pulled together to create entirely new, synthetic identities, opening the gates for an entirely new method of identity fraud.
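
On the defensive side, the credential-testing pattern described above tends to surface as bursts of failed logins from a single source. A minimal sketch of a velocity check that flags such bursts follows; the window size, threshold and event fields are illustrative assumptions, not recommended production values or a description of any particular vendor's approach.

    # Minimal sketch: flag the credential-stuffing pattern by tracking
    # failed-login velocity per source. Window size, threshold and event
    # fields are illustrative assumptions, not recommended production values.
    from collections import defaultdict, deque
    import time

    WINDOW_SECONDS = 60   # look-back window (assumed)
    MAX_FAILURES = 20     # failed attempts tolerated per window (assumed)

    failures = defaultdict(deque)  # source identifier -> timestamps of failed logins

    def record_login(source_id, success, now=None):
        """Record a login attempt and return True if the source looks automated."""
        now = time.time() if now is None else now
        window = failures[source_id]
        if not success:
            window.append(now)
        # Discard failures older than the look-back window
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_FAILURES

A rule this simple is exactly the kind of static defense an adaptive, AI-powered bot can learn to stay under, which is why the article goes on to argue for behavioral models rather than fixed thresholds.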

These tools are the product of advancements in AI techniques like deep learning – a subset of machine learning that has networks which are capable of learning unsupervised from data that is unstructured or unlabeled.

AI is also being used to easily scale up pre-existing attacks by automating the human labor required to carry them out: enhanced CAPTCHA-breaking systems, faster identification of vulnerabilities in existing defense systems, faster creation of new malware that can avoid detection, and more effective selection of phishing targets by collecting and processing information from a large number of public-domain sources. With AI agents, these attacks become far more precise, targeted and amplified, creating a multiplier effect for a malicious campaign and expanding the pool of potential victims.

“Un”-predictive analytics will lead the charge in reducing the impact of attacks

While prevalent in the financial industry, these attacks have the potential to impact many more. For instance, with online ticket sales, an AI-powered bot could perform check-out abuse by pretending to be a human customer, then buying out all the tickets for an event within a minute. Similarly, the ad tech industry continues to suffer major losses thanks to ad fraud. In 2016, it was estimated that nearly 20 percent of total digital ad spend was wasted, and $16.4 billion would be lost in 2017. Click-fraud also presents an issue, where bots repeatedly click on an ad hosted on a website with the intention of generating revenue for the host site, draining revenue from the advertiser.

If machines can be taught to behave like humans, how can we stop them? Traditional approaches to blocking these attacks, such as task-based authentication where a “user” is asked to complete a challenge like a CAPTCHA, have proved ineffective. Such approaches focus only on known fraudulent behaviors rather than continuously adapting and learning as patterns change.

Similarly, current solutions have attempted to distinguish bot behavior from human behavior, a distinction that is often too nuanced to decipher. Instead, solutions should focus on finding anomalies in real user behavior and fighting AI with AI. Using machine learning and AI algorithms, it is possible to continuously learn patterns of user behavior from the muscle memory people exhibit when they walk, sit, stand, type, swipe and tap; even the hand in which they prefer to hold their device can feed a personalized user model.
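
A minimal sketch of that idea follows, assuming a per-user model trained on a handful of interaction features (keystroke interval, swipe speed, tap pressure); the feature names, the sample values and the choice of an off-the-shelf isolation forest are illustrative assumptions, not a description of any specific product.

    # Minimal sketch: per-user behavioral anomaly detection with an
    # unsupervised model. Feature names and the IsolationForest choice are
    # illustrative assumptions; a real system would use far richer signals.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical historical sessions for one user:
    # [mean keystroke interval (ms), swipe speed (px/s), tap pressure (0-1)]
    user_sessions = np.array([
        [182.0, 950.0, 0.61],
        [176.0, 990.0, 0.58],
        [190.0, 910.0, 0.63],
        [185.0, 970.0, 0.60],
        [179.0, 940.0, 0.59],
    ])

    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(user_sessions)

    # A new session that looks machine-generated: unnaturally fast and uniform
    new_session = np.array([[40.0, 3000.0, 0.10]])
    if model.predict(new_session)[0] == -1:
        print("Session deviates from this user's model - flag for step-up auth")

The appeal of the unsupervised approach is that it does not need labeled examples of bot traffic; it only needs to know what the genuine user normally looks like, which is much harder for an attacker to learn and reproduce.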

These traits enable solutions to pick up on the smallest deviation from “normal” user behavior and flag it immediately as a potential fraud attempt. It is important to combine this with other environmental traits, including the links between device, network, social, location and biometric intelligence. Dynamic layers of sophisticated user models will need to be implemented in order to stay ahead of AI-powered bot attacks.
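
One way to picture those dynamic layers is a weighted risk score that blends the behavioral signal with environmental ones; the signal names, weights and threshold below are assumptions chosen only to illustrate the idea, not a specification of any particular product.

    # Minimal sketch: blend behavioral and environmental signals into one
    # risk score. Signal names, weights and threshold are illustrative
    # assumptions, not a specification of any particular product.
    RISK_WEIGHTS = {
        "behavior_anomaly": 0.40,   # deviation from the learned user model
        "new_device":       0.20,   # device fingerprint never seen for this user
        "network_risk":     0.15,   # known proxy, data-center or anonymizing network
        "geo_velocity":     0.15,   # impossible travel between recent logins
        "credential_reuse": 0.10,   # credentials already seen in a breach corpus
    }
    CHALLENGE_THRESHOLD = 0.5       # assumed cut-off for step-up authentication

    def risk_score(signals):
        """Each signal is a 0.0-1.0 suspicion level; returns the weighted total."""
        return sum(RISK_WEIGHTS[name] * signals.get(name, 0.0) for name in RISK_WEIGHTS)

    session = {"behavior_anomaly": 0.9, "new_device": 1.0, "network_risk": 0.7}
    if risk_score(session) >= CHALLENGE_THRESHOLD:
        print("High risk - require additional verification before proceeding")

In practice the weights themselves would be learned and continuously updated rather than fixed, which is what makes the layered model adaptive instead of another static rule set.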

Contributor

Deepak Dutt brings over 18 years of technical and entrepreneurial expertise in bridging the technology and business worlds. He has a range of experience from small software companies to large worldwide software and telecommunication companies and a reputation for going after tough tasks, hard goals, and accomplishing what the business needs to thrive.

Prior to his latest venture, Deepak Dutt worked in various roles in new venture development, R&D, product management and cybersecurity at Nortel, Siemens Telecom Innovation Center, and Newbridge Networks. He received an award of excellence from the Nortel CEO in 2003.

Deepak has a successful entrepreneurial track record: he co-founded Intsyx, a software company in India specializing in eLearning simulation, which was subsequently acquired by an Ottawa-based firm, bringing him to Ottawa at age 22. As the CEO of Zighra, an emerging leader in mobile security and fraud prevention solutions, he has expanded the business globally, with operations in Canada, the U.S., the UK, the Middle East and India.

The opinions expressed in this blog are those of Deepak Dutt and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.