The last decade has witnessed rapid adoption of machine learning (ML) and artificial intelligence (AI) technologies across various sectors. More recently, the introduction of generative AI, exemplified by platforms like ChatGPT, has propelled AI into the public spotlight, sparking a race for innovation. This article focuses on the dual effects of AI on cybercrime and its implications for defense.

Empowering cybercrime

AI tools have significantly impacted cybercrime by reducing the need for human involvement in areas such as malware development, scams, and extortion within cybercriminal organizations. This shrinks recruitment demands and lowers operational costs. Crime-related job postings usually appear on hidden forums and channels on the darknet to preserve anonymity, but the practice still carries risk, potentially exposing criminals to whistleblowers and law enforcement.

In addition, AI gives cybercriminals a way to analyze large datasets, helping them identify vulnerabilities and high-value targets and launch more precise attacks with greater financial payoff.

Another area that can flourish with AI is the development of sophisticated phishing and social engineering attacks. This includes the creation of realistic deepfakes, deceptive websites, fraudulent social media profiles, and AI-powered scam bots. In 2020, for instance, an AI-driven voice-cloning attack impersonated a CEO, resulting in a $240,000 theft from a UK energy company.

The use of AI is also anticipated to become prevalent among state-sponsored actors and criminal groups running disinformation campaigns: creating and spreading deceptive content such as deepfakes and cloned voices, and deploying disinformation bots. Evidence of cybercriminals using AI to manipulate social media during the COVID-19 pandemic already exists.
AI’s role also extends to streamlining the development of adaptable, sophisticated malware. AI-powered malware can employ advanced “self-metamorphic” mechanisms to evade detection, and criminals could also exploit AI to build AI-powered malware development kits. DeepLocker exemplifies this class of malware: it enhances targeted attacks and detection evasion by hiding within benign applications until it reaches its intended victim.

Counteracting cybercrime

AI’s application to security will be most prominent in threat detection and prevention, enhancing the accuracy and effectiveness of defenses. Conventional security tools that rely on signatures and user input can struggle to detect sophisticated attacks, so a growing number of vendors are turning to ML technologies for effective threat detection. These tools can analyze large datasets to identify indicators of compromise, speed up investigations, and reveal hidden patterns. Prominent examples include Cisco Secure Endpoint and Cisco Umbrella, which use ML to detect suspicious behavior.

Another use for AI by defenders and law enforcement is attributing criminal activity to adversaries (even those who use tactics to mislead attribution and evade identification) through the analysis of multiple data points, including attack signatures, malware characteristics, and historical attack patterns. By examining these datasets, AI can identify patterns that help experts narrow down the potential origin of an attack. Attribution is valuable because it provides insight into attackers’ motives and capabilities.

ML algorithms and AI are also set to expand into automated analysis and identification of threats. By automatically analyzing data from sources such as threat intelligence feeds, dark web monitoring, and open-source intelligence, emerging threats can be identified and mitigated effectively.
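The data-driven detection described above can be sketched minimally. The telemetry, host names, and threshold below are hypothetical, and real products use far richer features and models; a simple statistical outlier test just illustrates the idea of flagging a suspicious deviation in a large stream of events:

```python
import statistics

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current event count deviates sharply from the
    historical baseline, using a simple z-score heuristic."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = []
    for host, count in current.items():
        z = (count - mean) / stdev if stdev else 0.0
        if z > threshold:
            flagged.append(host)
    return flagged

# Hypothetical telemetry: hourly failed-login counts.
history = [4, 6, 5, 7, 5, 6, 4, 5]
today = {"web-01": 5, "web-02": 6, "db-01": 92}  # db-01 spikes
print(flag_anomalies(history, today))  # → ['db-01']
```

Commercial ML-based tools replace this single feature with thousands of behavioral signals, but the underlying goal is the same: surface the rare deviations worth an analyst's attention.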
AI can also serve as a valuable tool for predictive analytics, enabling the anticipation of potential cyber threats and vulnerabilities based on historical data and patterns.

Finally, AI can be a valuable contributor to cybersecurity training. It can offer students personalized learning paths based on their strengths and weaknesses, adapting exercises, simulated training environments, and material to their performance and other metrics.
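As a toy illustration of the predictive-analytics idea, the sketch below fits a least-squares trend line to a hypothetical series of monthly incident counts and extrapolates one step ahead. The numbers are invented, and real forecasting would use richer models and far more data; this only shows the shape of "anticipating threats from historical patterns":

```python
def linear_forecast(series, steps=1):
    """Fit y = a + b*x by ordinary least squares and extrapolate
    `steps` points beyond the end of the series."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) / \
        sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean
    return [a + b * (n + i) for i in range(steps)]

# Hypothetical monthly phishing-incident counts.
incidents = [12, 15, 14, 18, 21, 23]
print(round(linear_forecast(incidents)[0], 1))  # → 24.9
```

An upward trend like this one might prompt a team to invest in phishing awareness training or tighter email filtering before the projected rise materializes.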