Meet Social Mapper, a facial recognition tool that searches for targets across numerous social networks, and DeepLocker, a highly evasive, highly targeted AI-powered malware.

Open-source Social Mapper face recognition tool

The open-source intelligence-gathering tool Social Mapper uses facial recognition to automatically search for targets across eight social media sites: Facebook, Twitter, LinkedIn, Instagram, Google+, the Russian social networking service VKontakte, and the Chinese social networking sites Weibo and Douban.

Social Mapper was developed by Trustwave’s Jacob Wilkins to help pen testers and red teamers with social engineering attacks. Instead of manually searching social media sites for names and pictures, Social Mapper makes it possible to automate such scans “on a mass scale with hundreds or thousands of individuals.”

There are three stages to Social Mapper; the second stage is automatically searching social media sites for the targets online. Wilkins suggested letting that search run overnight, as for “target lists of 1,000 people, it can take more than 15 hours and use a large amount of bandwidth.”

After searching, it spits out a report, such as a spreadsheet with links to targets’ profile pages or an HTML report that also includes photos.

From there, your attacks are limited “only by your imagination,” but Wilkins suggested several, such as creating fake social media profiles to “Friend” targets or tricking targets into disclosing phone numbers and email addresses with vouchers in order to “pivot into phishing, vishing or smishing.”

You can get Social Mapper on GitHub.

AI-powered DeepLocker malware attacks

If everyday malware is not considered evasive enough, then think about weaponized artificial intelligence (AI) and meet the new attack tool DeepLocker, which is powered by AI for “highly targeted and evasive attacks.”

AI is being used to automatically detect and fight malware, but IBM
Research decided to flip that and came up with a game changer – a “highly evasive new breed of malware, which conceals its malicious intent until it reaches a specific victim.” The researchers explained that DeepLocker “unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.”

You can think of this capability as similar to a sniper attack, in contrast to the “spray and pray” approach of traditional malware. DeepLocker is designed to be stealthy: it flies under the radar, avoiding detection until the precise moment it recognizes a specific target.

This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. But unlike nation-state malware, it is feasible in the civilian and commercial realms.

To show off DeepLocker’s capabilities, the researchers camouflaged WannaCry ransomware in a video conferencing app. Going undetected by security tools, DeepLocker did not unlock and execute the ransomware until it recognized the face of the target.

They added: “Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target. When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.”

The researchers said we need to prepare for AI-powered attacks, and they intend to give a live demonstration of a proof-of-concept implementation at Black Hat USA.
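Both tools lean on the same facial-recognition primitive: reduce each face to a numeric embedding and treat small distances between embeddings as a match. The sketch below illustrates that matching step as a tool like Social Mapper might apply it to candidate profile photos. It is not Social Mapper’s actual code; the embeddings, threshold, profile URLs, and function name are all hypothetical stand-ins (real models such as dlib’s face encoder emit 128-dimensional vectors, but the logic is the same).

```python
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical low-dimensional face embedding for the target person.
TARGET_EMBEDDING = [0.10, 0.80, 0.30, 0.55]

# Candidate profile photos found on a social network, already reduced
# to embeddings by an assumed upstream face-encoding step.
candidates = {
    "https://example.com/profile/alice": [0.11, 0.79, 0.31, 0.54],
    "https://example.com/profile/bob":   [0.90, 0.10, 0.70, 0.20],
    "https://example.com/profile/carol": [0.12, 0.82, 0.28, 0.57],
}

# Embeddings closer than this are treated as the same person;
# the value here is tuned to the toy data above, not a real model.
THRESHOLD = 0.1

def match_profiles(target, profiles, threshold):
    """Return profile URLs whose embedding lies within threshold of the target."""
    return [url for url, emb in profiles.items()
            if dist(target, emb) < threshold]

matches = match_profiles(TARGET_EMBEDDING, candidates, THRESHOLD)
print(matches)  # alice and carol fall inside the threshold; bob does not
```

A real pipeline would first detect and encode faces in each downloaded image; the matched URLs would then feed the report stage that writes the spreadsheet or HTML output described above.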