By Michelle Drolet, Contributor

Why we need to be worried about deepfake videos

Opinion
Jun 25, 2018 | 5 mins
Cybercrime, Identity Management Solutions, Privacy

The potential security threat of realistic fake videos of people doing and saying things they never did is cause for concern. Learn how they are made, the risk they pose as they spill beyond the world of celebrity, and what we can do about it.


In the digital age, it has never been easier to spread false information. We are increasingly aware of the potentially dangerous influence of fake news, but what if the disinformation went beyond the written word? We tend to trust things we see with our own eyes, but the rise of so-called deepfakes – extremely realistic false videos of people doing and saying things they never actually did – should be a real cause for concern.

A realistic video of a CEO, a soldier, or a head of state engaged in compromising activity, or even just saying something inflammatory, could tank stock prices, provoke serious civil unrest, or cause a deterioration in international relations. Even once the falsehood is exposed, there’s a likelihood of lasting reputational damage.

The rise of deepfakes

The technology first surfaced on Reddit, when a user called “deepfakes” posted a series of videos featuring movie stars and other female celebrities engaged in pornographic acts. The technology used to create the videos was released as a free app, called FakeApp, which quickly led to the creation of many more fake porn videos that looked uncannily realistic.

Deepfake videos are created by employing deep learning artificial intelligence that scans a multitude of videos and photos of the person to be faked and then superimposes their face onto someone in an existing video. It takes a lot of processing power and a few hours, but anyone with a reasonably powerful computer can create an authentic-looking video if they have enough material to draw on.
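
To make the mechanics concrete, here is a minimal sketch of the autoencoder-style face swap associated with early deepfake tools, written in Python with PyTorch (an illustrative choice of framework, not necessarily what any given app used). One shared encoder learns features common to both people's faces, a separate decoder per person learns to reconstruct that person, and swapping decoders at inference maps one person's expression onto the other's face. Random tensors stand in for the aligned face crops a real pipeline would need.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Shared encoder: learns features common to both people's faces."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Per-person decoder: reconstructs one specific person's face."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z)

    encoder = Encoder()
    decoder_a = Decoder()  # trained only on faces of person A
    decoder_b = Decoder()  # trained only on faces of person B

    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    opt = torch.optim.Adam(params, lr=1e-4)

    # Stand-ins for batches of aligned 64x64 face crops of each person.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)

    for _ in range(10):  # each decoder learns to reconstruct its own person
        opt.zero_grad()
        loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
                + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
        loss.backward()
        opt.step()

    # The swap: encode a frame of person A, decode it with person B's decoder.
    fake_frame = decoder_b(encoder(faces_a[:1]))

In a real pipeline, this step is preceded by detecting and aligning the face in every frame and followed by blending the generated face back into the original video, which is where much of the processing time goes.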

This kind of software has already been used to create realistic videos of Barack Obama, John F. Kennedy, Donald Trump, German Chancellor Angela Merkel, and Russian President Vladimir Putin, among many others.

The combination of neural networks that can scan thousands of images and videos and the increasing availability of computing power puts this kind of video creation well within reach of cybercriminals.

Going viral

Social media is a wonderful tool, but it’s a double-edged sword. Around 70 percent of the U.S. population uses social media now, according to Statista. The most popular service, Facebook, boasts 2.2 billion monthly active users.

Once a fake video has been created, it can spread very quickly. There has been a lot of interesting research on how rumors spread on social media, such as this report from the Max Planck Institute in Germany. Researchers at the University of Warwick also found that unverified rumors spread more widely because they hold more interest for people, and that while a true rumor is typically confirmed within two hours, a false rumor takes more than 14 hours on average to be exposed.

Those studies looked at simple tweets and conversation threads that suggested falsehoods; imagine how much more persuasive those rumors would be if they were accompanied by what appears to be genuine video footage.

Another route for malware

We may be learning collectively to take what we read, hear, and even see with a pinch of salt, but the risk of deepfakes isn’t just about the spread of lies. Viral videos can be powerful tools for cybercriminals intent on delivering a malware payload. They are commonly employed in phishing attacks to persuade people to click on a link or download a file they really shouldn’t.

By combining the viral spread and excitement generated by a popular circulating video with malware inserted into images, links, and video files, cybercriminals can gain a foothold on systems. Once they’re on your network, they can find ways to move laterally and exfiltrate data. It’s vital to keep tabs on your employees and put proper security awareness training in place.

The future of fake videos

The technology is improving all the time, so we can expect to see more fake videos in the coming months and years. There’s a real risk that unscrupulous criminals will use deepfake videos to blackmail people. Just as with the alarming rise of ransomware, there’s a good chance that many victims will see paying up as the less damaging course of action. Fake videos may also be used as leverage to gain access to sensitive company data and protected networks.

In the public arena, we may see fake videos used to allege political corruption and get people removed from office; they might be used to bring false medical malpractice suits, or to fabricate evidence of police brutality – the possibilities are endless.

What can we do about it?

Since deep learning requires source material for the artificial intelligence to analyze, it may become increasingly important to safeguard images and videos of ourselves, but that’s going to be impossible for anyone in the public eye. The metadata on a video, showing when and where it was recorded, is harder to fake, and we may turn to new technologies to verify that videos are genuine and unaltered.
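
As a small illustration of what “verifying a video is unaltered” can mean in practice, here is a sketch using only Python’s standard library: if a publisher releases a SHA-256 digest alongside a video, anyone can recompute it to confirm the file is bit-for-bit identical to the original. The file name and digest below are hypothetical, and a matching hash only proves the file matches a known original, not that the footage itself is truthful.

    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the file in chunks so large videos never need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical values: the digest a publisher released alongside the video.
    published_digest = "0f3a..."  # placeholder
    if sha256_of_file("press_briefing.mp4") == published_digest:
        print("File is bit-for-bit identical to the published original.")
    else:
        print("File has been altered or is not the original.")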

Perhaps one of the best strategies we can employ here is to fight fire with fire by using deep learning of our own and training it to recognize fake videos. We’ve already seen the potential of machine learning for our cyber defenses; this is another front where it could be usefully employed.
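
As a hedged sketch of that idea, the following Python/PyTorch snippet trains a tiny convolutional classifier to label individual video frames as real or fake. Production detectors are far larger and look for telltale artifacts such as blending seams or unnatural blinking; random tensors stand in here for a labeled frame dataset.

    import torch
    import torch.nn as nn

    # Tiny frame-level detector: outputs one logit per frame (fake vs. real).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),                 # -> (batch, 32)
        nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-ins for labeled frame crops: 1 = fake, 0 = real.
    frames = torch.rand(16, 3, 64, 64)
    labels = torch.randint(0, 2, (16, 1)).float()

    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        opt.step()

    # At inference, sigmoid(logit) is the model's probability a frame is fake.
    prob_fake = torch.sigmoid(model(frames[:1]))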

Now the worry is that deepfake technology may develop faster than our ability to detect it. We need to act to ensure that this pernicious threat is shut down, because the alternative is to stop trusting our own eyes.

Michelle Drolet
Contributor

Michelle Drolet is a seasoned security expert with 26 years of experience providing organizations with IT security technology services. Prior to founding Towerwall (formerly Conqwest) in 1993, she founded CDG Technologies, growing the IT consulting business from two to 17 employees in its first year. She then sold it to a public company and remained on board. Discouraged by the direction the parent company was taking, she decided to buy back her company. She re-launched the Framingham-based company as Towerwall. Her clients include Biogen Idec, Middlesex Savings Bank, PerkinElmer, Raytheon, Smith & Wesson, Covenant Healthcare and many mid-size organizations.

A community activist, she has received citations from State Senators Karen Spilka and David Magnani for her community service. Twice she has received a Cyber Citizenship award for community support and participation. She's also involved with the School-to-Career program, an intern and externship program, the Women’s Independent Network, Young Women and Minorities in Science and Technology, and Athena, a girls’ mentorship program.

Michelle is the founder of the Information Security Summit at Mass Bay Community College. Her numerous articles have appeared in Network World, Cloud Computing, Worcester Business Journal, SC Magazine, InfoSecurity, Wired.com, Web Security Journal and others.

The opinions expressed in this blog are those of Michelle Drolet and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.