
Doctored Jim Acosta video shows why fakes don’t need to be deep to be dangerous

News Analysis
Nov 08, 2018 | 3 mins
Security | Social Engineering

White House promotion of an allegedly doctored press conference video shows how "shallow fakes" can manipulate opinion.

Credit: Getty Images

After much hullabaloo earlier this year about “deep fakes,” the machine-learning-based fake videos that Senator Marco Rubio called the modern equivalent of nuclear weapons, it turns out that low-tech doctored videos can be just as effective a form of disinformation. A fake video promoted by the White House this week demonstrates the point, and the same kind of attack could just as easily be deployed against you or your enterprise.

After a tense confrontation between President Trump and CNN reporter Jim Acosta at a press conference, during which a female White House intern attempted to take the microphone from Acosta, White House press secretary Sarah Sanders shared a doctored video of the incident, edited to make it look as though Acosta attacked the intern.

Watching both videos side by side makes it clear that the original has been edited. The doctored video, first published on Twitter by Infowars editor Paul Joseph Watson, was modified to make it look like Acosta “karate-chopped” the intern. The edit also removes clear audio of the reporter saying “Pardon me, ma’am.”

It remains unclear whether Watson did more than remove the audio and zoom in on Acosta’s arm. BuzzFeed News reports that the video-to-GIF conversion process may be responsible for the jerkiness of the video; the GIF format typically uses far fewer frames per second than regular MP4 video. “Digitally it’s gonna look a tiny bit different after processing and zooming in, but I did not in any way deliberately ‘speed up’ or ‘distort’ the video,” Watson told BuzzFeed. “That’s just horse shit.”

The incident underscores the fear that video can be easily manipulated to discredit a target of the attacker’s choice: a reporter, a politician, a business, a brand. Unlike so-called “deep fakes,” where machine learning puts words in people’s mouths, low-tech doctored video hews close enough to reality that it blurs the line between true and false.

FUD (fear, uncertainty and doubt) is familiar to folks working in the security trenches, and deploying that FUD as a weapon at scale can severely damage a business as well as an individual. Defending against FUD attacks is very difficult. Once the doubt has been sown that Acosta manhandled a female White House intern, a non-trivial portion of viewers will never forget that detail and will continue to suspect it might be true.

Reputational damage to an enterprise can send stock prices plummeting and carry long-term consequences: customers and shareholders may no longer trust the truth about your business when they hear it.

This is the real danger of FUD, of course. When no one can be sure any longer what is true and what is false, attackers who wish to manipulate citizens, consumers, or shareholders at scale can do so with ease. Defending against such attacks is extremely difficult, because even an individual or enterprise that has done nothing wrong, such as CNN’s Acosta, is still left with a tarnished reputation.

Senior Writer

J.M. Porup got his start in security working as a Linux sysadmin in 2002. Since then he's covered national security and information security for a variety of publications, and now calls CSO Online home. He previously reported from Colombia for four years, where he wrote travel guidebooks to Latin America, and speaks Spanish fluently with a hilarious gringo-Colombian accent. He holds a Masters degree in Information and Cybersecurity (MICS) from UC Berkeley.
