Vladimir Putin’s ‘deep fakes’ threaten U.S. elections
AUSTIN, Texas – U.S. leaders say Vladimir Putin used a familiar cyber playbook to “muck around” in the midterm elections last month, but intelligence officials and key lawmakers believe a much more sinister, potentially devastating threat lies just down the road: one that represents an attack on reality itself.
Policy insiders and senators of both parties believe the Russian president or other actors hostile to the U.S. will rely on “deep fakes” to throw the 2020 presidential election cycle into chaos, taking their campaign to influence American voters and destabilize society to a new level.
The eerie process, which relies on cutting-edge deep-learning algorithms, produces high-quality audio and video of individuals saying things they never said or doing things they never did. The phony video, analysts say, will be virtually indistinguishable from real footage, mimicking voices, speaking patterns, facial expressions and surroundings to a frighteningly realistic degree.
U.S. officials attending a high-level national security forum last week say the technology is the next major threat to American elections and potentially to democracy itself, and analysts say the nefarious possibilities are virtually endless.
U.S. military researchers see deep fakes as a top priority, and lawmakers say cooperation among the armed forces, technology sector and Congress will be necessary to counter them.
“We are heading to an era where deep fakes technology is going to cause real chaos,” Sen. Ben Sasse, Nebraska Republican, told a room full of military, intelligence and national security officials Friday during the annual Texas National Security Forum.
“It’s going to destroy human lives, it’s going to roil financial markets, and it might well spur military conflicts around the world,” said Mr. Sasse, echoing what military and intelligence officials say privately. “When deep fakes technology produces audio or video of a global leader saying something or ordering some attack that didn’t happen, you’re going to have to actually have flesh-and-blood humans who have a little bit of a reservoir of public trust who can step to a camera together and say, ‘I know that looked really real on your TV screen. But it wasn’t real.’”
The use of deep fakes, examples of which have been put together by researchers at American universities who say it’s vital to understand the technology before it is weaponized against the U.S., represents an escalation by Russia, officials say.
Moscow’s 2016 strategy to influence American elections centered on planting fake news stories and using hosts of bots to bombard Twitter, Facebook and other social media outlets with content. That approach continued this year, contributing to a broader worsening of the relationship between Washington and Moscow.
“There’s no doubt the relationship has worsened,” Defense Secretary James Mattis said over the weekend. He added that Mr. Putin tried “again to muck around in our elections just last month, and we are seeing a continued effort along those lines.”
“It’s continued efforts to try to subvert democratic processes that must be defended,” the defense secretary said.
Military officials say the approach of Russia, and potentially China, will center on using the most advanced, difficult-to-detect technological tools to make Americans question what is real.
“Our adversaries don’t conduct information warfare as much as a war on information, undercutting legitimacy of all comers, including governments,” Gen. Raymond A. Thomas, head of U.S. Special Operations Command, told the national security conference Friday.
Deep fakes have come to the forefront only recently in the U.S. political realm, but they are already impacting American society. The technology, for example, has been used to produce phony sex videos of actress Gal Gadot, among others.
Deep fakes have been on the military’s radar for years. According to information obtained by the Canadian Broadcasting Corp., the U.S. Defense Advanced Research Projects Agency (DARPA), the Pentagon’s research arm, spent at least $68 million over the past two years developing ways to identify and counter deep fakes.
News outlets also see deep fakes as a looming nightmare. Top editors at The Wall Street Journal said last month that they have launched an “internal deep fakes task force” to train reporters and editors how to spot computer-generated footage.
From a political perspective, analysts say, the phony footage is the natural conclusion of the strategy Russia used in 2016, and adding deep fakes video footage to the mix could make that strategy far more effective in the 2020 cycle.
What’s worse, Russia could provide an easy-to-follow template for anyone who wants to push a particular storyline or discredit an opponent, potentially poisoning the political process by calling reality into question.
“What comes next? We can expect to see deep fakes used in other abusive, individually targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment,” University of Texas law professor Robert Chesney and University of Maryland law professor Danielle Citron wrote in a recent post for Lawfareblog.com.
“Blackmailers might use fake videos to extract money or confidential information from individuals who have reason to believe that disproving the videos would be hard,” they wrote. “All of this will be awful. But there’s more to the problem than these individual harms. Deep fakes also have potential to cause harm on a much broader scale including harms that will impact national security and the very fabric of our democracy.”
The possibilities seem endless. Mr. Chesney and Ms. Citron cited some potential uses, including fabricated videos of officials taking bribes, making racist remarks or meeting with criminals; fake footage of soldiers slaughtering civilians in a war zone, thereby undermining public support for the conflict; or phony video of a nuclear strike, biological attack or natural disaster, creating panic among the public.
On a micro level, Sen. Mark R. Warner, Virginia Democrat, told an audience at the security forum that he believes Russia will “marry cyber and disinformation,” combining more traditional cyberhacking with deep fakes technology to create frighteningly personal attacks.
“They will go out and use cybertools to hack into an entity like an Equifax company that has troves of personal information on lots of Americans, contact us with personal information that makes you think, ‘Oh, my gosh, I should open this because they know my mom’s maiden name or know my Social Security number,’” said Mr. Warner, vice chairman of the Senate Select Committee on Intelligence.
He spoke alongside Sen. Richard Burr, North Carolina Republican and committee chairman, and used his colleague as an example.
Upon opening the email, Mr. Warner said, “You’ll see deep fakes technology where you’ll see a digital image of Richard Burr’s voice and his face communicating with you. But that won’t be Richard at all.”
Some lawmakers seem keen to find a legislative solution, such as a law requiring official videos of prominent individuals to be labeled in some way.
But there is skepticism that any of those Washington-based solutions would work, given the speed of technology and remarkable ability of hackers and cybercriminals to develop more innovative methods.