April 12, 2024

An AI researcher takes on election deepfakes

For almost thirty years, Oren Etzioni was among the most optimistic researchers in the field of artificial intelligence.

But in 2019, Dr. Etzioni, a professor at the University of Washington and founder and CEO of the Allen Institute for AI, was one of the first researchers to warn that a new breed of AI would accelerate the spread of misinformation online. And by the middle of last year, he said, he was distressed that AI-generated deepfakes would sway a major election. He founded a nonprofit, TrueMedia.org, in January in hopes of combating that threat.

On Tuesday, the organization released free tools for identifying digital disinformation, with a plan to put them in the hands of journalists, fact-checkers and anyone trying to figure out what’s really online.

The tools, available through the TrueMedia.org website to anyone approved by the nonprofit, are designed to detect fake and doctored images, audio and video. They review links to media files and quickly determine whether they can be trusted.
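The article does not document TrueMedia.org’s interface in technical detail, but the workflow it describes – submit a link to a media file, get back a trust assessment – can be sketched in a few lines of Python. Everything below (the endpoint URL, the request and response fields) is a hypothetical placeholder for illustration, not the service’s actual API:

```python
import requests

# Hypothetical endpoint and field names -- TrueMedia.org's real interface
# is not documented in this article and may look nothing like this.
DETECTOR_URL = "https://detector.example.org/api/v1/analyze"

def check_media(media_url: str, api_key: str) -> dict:
    """Submit a link to an image, audio, or video file and return the verdict."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": media_url},
        timeout=60,
    )
    response.raise_for_status()
    # Assumed response shape: {"verdict": "highly suspicious", "score": 0.97}
    return response.json()
```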

Dr. Etzioni sees these tools as an improvement over the patchwork defenses currently used to detect misleading or deceptive AI content. But in a year when billions of people around the world will vote in elections, he continues to paint a bleak picture of what lies ahead.

“I’m terrified,” he said. “There is a very good chance we will see a tsunami of disinformation.”

In the first few months of the year alone, AI technologies have helped create fake phone calls from President Biden, fake Taylor Swift images and audio ads, and an entirely fabricated interview that appeared to show a Ukrainian official claiming credit for a terrorist attack in Moscow. Detecting such disinformation is already difficult – and the tech industry continues to release increasingly powerful AI systems that will generate increasingly convincing deepfakes and make detection even harder.

Many artificial intelligence researchers warn that the threat is growing. Last month, more than a thousand people – including Dr. Etzioni and several other prominent AI researchers – signed an open letter calling for laws that would hold the developers and distributors of AI audio and visual services liable if their technology could easily be used to create harmful deepfakes.

At an event hosted by Columbia University on Thursday, Hillary Clinton, the former Secretary of State, interviewed Eric Schmidt, the former CEO of Google, who warned that videos, even fake videos, “shape voting behavior, human behavior, moods, everything.”

“I don’t think we’re ready,” Mr. Schmidt said. “This problem will become much worse in the coming years. Maybe or maybe not in November, but certainly in the next cycle.”

The technology industry is well aware of the threat. Even as companies rush to develop generative AI systems, they are trying to limit the damage these technologies can do. Anthropic, Google, Meta and OpenAI have all announced plans to restrict or label election-related uses of their artificial intelligence services. In February, 20 tech companies – including Amazon, Microsoft, TikTok and X – signed a voluntary pledge to prevent misleading AI content from interfering with voting.

That can be a challenge. Companies often release their technologies as “open source” software, meaning anyone can use and modify them without restrictions. Experts say the technology used to create deepfakes – the result of massive investments by many of the world’s largest companies – will always outperform the technology designed to detect disinformation.

During an interview with The New York Times last week, Dr. Etzioni showed how easy it is to create a deepfake. Using a service from a sister nonprofit, CivAI, which draws on AI tools readily available on the internet to demonstrate the dangers of these technologies, he instantly created photos of himself in prison – somewhere he has never been.

“When you see yourself being faked, it’s extra scary,” he said.

He later generated a deepfake of himself in a hospital bed — the kind of image he said could sway elections if applied to Mr. Biden or former President Donald J. Trump just before the election.

A deepfake image created by Dr. Etzioni of himself in a hospital bed. Credit: via Oren Etzioni

TrueMedia’s tools are designed to detect these types of counterfeits. More than a dozen startups offer similar technology.

But Dr. Etzioni, while vouching for the effectiveness of his group’s tools, said that no detector is perfect, because each is driven by probabilities. Deepfake detection services have been fooled into mistaking images of kissing robots and giant Neanderthals for real photos, raising concerns that such tools could further damage society’s trust in facts and evidence.
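That probabilistic design is why the tools report graded labels rather than a flat real-or-fake answer. A minimal sketch of the idea, with cutoff values invented for illustration (TrueMedia.org’s actual thresholds are not public):

```python
def label_from_score(p_fake: float) -> str:
    """Map a detector's estimated probability that media is fake to a label.

    The thresholds below are illustrative assumptions, not TrueMedia.org's.
    """
    if p_fake >= 0.90:
        return "highly suspicious"  # strong evidence of manipulation
    if p_fake >= 0.60:
        return "suspicious"
    if p_fake >= 0.40:
        return "uncertain"          # the detector cannot commit either way
    return "likely authentic"

# A high score maps to the top label; a borderline one stays "uncertain",
# mirroring the two verdicts on the Trump deepfakes described below.
print(label_from_score(0.97))  # highly suspicious
print(label_from_score(0.52))  # uncertain
```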

When Dr. Etzioni fed TrueMedia’s tools a well-known deepfake of Mr. Trump sitting on a sidewalk with a group of young Black men, they labeled it “highly suspicious” – their highest level of confidence that the media is fake. When he uploaded another well-known deepfake of Mr. Trump with blood on his fingers, the tools could not determine whether it was real or fake.

An AI deepfake of former President Donald J. Trump sitting on a sidewalk with a group of young Black men was labeled “highly suspicious” by TrueMedia’s tool.
But a deepfake of Mr. Trump with blood on his fingers was labeled “uncertain.”

“Even if you use the best tools, you can’t be sure,” he said.

The Federal Communications Commission recently banned AI-generated robocalls. Some companies, including OpenAI and Meta, are now labeling AI-generated images with watermarks. And researchers are exploring additional ways to separate the real from the fake.

The University of Maryland is developing a cryptographic system based on QR codes to authenticate unaltered live recordings. In a study published last month, dozens of adults were asked to breathe, swallow and think while talking so that their speech pause patterns could be compared to the rhythms of cloned audio.
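The pause-pattern idea can be illustrated with basic signal processing: split a recording into short frames, mark low-energy frames as silence, and compare the distribution of pause lengths between a known-genuine recording and a suspect one. The sketch below is a rough illustration using numpy, with the frame size and energy threshold chosen arbitrarily; the study’s actual method is more sophisticated:

```python
import numpy as np

def pause_durations(signal: np.ndarray, sr: int,
                    frame_ms: int = 20, energy_thresh: float = 1e-4) -> np.ndarray:
    """Return the lengths, in seconds, of silent gaps in a mono recording."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    silent = (frames ** 2).mean(axis=1) < energy_thresh  # True = pause frame
    durations, run = [], 0
    for is_pause in silent:  # run-length encode the silent stretches
        if is_pause:
            run += 1
        elif run:
            durations.append(run * frame_ms / 1000)
            run = 0
    if run:
        durations.append(run * frame_ms / 1000)
    return np.array(durations)

def rhythm_distance(genuine: np.ndarray, suspect: np.ndarray, sr: int) -> float:
    """Crude dissimilarity of pause rhythms: larger means less alike."""
    a, b = pause_durations(genuine, sr), pause_durations(suspect, sr)
    if len(a) == 0 or len(b) == 0:
        return float("inf")  # no pauses found; rhythms cannot be compared
    return abs(a.mean() - b.mean()) + abs(a.std() - b.std())
```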

But like many other experts, Dr. Etzioni warns that image watermarks are easily removed. And while he has dedicated his career to combating deepfakes, he acknowledges that detection tools will struggle to keep pace with new generative AI technologies.

Since Dr. Etzioni founded TrueMedia.org, OpenAI has unveiled two new technologies that promise to make his job even harder. One can imitate a person’s voice from a 15-second recording. Another can generate full-motion videos that look like something plucked from a Hollywood movie. OpenAI is not yet sharing these tools with the public, as it tries to understand the potential dangers.

(The Times has sued OpenAI and its partner Microsoft over claims of copyright infringement related to artificial intelligence systems that generate text.)

Ultimately, Dr. Etzioni said, combating the problem will require broad cooperation among government regulators, the companies creating AI technologies, and the tech giants that control the web browsers and social media networks where disinformation spreads. He said, however, that the chances of this happening before the fall elections were slim.

“We try to give people the best technical assessment of what’s in front of them,” he said. “They still have to decide if it is real.”
