February 22, 2024

Can watermarks save us from deepfakes?

A video of Elizabeth Warren saying Republicans shouldn’t vote went viral in 2023, but it wasn’t Warren. That Ron DeSantis video wasn’t really the governor of Florida, either. And no, Pope Francis did not wear a white Balenciaga coat.

Generative AI has made it easier than ever to create deepfakes and spread them across the internet. One of the most commonly proposed solutions is a watermark that would identify AI-generated content. The Biden administration has leaned heavily on watermarking as a policy solution, specifically tasking tech companies with finding ways to identify AI-generated content. The president’s executive order on AI, released in November, was built on commitments from AI developers to find a way to tag AI-generated content. And the push doesn’t come just from the White House; lawmakers are also considering enshrining watermarking requirements into law.

Watermarks are not a panacea, though. For starters, most systems simply don’t have the capacity to tag text the way they can tag visual media. Yet people are so familiar with watermarks that the idea of watermarking an AI-generated image feels natural.

Almost everyone has seen an image with a watermark. Getty Images, which licenses and distributes photos taken at events, uses a watermark so ubiquitous and so recognizable that it has become its own meta-meme. (In fact, the watermark is now at the center of Getty’s lawsuit against Stability AI, with Getty claiming that Stability AI must have copied its copyrighted content because the company’s model generates the Getty watermark in its output.) Of course, artists were signing their works long before digital media, or even the rise of photography, to let people know who created a painting. But watermarking itself, according to A History of Graphic Design, began in the Middle Ages, when monks changed the thickness of printing paper while it was wet and added their own marks. Digital watermarking took off in the 1990s as digital content became more common, with companies and governments adding tags (hidden or otherwise) to make it easier to track ownership, copyright, and authenticity.

As before, watermarks will still indicate who owns and created the media people are looking at. But as a policy solution to the problem of deepfakes, this new wave of watermarking would essentially label content as AI-generated or human-made. In theory, proper tagging by AI developers would also establish the provenance of AI-generated content, and with it answer the question of whether copyrighted material was used in its creation.

Tech companies have taken up the Biden administration’s directive and are slowly releasing their AI watermarking solutions. Watermarking may seem simple, but it has one major weakness: a watermark pasted over an image or video can easily be removed with photo or video editing software. The challenge, then, is to create a watermark that Photoshop cannot erase.

Companies like Adobe and Microsoft, members of the industry group Coalition for Content Provenance and Authenticity (C2PA), have adopted Content Credentials, a standard that adds provenance features to images and videos. Adobe has created a Content Credentials symbol that gets embedded in the media, and Microsoft has its own version as well. Content Credentials embeds metadata (such as who created the image and what program was used to make it) into the media; ideally, people can click or tap the symbol to view that metadata for themselves. (Whether the symbol can consistently survive photo editing remains to be seen.)
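Real Content Credentials manifests are cryptographically signed per the C2PA spec, but the basic mechanic of metadata-based provenance is easy to see in miniature. Here is a minimal sketch using Pillow’s PNG text chunks as a stand-in; the field names and values are hypothetical, and plain text chunks carry none of the tamper-evidence the actual standard provides:

```python
# Toy stand-in for metadata-based provenance. Real Content Credentials
# are signed manifests (C2PA spec); plain PNG text chunks, used here
# purely for illustration, offer no such guarantees.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "white")  # stand-in for a generated image

meta = PngInfo()
meta.add_text("Author", "example-ai-model")         # hypothetical field
meta.add_text("GeneratedBy", "image-generator-v1")  # hypothetical field
img.save("tagged.png", pnginfo=meta)

# Anyone (or any platform) can read the tag back out:
print(Image.open("tagged.png").text)  # {'Author': ..., 'GeneratedBy': ...}
```

Note the fragility: re-saving the file through most editors, or simply taking a screenshot, drops these chunks entirely, which is exactly why survival through editing remains the open question.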

Meanwhile, Google says it is working on SynthID, a watermark embedded directly into the pixels of an image. SynthID is invisible to the human eye but still detectable by a tool. Digimarc, a software company that specializes in digital watermarking, has its own AI watermarking feature as well; it adds a machine-readable symbol to an image that stores copyright and ownership information in the metadata.
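Google has not published how SynthID actually works, but the general idea of a pixel-level mark that is invisible to people yet machine-detectable can be illustrated with a classic (and far weaker) technique: least-significant-bit embedding. A minimal sketch, with the image and message entirely made up:

```python
# Toy pixel-level watermark via least-significant-bit (LSB) embedding.
# This is NOT SynthID's method (which is unpublished and more robust);
# it only illustrates hiding a machine-readable signal in pixel values.
import numpy as np

def embed(pixels: np.ndarray, message: str) -> np.ndarray:
    """Write the message's bits into the LSBs of the first N pixel values."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, length: int) -> str:
    """Read `length` bytes back out of the LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # fake image
marked = embed(image, "AI-generated")
print(extract(marked, len("AI-generated")))  # -> AI-generated
```

Each pixel value changes by at most 1, so the mark is imperceptible, but it is also exactly the kind of signal that JPEG compression or a resize scrambles. Surviving those transformations is the hard part a production system like SynthID has to solve.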

All of these watermarking efforts either make the mark invisible to the human eye or offload the hard work onto machine-readable metadata. It’s no wonder: that approach is the most reliable way to store the information so it can’t simply be stripped out, and it encourages people to take a closer look at where an image came from.

That’s all well and good if you’re trying to build a copyright detection system, but what does it mean for deepfakes, where the problem is fooling fallible human eyes? Watermarks place the burden on the consumer, relying on an individual’s sense that something is off to prompt them to check. But people generally don’t make a habit of verifying the provenance of everything they see online. Even if a deepfake is tagged with meaningful metadata, people will still fall for it. We have seen countless times that even when information is fact-checked online, many people refuse to believe it.

Experts believe a content tag alone is not enough to keep misinformation from reaching consumers, so why would watermarking fare any better against deepfakes?

The best thing you can say about watermarking, it seems, is that at least it’s a thing at all. And because of the sheer volume of AI-generated content that can be produced quickly and easily, a little bit of friction can go a long way.

After all, there is nothing wrong with the basic idea of watermarking. Visible watermarks indicate authenticity and may lead people to be more skeptical of media without watermarks. And if a viewer is curious about authenticity, watermarks provide that information directly.

Watermarking cannot be a perfect solution for the reasons I’ve mentioned (and, on top of that, researchers have managed to break many of the existing watermarking systems). But it coincides with a growing wave of skepticism about what people see online. I must confess that when I started writing this, I believed it was easy to fool people into thinking really good DALL-E 3 or Midjourney images were made by humans. But I’ve come to realize that the discourse around AI art and deepfakes has seeped into the consciousness of many chronically online people. Instead of accepting magazine covers or Instagram posts as authentic, there is now an undercurrent of doubt. Social media users regularly investigate and call out brands that use AI. Look how quickly internet sleuths sniffed out the AI-generated opening credits of Secret Invasion and the AI-generated posters for True Detective.

It’s still not a great strategy to rely on someone’s skepticism, curiosity, or willingness to find out whether something is AI-generated. Watermarks can be good, but there has to be something better. People are growing more dubious about what they see, but we’re not quite there yet. One day we might find a solution that signals something was created by AI without hoping the viewer bothers to check.

For now, it’s best to learn to recognize when a video of a politician isn’t really that politician.
