
The Deepfakes Are Winning: Researchers Prove AI Watermarks Are Breakable

  • Writer: SciCan
  • Jul 24
  • 3 min read

Updated: Aug 2

Despite best efforts, AI watermarking solutions are still no match for deepfakes.



The Race for AI Safety


As GenAI videos emerge from the uncanny valley, researchers are working to develop clever methods for flagging AI-generated content. Meanwhile, AI's capabilities are rapidly expanding, making the mitigation of deepfake content all the more crucial.


This is high-stakes work.


The implications of deepfakes range from fraud and online harassment to the dissemination of political misinformation. New research suggests that one highly touted solution, AI watermarking, may not provide the level of protection that was promised.



Exposing AI Watermark Weaknesses


Researchers at the University of Waterloo’s Cybersecurity and Privacy Institute developed a tool called UnMarker that erases invisible AI watermarks, even without knowing whether a watermark is present.


Their breakthrough means that, regardless of how carefully encoded, today’s watermarks can be systematically removed, reducing our ability to identify harmful content.



“People want a way to verify what’s real and what’s not because the damages will be huge if we can’t. From political smear campaigns to non-consensual pornography, this technology could have terrible and wide-reaching consequences.”

— Andre Kassis, PhD Candidate, Computer Science



Why It Matters: Deepfakes Ruin Lives & Disrupt Society


The dangers of deepfakes go far beyond meme-worthy celebrity images. Harassment via deepfakes can destroy lives. And scams could cost billions.


In 2023 alone, global deepfake-related scams cost businesses an estimated $250 million, and North America experienced a 1740% increase in deepfake fraud incidents compared to the previous year.


The problems don’t stop there.


Fake videos and images can eventually erode public trust in elections, courts, journalism, and even personal relationships.


A recent KPMG survey found that 83% of Canadians are concerned about the spread of misinformation, with many expressing doubts about their ability to distinguish between real and fake content.


For businesses, that worry is also growing. In 2024, KPMG found that 91% of Canadian business leaders were worried that bad actors would use deepfakes to run misinformation/disinformation campaigns.



Watermarking: A Single Solution With Glaring Limits


Leading tech companies have pointed to digital watermarks as a valuable tool to combat deepfakes.


Watermarks are subtle or invisible signatures embedded in AI-generated videos and imagery. They are designed to identify synthetic content even after cropping or editing, and can be detected with the right tools.
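

Production schemes like SynthID and Stable Signature are kept secret by their makers, but the core idea can be sketched in a few lines. The toy Python below is a minimal spread-spectrum watermark, assuming a grayscale image as a NumPy array; every name, parameter, and the scheme itself are illustrative, not any vendor's actual design. A secret key seeds a pseudorandom ±1 pattern; embedding nudges each pixel by an amount too small to see, and detection correlates the image against that same keyed pattern:

```python
# A minimal, illustrative spread-spectrum watermark -- NOT SynthID or
# Stable Signature, whose designs are not public. A secret key seeds a
# pseudorandom +/-1 pattern; embedding nudges every pixel by a tiny
# amount, and detection correlates the image against that same pattern.
import numpy as np

KEY = 42          # secret key shared by embedder and detector (illustrative)
STRENGTH = 2.0    # ~2/255 per pixel: far below the threshold of visibility

def key_pattern(shape, key=KEY):
    """Deterministic +/-1 pattern derived from the secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image):
    """Add the keyed pattern at low amplitude (image: float array, 0-255)."""
    return np.clip(image + STRENGTH * key_pattern(image.shape), 0, 255)

def detect(image, threshold=1.0):
    """Score is ~STRENGTH when the watermark is present, ~0 when absent."""
    pattern = key_pattern(image.shape)
    score = float(np.sum(pattern * image)) / pattern.size
    return score, score > threshold

# Demo on a stand-in "photo" (random pixels; any grayscale array works)
img = np.random.default_rng(0).uniform(0, 255, size=(512, 512))
print(detect(img))         # score near 0: no watermark
print(detect(embed(img)))  # score near STRENGTH: watermark detected
```

Real watermarks are engineered to survive cropping, resizing, and re-encoding; this toy version is deliberately naive and, as the research below shows, even the hardened versions can be broken.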


However, as the research shows, these digital fingerprints are not as powerful as we've been told.


Using statistical analysis of image patterns, the researchers' UnMarker tool successfully stripped watermarks from leading schemes such as Google’s SynthID and Meta’s Stable Signature in more than half of its attempts.



“If we can figure this out, so can malicious actors. Watermarking is being promoted as this perfect solution, but we’ve shown that this technology is breakable. Deepfakes are still a huge threat.”

— Andre Kassis, PhD Candidate, Computer Science
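

UnMarker itself is far more sophisticated, using statistical analysis of image patterns to find distortions that work without knowing the specific scheme. But the underlying fragility is easy to demonstrate on the toy watermark sketched above. The snippet below is not UnMarker; it defeats only the naive scheme from the earlier example, by showing that a high-frequency hidden pattern does not survive an ordinary 3×3 blur:

```python
# Toy removal "attack" on the sketch above. This is NOT UnMarker -- it
# defeats only the naive watermark from the previous example -- but it
# shows why hidden signals are fragile: a high-frequency pattern does
# not survive ordinary low-pass filtering (here, a 3x3 mean blur).
import numpy as np

def box_blur3(image):
    """3x3 mean filter built from shifted copies (edges padded)."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    acc = np.zeros_like(image)
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / 9.0

marked = embed(img)               # embed() and img from the sketch above
print(detect(marked))             # watermark detected
print(detect(box_blur3(marked)))  # score collapses: watermark erased
```

Production watermarks resist simple blurs and crops, which is exactly why the Waterloo result matters: UnMarker succeeds against schemes that were engineered to withstand this kind of tampering.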



Global Perspectives: Deepfake Detection & Discussion


As one of the first countries to implement a national AI strategy, Canada has positioned itself as a leader in ethical AI research and policy. Canada's proposed Artificial Intelligence and Data Act (AIDA) marks a significant step forward, aiming to regulate AI systems and protect citizens. However, as deepfake capabilities continue to expand, so too should our safeguards.


Globally, efforts to combat deepfakes are also gaining momentum. The European Union’s AI Act, for instance, includes new transparency requirements for AI-generated content. However, international coordination on standards and tools has yet to take hold.


“While watermarking schemes are typically kept secret by AI companies, they must satisfy two essential properties: they need to be invisible to human users to preserve image quality, and... resistant to manipulation of an image like cropping or reducing resolution.”

— Dr. Urs Hengartner



The Way Forward: Guardrails, Awareness, and Public Trust


As the race between AI scientists and bad actors intensifies, research teams share similar messages, calling for coordinated, holistic, and human-first approaches. A suite of multi-layered solutions could provide an answer:


  • Standardized Tools: Because technical fixes can be bypassed, researchers are calling for coordinated, transparent standards and independent verification.

  • Education: As many as 78% of Canadians say they want more education on how to spot and report deepfakes, a trend that's echoed globally.

  • Legal Safeguards: Legal frameworks are also racing to keep pace with technological progress, so global cooperation and enforcement may be key here as well.


🍁 Subscribe for weekly updates from Science Canada 

