Google’s Gemini 2.0 Flash AI model has recently come under fire for its ability to remove watermarks from images, sparking significant discussion around copyright and the integrity of digital content. Users have taken to social media platforms such as X (formerly Twitter) and Reddit to express their astonishment, reporting that Gemini 2.0 Flash cleanly strips watermarks from images sourced from stock media websites.
According to a report by TechCrunch, the experimental image generation feature of Gemini 2.0 Flash reportedly outperforms other existing AI tools at removing watermarks accurately. The report also notes that its text-to-image generation operates with fewer restrictions, allowing users to create images featuring celebrities and copyrighted material without adequate safeguards.
No way this lasts…
Google Gemini 2.0 Flash can remove watermarks
6 wild examples: pic.twitter.com/lyoRRESlHr
— Min Choi (@minchoi) March 17, 2025
RIP Watermarks
It’s only been 4 days since Google dropped Gemini Flash Experimental, and people are going crazy over its ability to remove image watermarks!
12 wild examples so far: (Don’t miss the 5th one) pic.twitter.com/sjERpFN4Vl
— Poonam Soni (@CodeByPoonam) March 17, 2025
New skill unlocked: Gemini 2 Flash model is really awesome at removing watermarks in images! pic.twitter.com/6QIk0FlfCv
— Deedy (@deedydas) March 15, 2025
Holy!!! Gemini Remove Watermarks in 5 sec pic.twitter.com/aiTWT7FBMf
— Jacques Gariepy (@JacquesGariepy) March 17, 2025
Google has not yet made any official statement regarding the criticism. However, the lack of stringent controls, combined with the tool’s ability to erase watermarks without permission, could expose the company to legal action from copyright holders.
Notably, the image generation capabilities of Gemini 2.0 Flash have been broadened for experimental use and are now accessible via Google AI Studio, although Google has clarified that the feature is not intended for production environments.
In a related effort, Google DeepMind launched SynthID Text in 2024, a tool for watermarking AI-generated text. Google Photos also plans to adopt SynthID to label images altered by AI, making their provenance easier to verify.
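SynthID Text is described as adjusting token probabilities during generation so that a statistical test can later detect the watermark, rather than inserting any visible marker. The sketch below is a toy illustration of that general idea, not DeepMind’s actual algorithm: a keyed pseudorandom function nudges sampling toward certain tokens, and a detector looks for the resulting statistical tilt. The key, vocabulary, bias strength, and scoring function are all illustrative assumptions, and the published SynthID Text approach uses a more sophisticated sampling scheme than this.

```python
import hashlib
import math
import random

SECRET_KEY = b"demo-key"   # illustrative key; a real deployment keeps this secret
NGRAM_LEN = 4              # how many preceding tokens seed the pseudorandom score
BIAS = 1.5                 # how strongly favoured tokens are nudged

def prf_score(context, token, key=SECRET_KEY):
    """Keyed pseudorandom score in [0, 1) for a candidate token given recent context."""
    payload = key + repr((tuple(context[-NGRAM_LEN:]), token)).encode()
    digest = hashlib.sha256(payload).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermarked_sample(logits, context, rng):
    """Sample a token after adding a small bonus to tokens the keyed PRF favours."""
    adjusted = {tok: lg + BIAS * prf_score(context, tok) for tok, lg in logits.items()}
    weights = {tok: math.exp(v) for tok, v in adjusted.items()}
    r = rng.random() * sum(weights.values())
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # numerical edge case: fall back to the last token

def mean_score(tokens):
    """Average PRF score of a token sequence; watermarked text drifts above ~0.5."""
    scores = [prf_score(tokens[:i], tok) for i, tok in enumerate(tokens) if i >= NGRAM_LEN]
    return sum(scores) / max(len(scores), 1)

# Toy demo: a stand-in "model" with uniform logits over a tiny vocabulary.
rng = random.Random(0)
vocab = list(range(50))

marked = []
for _ in range(400):
    logits = {tok: 0.0 for tok in vocab}        # a real model would supply these
    marked.append(watermarked_sample(logits, marked, rng))

unmarked = [rng.choice(vocab) for _ in range(400)]
print(f"mean PRF score, watermarked: {mean_score(marked):.3f}")   # noticeably above 0.5
print(f"mean PRF score, unmarked:    {mean_score(unmarked):.3f}")  # close to 0.5
```

A real detector would turn that gap between the two printed means into a proper statistical test and would need to tolerate some editing of the text, but the score drift above the unmarked baseline is the signal it relies on.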
Furthermore, Google is part of the Coalition for Content Provenance and Authenticity (C2PA), an alliance that includes major players like Amazon, Microsoft, OpenAI, and Adobe. This coalition has developed technical standards aimed at tracking the provenance of AI-generated visuals by embedding metadata that records details like creation dates and the AI tools utilized.
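To make the idea concrete, the sketch below builds a simplified, C2PA-inspired provenance record in Python: a hash that binds the record to the exact image bytes, plus assertions naming the generating tool and creation time. The file name, tool name, and field layout here are illustrative assumptions rather than the spec’s actual schema; real C2PA manifests are cryptographically signed and embedded in the asset itself, not kept as loose JSON.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(asset_path: str, tool_name: str) -> dict:
    """Build a simplified, C2PA-inspired provenance record for an image file.

    This only mimics the kind of information a real manifest carries; it is not
    a spec-compliant C2PA manifest and it is not signed.
    """
    with open(asset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "claim_generator": tool_name,                        # tool that produced or edited the asset
        "created": datetime.now(timezone.utc).isoformat(),   # when the claim was made
        "assertions": [
            {"label": "actions", "data": {"actions": [{"action": "created"}]}},
            {"label": "generative_ai", "data": {"model": tool_name}},
        ],
        "asset_hash": {"alg": "sha256", "hash": digest},     # binds the record to these exact bytes
    }

if __name__ == "__main__":
    # "example.png" and the tool name are placeholders for this demo.
    record = build_provenance_record("example.png", "hypothetical-image-model")
    print(json.dumps(record, indent=2))
```

Verification then amounts to recomputing the hash over the received bytes, comparing it with the stored value, and validating the signature that a real manifest would carry, which is why an unsigned record like this one is only a teaching aid.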
Despite these constructive steps, the industry still faces significant hurdles in getting these standards broadly adopted and consistently integrated across platforms and applications.