Google Gemini AI Faces Criticism for Allegedly Removing Image Watermarks, Potential Legal Consequences for Creators


HIGHLIGHTS

Gemini 2.0 Flash’s image generation feature reportedly excels at removing watermarks from images.

The tool’s limited safeguards could enable unauthorized reproduction of copyrighted material.

Google Photos is set to implement SynthID to label images modified by AI for verification purposes.

Google’s Gemini 2.0 Flash AI model has recently come under fire for its ability to remove watermarks from images, sparking significant discussion around copyright and the integrity of digital content. Users on social media platforms such as X (formerly Twitter) and Reddit have expressed astonishment, claiming that Gemini 2.0 Flash effectively strips watermarks from images sourced from stock media websites.

According to a report by TechCrunch, Gemini 2.0 Flash’s experimental image generation feature reportedly outperforms other AI tools at accurately removing watermarks. The report also noted that its text-to-image generation operates with fewer restrictions, allowing users to create images depicting celebrities and copyrighted material without adequate checks.

As of now, Google has not made any official statement on the criticism. However, the lack of stringent controls, combined with the tool’s ability to erase watermarks without permission, could expose the company to legal action from copyright holders.

It’s important to note that Gemini 2.0 Flash’s image generation capabilities have been broadened for experimental use and are now accessible via Google AI Studio. Despite this expansion, Google has clarified that the feature is not intended for production environments.

In a related context, Google DeepMind launched SynthID Text in 2024, a tool designed for watermarking AI-generated text. Additionally, Google Photos plans to adopt SynthID to label images altered by AI, thereby enhancing the verification of their authenticity.

Furthermore, Google is part of the Coalition for Content Provenance and Authenticity (C2PA), an alliance that includes major players like Amazon, Microsoft, OpenAI, and Adobe. This coalition has developed technical standards aimed at tracking the provenance of AI-generated visuals by embedding metadata that records details like creation dates and the AI tools utilized.
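The provenance metadata C2PA describes can be pictured as a structured record attached to an asset. The Python sketch below builds a heavily simplified, illustrative manifest: the `build_provenance_manifest` helper and the tool name are hypothetical, the field names only loosely follow the public C2PA specification, and real manifests are cryptographically signed and embedded in the image file itself rather than kept as plain JSON.

```python
import json
from datetime import datetime, timezone

def build_provenance_manifest(tool_name: str, tool_version: str) -> dict:
    """Build a simplified, illustrative provenance record in the spirit of a
    C2PA manifest. This sketch only shows the kind of information such a
    record carries: which tool produced the asset and when."""
    return {
        # Identifies the software that produced the claim.
        "claim_generator": f"{tool_name}/{tool_version}",
        "assertions": [
            {
                # C2PA groups edit history under an "actions" assertion.
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "when": datetime.now(timezone.utc).isoformat(),
                            "softwareAgent": tool_name,
                        }
                    ]
                },
            }
        ],
    }

# Example: record that a (hypothetical) generator created an image.
manifest = build_provenance_manifest("ExampleImageGenerator", "1.0")
print(json.dumps(manifest, indent=2))
```

A verifier reading such a record could surface the creation date and generating tool to end users, which is the provenance-tracking goal the coalition’s standards aim at.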

Despite these constructive steps, the industry continues to grapple with significant hurdles in achieving broad adoption and consistent integration of these standards across platforms and applications.
