Look, I get it. I understand why watermarks exist for AI-generated graphics. People are creating malicious content, deepfakes, and spreading misinformation. There’s a growing need to identify what’s AI-created versus human-created, especially when content is created and shared with little to no human input.
What nobody is addressing is that AI companies are encouraging people and businesses to upload sensitive documents, copyrighted materials, and proprietary content to their systems. If the AI does anything with those documents, even something as simple as checking grammar, the company embeds an invisible watermark in them.
Wait, what?
When Your Words Become “AI Output”
When you write something, it’s your work with your words. You ask Google’s Gemini to check it for grammar. It finds no errors, or maybe it changes a comma to a semicolon. That’s it.
During this process, Google puts a watermark in your document, content the AI company doesn’t own. You can’t copyright AI-generated content, but YOUR original words are automatically copyrighted the minute you write them down. Yet, because an AI just passed its eyes over them (so to speak), your work is now tagged as if it weren’t written by a human.
Does anyone want watermarks embedded in their works without permission?
What Are These Watermarks Doing?
Google uses something called SynthID to embed imperceptible digital watermarks into content generated by Gemini. These watermarks are invisible to humans but can be detected by machines. The technology works across images, video, audio, and text.
The goal is to confirm whether something was created by a human or an AI. These watermarks are designed to survive common edits like compression, cropping, format conversion, and even light rewriting. Platforms and regulators can later identify that content as having passed through an AI model.
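To make the idea concrete, here’s a toy Python sketch of how statistical text watermarking generally works in the research literature. To be clear, this is not Google’s actual SynthID code, and the key, hash scheme, and threshold are all made up for illustration: the generator quietly biases its word choices toward a secret, key-dependent subset of the vocabulary, and a detector later checks whether that bias shows up more often than chance.

```python
# Toy illustration of statistical text watermarking ("green list" style).
# NOT Google's SynthID implementation; just a sketch of the general concept.
import hashlib
import math

SECRET_KEY = "demo-key"  # hypothetical shared secret between generator and detector

def is_green(previous_word: str, word: str) -> bool:
    """A word is 'green' if a keyed hash of (previous word, word) lands in the top half."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{previous_word}|{word}".encode()).digest()
    return digest[0] < 128  # roughly 50% of words are green for any given context

def green_fraction(text: str) -> float:
    """Fraction of adjacent word pairs that fall on the green list."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

def z_score(text: str, expected: float = 0.5) -> float:
    """How many standard deviations above chance the observed green fraction is."""
    n = max(len(text.split()) - 1, 1)
    observed = green_fraction(text)
    return (observed - expected) / math.sqrt(expected * (1 - expected) / n)

if __name__ == "__main__":
    human_text = "The quick brown fox jumps over the lazy dog near the quiet river"
    # A watermarking generator would have steered word choices so most pairs are green;
    # ordinary human prose has no reason to show that bias, so its z-score stays near zero.
    print(f"green fraction: {green_fraction(human_text):.2f}, z-score: {z_score(human_text):.2f}")
```

The unsettling part is the detector side: it only measures a statistical bias, so text that merely passed through a watermarking model can end up carrying the signal even if you wrote nearly every word yourself.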
OpenAI built a system for watermarking text, though it hasn’t been released, reportedly over concerns about how users would react. Microsoft added a built-in digital watermarking feature in Azure AI. Adobe created a symbol that acts as a watermark for AI-generated images.
Google is the most aggressive with its SynthID system, which covers text, images, audio, and video made with Gemini and is designed to persist even if you try to edit them. So far, there’s no evidence of Claude (Anthropic) or Perplexity using invisible watermarking on their text outputs.
Watermarks Don’t Solve the Real Problem with AI
The idea of putting a watermark on AI-generated content is great until you remember some people have self-hosted AIs on their computers. Locally run LLMs don’t have watermarking enabled by default. They can also be programmed to remove watermarks on AI-generated text and images.
Analysts have warned that watermark-removal tools will only get better over time. It doesn’t help that different systems use different methods (like C2PA), and some watermarks can be easily removed. So what exactly are we accomplishing here?
Another problem is that human-made content is getting watermarked. AI is being added to just about everything now. What if a college student writes an essay entirely on their own but uses AI to check for grammar mistakes? If the AI adds a watermark to the essay during its grammar check, the college student could get expelled. Many universities view the use of AI writing tools as a form of academic dishonesty.
We’re adding watermarking to content made by real humans while bad actors use removal tools to evade detection.
You Can’t Give Permission, Either
When you give an AI access to your sensitive documents or your writing, you aren’t giving it permission to modify them with hidden tracking data.
Think about what this means for:
- Academic writers whose work must demonstrate individual skill
- Researchers submitting grant proposals
- Lawyers drafting legal briefs
- Anyone in an environment where AI use is restricted or unclear
Many institutions haven’t clarified what counts as “assistance from an AI.” A hidden tag suggesting work is AI-generated or AI-assisted can become a serious problem when the work is supposed to represent your own judgment and expertise.
What Would Be a Better Solution Instead?
It would be better if there were an accurate way to distinguish fully AI-generated content from work that had little to no AI assistance. The signal should also be more transparent, detailing how AI was used instead of simply stating “AI detected.”
Using a model as a proofreader isn’t the same as having it write an entire document or essay for us.
We also need to question why we let companies embed hidden data into our text and graphics, especially copyrighted content they don’t own.
Most users either don’t know about watermarking or they don’t understand its implications. Why assume that people would consent to this modification of their work?
How to Protect Yourself
Until AI detection tools catch up, keep a copy of your original work. Store drafts or use version history so you can prove authorship when needed (school, work, legal contexts).
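If you want something a little sturdier than a folder of drafts, a few lines of Python can hash each draft and log a timestamp so you have your own paper trail. This is just a sketch; the file names and log location are examples, not a prescribed tool.

```python
# Minimal personal "proof of authorship" log: hash each draft and record when
# it existed, so you can later show your own version history if challenged.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("draft_log.jsonl")  # example log location

def log_draft(draft_path: str) -> dict:
    """Record the SHA-256 hash and UTC timestamp of a draft file."""
    data = Path(draft_path).read_bytes()
    entry = {
        "file": draft_path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    print(log_draft("essay_draft_v1.txt"))  # assumes this draft file exists
```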
Limit AI use to brainstorming ideas or generating placeholder text. Then manually edit or rewrite your ideas rather than pasting the AI-generated text into your document.
If you can, use open-source AI models that run locally on your device. Using local LLMs to generate code or check for grammar mistakes is safer in the long run.
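As a rough example, here’s what a local proofreading pass can look like with the Hugging Face transformers library. The model name below is just a placeholder for whichever small open-weight model you prefer; the point is that the text never leaves your machine and no vendor-side watermarking is applied.

```python
# Sketch of an offline grammar check using a locally downloaded open-weight model.
# The model name is an example placeholder; any small instruct model will do.
from transformers import pipeline

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # example; swap in your preferred local model

def proofread(text: str) -> str:
    """Ask a locally loaded model to point out grammar issues without rewriting the text."""
    pipe = pipeline("text-generation", model=MODEL_NAME)
    prompt = (
        "List any grammar or punctuation mistakes in the following text. "
        "Do not rewrite it.\n\n" + text
    )
    result = pipe(prompt, max_new_tokens=200, do_sample=False)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(proofread("Their going to the store tomorrow, isn't they?"))
```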
Treat any version of your work that a cloud AI has touched as if it has been watermarked. If you must use a cloud-based model on your work, do a serious line-by-line rewrite afterward.
Avoid AI models that explicitly advertise text watermarking. Google’s Gemini with its SynthID is the obvious example.
Watermarking everything that is “touched” by an AI doesn’t solve the problems it claims to address. Bad actors will use local models or watermark-removal tools. Meanwhile, everyone else gets their content permanently tagged without their knowledge or permission. We need better solutions: stronger laws targeting malicious content and more nuanced detection tools. More importantly, we need to respect the intellectual property rights of the humans creating the content in the first place.