How far is too far when defending intellectual property?
OpenAI’s latest privacy change should have been good news: the company no longer has to save deleted chats for most users, a relief after months of court-mandated data preservation. But the celebration stops there. Hidden behind this victory is a privacy nightmare that most ChatGPT users have no idea they’re part of.
The reason? The New York Times.
The Times sued OpenAI and Microsoft in late 2023, claiming that millions of its articles were used without permission to train AI models. On the surface, the case is about copyright protection and the survival of journalism in an AI-driven world. But in its pursuit of justice (and profits), the Times has taken a detour straight through the privacy of millions of ordinary people.
What the lawsuit is really about
The NYT accuses OpenAI of using its journalism without consent. That much is true. ChatGPT was trained on vast amounts of data scraped from the web, which likely included paywalled and otherwise copyrighted material. The Times argues this practice damages its business because ChatGPT can now reproduce its reporting without a subscription or credit.
That claim deserves to be heard. Copyright law hasn’t caught up with AI, and media outlets are right to ask how their work is being used. But the Times’ approach to proving its case crosses a line.
In May 2025, a federal court ordered OpenAI to preserve every ChatGPT conversation as potential evidence, deleted ones included. The order applied to Free, Plus, Pro, and Team accounts alike. Even users who never mentioned the Times had their data quietly flagged and stored under a “legal hold.” Those conversations, deleted or not, were now potential exhibits in a courtroom battle they never agreed to join.
OpenAI complied because it had to. The court left them no choice.
The hypocrisy hiding in plain sight
Here’s the part that doesn’t sit right. The Times says it wants to protect its content from being used without permission. Fair enough. But it’s doing so by demanding access to private data from users who had nothing to do with its lawsuit.
If OpenAI profiting from the Times’ journalism is exploitation, then what do you call the Times’ request for millions of private user chats? It’s the same logic turned inside out: using someone else’s words, this time for financial leverage.
According to internal summaries, the Times originally requested access to a far larger set of user data than the court ultimately allowed. OpenAI ended up turning over roughly 10 million chats, still an enormous amount of private material, but less than what the Times wanted. None of the affected users were notified. Their deleted conversations were simply preserved, waiting in legal limbo for lawyers to decide what happens next.
That’s the real problem.
Privacy isn’t a conditional right
Deleted chats aren’t really deleted if they can be resurrected in court. Most people assume that when they hit “delete,” their words are gone. That expectation of privacy is fundamental to trust in any digital platform.
By sweeping ordinary users into a corporate tug-of-war, the system has failed that trust. The people whose chats are under legal hold don’t even know it. They can’t opt out. They can’t request deletion. They can’t do anything except continue using ChatGPT, unaware that some of their most personal or creative conversations might still be sitting on a server marked “evidence.”
OpenAI says those retained logs are secure and accessible only through legal channels. Maybe so. But that doesn’t erase the ethical problem; it only hides it behind technical language.
The Times could have built its case using web-server logs showing which bots scraped its site. If OpenAI really used its content, there should be traces in those records. That’s how most digital copyright cases are handled: prove unauthorized access, rather than rifling through unrelated users’ private data in search of evidence.
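To make that concrete, here is a minimal sketch of what log-based evidence gathering could look like. It assumes access logs in the standard Apache/Nginx “combined” format, and it matches on crawler user-agent strings that the operators themselves publish (such as OpenAI’s GPTBot and Common Crawl’s CCBot); the file path and the exact list of signatures are illustrative, and real litigation discovery would obviously involve far more than a script like this.

```python
import re
from collections import Counter

# Publicly documented crawler user-agent substrings (illustrative list).
AI_CRAWLER_SIGNATURES = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "CCBot"]

# Apache/Nginx "combined" log format: the user agent is the last quoted field.
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"$'
)

def count_crawler_hits(log_path: str) -> Counter:
    """Tally requests per AI-crawler signature found in a combined-format access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = LOG_PATTERN.match(line.strip())
            if not match:
                continue
            agent = match.group("agent")
            for signature in AI_CRAWLER_SIGNATURES:
                if signature in agent:
                    hits[signature] += 1
    return hits

if __name__ == "__main__":
    # "access.log" is a placeholder path; point it at a real server log.
    for bot, count in count_crawler_hits("access.log").most_common():
        print(f"{bot}: {count} requests")
```

The point isn’t that a publisher’s case should hinge on a few dozen lines of Python; it’s that evidence of scraping lives on the publisher’s own servers, not in strangers’ chat histories.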
A two-sided contradiction
Let’s be clear: neither side walks away looking virtuous.
OpenAI built models that used publicly available (and probably copyrighted) material without asking first. The Times responded by prying into user data to prove it. Both moves undermine trust, just in different directions.
This isn’t just a fight about data. It’s a fight about who gets to decide what privacy means. The Times believes protecting journalism justifies extreme discovery measures. OpenAI believes protecting its technology requires retaining data. Meanwhile, the public, whose conversations are trapped in the middle, has no say at all.
The bigger picture
This case is a glimpse into the future of every AI-driven industry. When technology evolves faster than law, the casualties are often invisible. We talk about ethics, copyright, and transparency as if they exist in neat categories, but they don’t. They overlap. They clash.
The NYT vs. OpenAI case isn’t just about journalism or AI. It’s about the price we’re willing to pay for progress. If protecting one person’s intellectual property requires invading another’s privacy, we haven’t advanced. We’ve just traded one violation for another.
There’s a lesson here for every tech company and publisher: the right to protect your work ends where another person’s privacy begins.
If you were feeling sorry for the New York Times, think again. This lawsuit may have started as a defense of journalism, but it has become a warning about power, and about how easily the powerful justify invading privacy when money is at stake.
OpenAI deserves scrutiny for how it trained its models, but users deserve protection too. Deleting a chat should mean it’s gone. Period.
In the race to control the future of AI, both sides have forgotten something simple: privacy is not a privilege; it’s a boundary. When boundaries disappear, so does trust.