It’s long overdue for laws to catch up with technology. But AI hasn’t just outpaced legislation. It’s exposing the deep cracks in how we even think about copyright, creativity, and consent in a machine-learning era.
Two recent cases show where those cracks are starting to split wide open. The Anthropic lawsuit focuses on training language models with copyrighted text. The Disney case dives headfirst into the more visually explosive problem: what happens when AIs are trained on copyrighted images and can reproduce characters like Mickey Mouse or Yoda on demand.
The big question both cases raise is the same:
When does “learning” cross the line into infringement, and who decides?
The Anthropic Ruling: A Judge Can’t Change the Law Even If It’s Outdated
Let’s start with the Anthropic case. A federal judge ruled that using legally acquired books to train AI falls under fair use. Pirated books? Not so much.
This ruling makes sense under current law. AI models don’t typically reproduce entire books. They use them to “learn” language patterns, not copy and paste pages. That’s where Judge Alsup drew the line: if you legally bought the books, using them to teach an AI how to generate new language is considered transformative, a key fair use principle.
That distinction matters. Anthropic had also used pirated books during training, arguing that the innovation justified the means. The judge pushed back: piracy is not excused by a cool final product. Legally acquired books? Likely fine. Pirated ones? Still theft, no matter the outcome.
So yes, Anthropic should have just bought the books. But more importantly, this case draws a clear legal boundary for text-based models: fair use covers training, not copying, and legality starts with how you got the data.
The Disney Lawsuit: AI Art Is a Copyright Minefield
The Disney/Universal lawsuit against Midjourney is a much messier problem.
Unlike with books, image-based AIs can and do output visual content that closely resembles copyrighted characters. The whole purpose of training on art is to understand visual style and form. That training can also lead to near-identical copies of protected works. And when it comes to iconic IPs like Darth Vader, Grogu, or Minions, even inspiration can look like infringement.
It’s not just about what the model was trained on. It’s about what it can spit back out.
And here’s the paradox: To avoid infringing on Mickey Mouse, the AI must know what Mickey Mouse looks like. But to know that, it has to be trained on images of Mickey Mouse.
That’s the trap.
Now imagine the scale of the problem: every game character, every animated movie, every brand mascot. It’s impossible for a company to manually block them all. That’s why Disney suing Midjourney makes sense. They’re saying: we shouldn’t have to chase your model’s outputs. You shouldn’t have trained on our IP to begin with.
The “Grogu Problem”: How Close Is Too Close?
Let’s say I ask Midjourney to draw Grogu (Baby Yoda) in a rainbow pantsuit, wearing a clover-stuffed hat, riding a menacing purple dragon. That’s obviously not a scene from The Mandalorian. Is it transformative enough to be legal?
Maybe. But if the end result looks like Grogu, then the AI has reproduced a copyrighted character, even if the context is absurd. One could argue it doesn’t compete with Disney, because Disney would never sell such an image. But that’s not how copyright works. The test isn’t just economic harm. It’s also recognizability: in legal terms, substantial similarity.
If you can trick the AI by saying “a small green creature with pointy ears,” and it still outputs Grogu? Now we’re in deeper water.
Worse, think of all the prompt engineering people would do trying to get Grogu without saying Grogu. The resources wasted, not just on prompt attempts but on legal teams scrambling to decide whether a generation crossed the line, are absurd.
This isn’t just bad IP control. It’s a content moderation nightmare on autopilot.
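To see why the moderation problem scales so badly, here’s a minimal sketch of the obvious first defense: a keyword blocklist on prompts. The terms and example prompts are hypothetical; the point is the failure mode, not the filter.

```python
# Hypothetical blocklist: the naive first line of defense.
BLOCKED_TERMS = {"grogu", "baby yoda", "mickey mouse", "darth vader", "minion"}

def prompt_allowed(prompt: str) -> bool:
    """Reject prompts that name a protected character outright."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(prompt_allowed("Grogu in a rainbow pantsuit"))              # False: caught
print(prompt_allowed("a small green creature with pointy ears"))  # True: slips through
```

The filter stops anyone who types the name. It does nothing about the description, because the knowledge of what Grogu looks like lives in the model’s weights, not in the prompt.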
We’ve Reached the Point Where Confusion Is the Business Model
The frustrating part is that none of this is new. The potential for abuse was obvious from the beginning. AI companies knew this would happen and trained on copyrighted works anyway, betting that legal ambiguity would protect them long enough to dominate the market.
In many ways, they were right. The law is still unsettled. Fair use is a defense, not a permission slip. No one knows exactly where the line is between “inspired by” and “infringes on.”
As courts start handing down rulings, like Alsup’s in the Anthropic case, those lines are finally being drawn. The more AI companies rely on scraping copyrighted material without permission, the more likely they are to end up like Midjourney: sued by giants who have the money and patience to fight it out.
The Real Issue Isn’t AI. It’s Accountability
The takeaway here isn’t that AI is evil or that training on copyrighted works should never happen. It’s that we built a legal system assuming humans would make the creative decisions. Now that machines are involved, the safeguards have to evolve too.
We’re facing a future where AIs are expected to know copyrighted content, but not repeat it. To learn styles, but not replicate them. To transform, but not too much. And right now, there’s no reliable way to do that.
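One mitigation that gets discussed is to check outputs instead of prompts: embed every generated image and compare it against reference embeddings of protected characters. The sketch below is hypothetical; the encoder, the reference set, and the threshold are all invented for illustration. Note the circularity the paradox above describes: building the reference set means collecting exactly the IP you’re trying not to reproduce.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_too_close(output_vec: np.ndarray,
                    protected_vecs: list[np.ndarray],
                    threshold: float = 0.9) -> bool:
    """Flag a generated image whose embedding lands too near known IP.

    protected_vecs holds reference embeddings of protected characters,
    produced by the same (hypothetical) image encoder as output_vec.
    """
    return any(cosine_sim(output_vec, ref) >= threshold
               for ref in protected_vecs)
```

And that threshold parameter is the whole problem in one number: nobody, including the courts, can yet say how close is too close.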
The only thing worse than not knowing what’s legal is assuming everything is.
The Anthropic ruling gives the AI industry a partial roadmap: if you want to train on copyrighted material, acquire it legally and make sure your outputs don’t copy. But the Disney case shows just how hard that is in practice. Especially when the AI’s job is to generate images that resemble what it’s seen.
Both lawsuits are a warning: AI is no longer a sandbox. It’s the courtroom. Ignorance isn’t a defense.