Larian Studios just did something most game companies won’t. They told us exactly how they’re using AI, what worked, what didn’t, and what they’re banning going forward. The internet, predictably, lost its mind. But the backlash reveals more about how badly we’ve implemented AI than it does about Larian’s creative integrity.
What Happened with Larian?
In a recent AMA, Larian Studios walked back plans to use generative AI for creative work on their next Divinity game. This came after a Bloomberg interview sparked controversy when studio head Swen Vincke mentioned using AI tools for concept art and placeholder text. The response was swift and brutal enough that Larian felt compelled to clarify: no generative AI for concept art or writing on Divinity. Period.
Vincke was direct about why. They want “no discussion” about the origin of their art. He acknowledged what everyone already knows. Many AI image models are trained on scraped artwork without consent. That could “poison” their pipeline, both ethically and legally. The art director confirmed that earlier AI use was limited to mood boards, doodles, and rough text-image combinations. Nothing shipped. Even that’s getting dropped entirely for art.
On the writing side, narrative director Adam Smith described a limited test where a few writers used generative tools for research placeholders. The outputs? Rated about “three out of ten.” Worse than even his worst first drafts. The team found it unproductive for game writing or design, which makes perfect sense. Branching narratives shaped by player choices don’t play nice with generic AI text.
The Part Everyone’s Missing
Here’s where it gets interesting. Larian still employs a machine learning director. They’re still using AI. Just not the way people think.
They’re automating cleanup of mocap animation data. They’re processing large volumes of voice lines to reduce rote grunt work so artists can focus on creative tasks. They built an internal system using Larian’s own mocap and audio to generate base movements for lines without mocap, giving animators a starting point to refine. This isn’t voice cloning (that’s contractually forbidden and kinda sketchy). They are using AI to be more efficient.
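To make “cleanup of mocap animation data” concrete: raw capture curves are noisy, and a cleanup pass typically smooths jitter out of per-joint motion so animators start from cleaner curves. The toy smoother below is purely illustrative; Larian’s actual ML-based tooling is internal and undisclosed, and the function name and window parameter here are my own invention.

```python
# Illustrative only: a toy denoising pass of the kind a mocap cleanup
# pipeline might apply to a jittery joint-position curve. This is NOT
# Larian's tooling, just a sketch of the general idea.

def smooth_curve(samples: list[float], window: int = 5) -> list[float]:
    """Centered moving average; the window shrinks near the edges."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)          # clamp window to the curve start
        hi = min(len(samples), i + half + 1)  # clamp to the curve end
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# A single-frame spike (capture glitch) gets spread out and damped.
noisy = [0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0]
print(smooth_curve(noisy, window=3))
```

Real pipelines use far more sophisticated filters and learned models, but the shape of the work is the same: turn noisy captured signal into a clean starting point a human then refines.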
Vincke made a promise: if any generated creative assets ever ship in a Larian game, the model will be trained on data they own, with consent from creators. Ethically sourced, internally trained models only.
That’s the loophole, and it’s a smart one. Most current “open” image models are trained on broad scraped data. Truly clean, ethically sourced foundation models for game art don’t really exist yet. But they will. And when they do, Larian’s left the door open.
Why People Are Actually Mad
The backlash wasn’t really about Larian. It was about everything else.
People are terrified of AI because of how badly we’ve implemented it in the US.
- We didn’t upgrade our electrical grid to support increased power demands.
- We didn’t launch programs to retrain workers so job loss wouldn’t devastate entire industries.
- We have virtually no laws protecting citizens from AI misuse.
Meanwhile, companies are racing to cut costs and replace humans with tools that aren’t ready yet.
Compare that to Shenzhen, China.
- They upgraded their electrical infrastructure before rolling out AI at scale.
- They created education programs specifically so job displacement wouldn’t hit as hard.
- They have laws to protect citizens from corporate overreach with AI.
- They’re even making their AI more efficient to reduce power consumption.
The technology isn’t the problem. The implementation is a disaster.
So when Larian, a studio known for releasing Baldur’s Gate 3, one of the best games in recent memory, mentions using AI, people panic. They’re not reacting to Larian. They’re reacting to every tech company that’s used “AI” as an excuse to replace writers with ChatGPT, artists with Midjourney, and customer service reps with barely functional chatbots.
The Irony No One Wants to Acknowledge
If Larian invested in building their own narrow AI models, trained on their own data, designed for specific tasks, and released an incredible game? That would be a game changer.
Larian has a history of releasing great games that are fun. That’s the baseline. Assuming a company known for quality would suddenly stop caring about quality because they’re experimenting with new tools is an immature reaction.
And let’s be clear: tools are made to help people do their work more efficiently. AI use does not automatically equal “AI slop.” That narrative needs to stop, especially when we can see other cities and countries using AI wisely. The fact that we’ve botched it in the US doesn’t mean the technology itself is broken.
What Larian is Doing Right
Instead of hiding their AI experiments like some publishers, Larian chose transparency. That damaged their public image in the short term, sure. It also created a clear benchmark. By banning generative AI for art, writing, and design on Divinity, and tying any future use to strict ethical sourcing, they’ve given players something to hold them accountable to.
The AMA also pushed back on the idea that Larian is “pushing hard” on generative AI. Team members confirmed there’s no mandate to use AI. Staff have autonomy over their tools. This contradicts the harsh interpretation from earlier reporting that made it sound like Larian was forcing AI down everyone’s throats.
Vincke doesn’t want to “write off” the technology forever. He’s arguing for cautious experimentation in areas that might reduce waste and improve quality, as long as anything player-facing is ethically sourced and meets their declared standards.
Competitors are using AI-assisted tools during game development. Refusing to even look at these options can put teams at a disadvantage.
It would be negligent to ignore AI entirely. It would also be wrong to chase every efficiency gain without strong ethical and creative boundaries.
The responsible approach, the one Larian’s taking, is to proactively test and deploy AI where it enhances human content without compromising ethics.
What We Should Demand Instead
As citizens, we should be demanding more.
- We should be pushing the government to protect people from corporate overreach.
- We need to vote for politicians who are younger and familiar with modern technology.
- We should impose term limits so that a new generation of representatives, with the wisdom and knowledge to regulate emerging technology, can take office.
The anger people feel about AI isn’t wrong. It’s just misdirected. Larian isn’t the enemy. Unregulated corporate implementation is.
There is nothing wrong with Larian experimenting with AI. In fact, I’m curious what they would have done with it if the backlash hadn’t been so intense. If anyone’s going to figure out how to use these tools ethically and effectively, it’s probably the studio that spent years perfecting Baldur’s Gate 3 instead of rushing it out the door.
AI isn’t going anywhere. Are we going to learn how to use it to improve our infrastructure, education, and healthcare? Or will we continue to let corporations treat AI like a cost-cutting sledgehammer while we yell at the studios trying to be transparent about their experiments?
I know which one I’d rather see.