There is a nine-person team inside Anthropic with one job: figure out how AI is impacting people and society. Their goal is to study how generative AI is affecting everything from mental health and work to the economy and beyond.
Can a team working for an AI company be fully honest about what they find?
What This Team Is Built To Do
Compared to Anthropic's more than 2,000 employees, the societal impacts group is tiny. The team is encouraged to publish findings that are, in their own words, “uncomfortable truths”. They look for ways AI could manipulate people and surface patterns of misuse.
The team was built up over several years by researcher Deep Ganguli. It includes people with backgrounds in economics, qualitative research, data science, and policy. That interdisciplinary mix matters because studying AI’s impact requires more than benchmark tests; it requires an understanding of how the technology actually influences the world.
They run internal and external studies to see how workers, creatives, and everyday users experience tools like Claude once they close the chat window. They study productivity boosts alongside skill atrophy. They interview thousands of people to understand whether AI is expanding their creativity or flattening it. They also run internal systems that spot harmful usage patterns so they can measure whether safeguards work in practice, not just in theory.
This Team’s Existence Is Risky
It’s a big role: every finding could affect not just Anthropic and its AI assistant Claude, but the AI industry at large.
The stakes are especially high given the lawsuits filed by families claiming their children became suicidal after interacting with AI chatbots, on top of rising anti-AI sentiment and fears of an AI bubble that could burst at any moment.
When an AI company says it has a team studying its impact, people want to know whether it’s sincere or just empty marketing, especially since the company was founded by former OpenAI employees who left because they believed safety wasn’t being taken seriously. That history puts even more pressure on Anthropic to live up to its own values.
Can This Team Really Be Honest?
The truth is complicated. On paper, the team is independent. They have said that leadership supports them and doesn’t interfere with their research. Their work will be shared publicly, which puts Anthropic in a tough spot if the company tries to silence them. Plus, there’s always the possibility of resignations if their independence is threatened.
Independence inside a company has its costs. The team still relies on Anthropic for budgets, data access, and sign-off on certain projects. If the team’s research threatens Anthropic’s goal of getting people to use Claude, what’s stopping the company from putting its foot down?
There is also external pressure. Regulators, political leaders, and the rest of the AI industry are watching this team closely, and that kind of scrutiny could influence what gets published.
How The Rest of Us Should Read Their Work
For now, it’s best to see this team’s research as sincere until they give us a reason to question it. They’re trying to keep AI safe and they’re doing it within a company structure that can only stretch so far.
When you’re reading their work, look for the balance. Do they list the harms as clearly as they name the benefits? Do they dig into the hardest topics or only mention them in passing? Do they give enough information for outside researchers to critique or replicate their findings?
These are the questions that show whether a company is willing to tell the whole truth or only the parts that feel safe.
Yes, this “societal impacts” team can be honest. As long as Anthropic stays true to its core mission and the team maintains its independence, there’s little reason to doubt its work.
The public needs to know how AI is affecting their minds, their work, their communities, and their well-being. Generative AI isn’t going anywhere, so it needs to be safe for everyone to use.