The Federal Trade Commission has ordered seven AI companies to explain how their chatbots affect kids and teens. Alphabet, Meta and Instagram, Snap, xAI, OpenAI, and Character.AI have 45 days to hand over details on how their AI companions make money and how they plan to maintain their user bases. More importantly, they must share information on how they try to prevent harm to their users.
The order is part of a study rather than an enforcement action, but it is a promising sign. At least I hope it is. For too long, Silicon Valley has launched products first and thought about safety only after disaster strikes.
Tragedies and Warnings
There's a growing number of stories in which teens have died by suicide after they started interacting with AI chatbots. The parents of a 16-year-old in California accused ChatGPT of providing advice that helped their son plan his suicide. Last year, a 14-year-old in Florida took his own life after interacting with a virtual companion from Character.AI. There are also studies showing how chatbots can encourage unhealthy emotional attachments because of the human-like way they communicate.
How AI Companies Profit From Companionship
Part of the investigation focuses on how AI companions monetize user engagement. These bots don’t just answer questions. They’re designed to keep users hooked, to the point where some people have become emotionally dependent on their companions.
Kids often reveal personal information about themselves, which these companies can monetize. The FTC is asking the companies to explain how they balance those incentives against the responsibility to protect kids. It's a question many in Silicon Valley don't want to answer.
What Can Lawmakers Do?
The reality is that regulators can't simply ban minors from using AI companions. Kids will always find a way to access the things adults deem bad for them. Keep in mind that the average kid is more tech-savvy than the people writing the laws.
Lawmakers have been crafting new policies to protect kids and teens from the downsides of engaging with AI. California's state assembly passed a bill that would hold AI companies liable for harm their companions cause users.
The FTC does have some teeth. Under Section 5 of the FTC Act, it can act against “unfair or deceptive acts or practices,” which means it can force companies to implement safeguards if they fail to live up to their promises about safety or privacy.
That said, real change may require Congress to step in. Lawmakers on both sides of the aisle need to close the legal loopholes that shield tech companies from liability. Otherwise, AI firms will keep treating harm as a cost of doing business rather than something to prevent.
AI companions aren't going away, and parental controls or opt-in warnings aren't enough to make these chatbots safe for kids. That requires accountability built into the technology itself. If the FTC can establish that precedent, it could change how Silicon Valley designs its virtual companions. If not, the risks will only grow.