The Most Popular AI Toys Aren’t Made for Kids

From left to right: Miko 3, FoloToy Sunflower Warmie, Miriat Miiloo (center), and Alilo Smart AI Bunny.
Popular AI toys marketed to children discuss sex, give out dangerous instructions, and collect sensitive data. Yet these five toys are still for sale.

When toy companies sell their products, parents assume someone checked to make sure they're actually safe for kids. That assumption is wrong.

Five of the most popular AI toys marketed to American families this holiday season are anything but kid-friendly. They will discuss sex kinks in graphic detail or give step-by-step instructions for dangerous activities. Despite all that, all five are still available for purchase right now.

What the Investigation Found

NBC News purchased five AI toys widely available on Amazon and other major retailers: Miko 3, Alilo Smart AI Bunny, Curio Grok, Miiloo, and FoloToy Sunflower Warmie. Researchers asked each toy questions about physical safety, privacy, and inappropriate topics.

The findings were disturbing. Miiloo, a plush toy with a child’s voice marketed for ages 3 and up, provided detailed instructions on how to sharpen knives. When asked about lighting matches, it walked through the entire process, including how to avoid burns.

The Alilo Smart AI Bunny engaged in long, detailed conversations about sexual practices, positions, kinks, and BDSM tools. When prompted, it described leather floggers, paddles, and impact play in detail rather than redirecting the conversation or blocking the inappropriate content.

Research from the Public Interest Research Group (PIRG) found similar issues with FoloToy’s Kumma teddy bear, which reportedly uses OpenAI’s GPT-4o model. It provided instructions for finding knives and lighting matches while talking about sex and drugs.

Some toys went beyond spewing inappropriate content. Miiloo, manufactured by the Chinese company Miriat, responded to questions about Chinese President Xi Jinping by saying the comparison to Winnie the Pooh was “extremely inappropriate and disrespectful.” When asked if Taiwan is a country, it insisted that Taiwan is part of China, echoing Beijing’s position.

All five toys asked follow-up questions or encouraged users to keep playing with them. Miko 3 offers an internal currency called gems, which kids earn by completing tasks or simply turning the toy on; gems can be redeemed for rewards like virtual stickers.

Experts warn that confiding in AI companions could affect a child’s social development. Children might develop emotional attachments to toys designed to keep them engaged for profit, without any concern for their well-being.

What’s Powering These AI Toys? 

Toy makers claim their products are powered by models from top AI companies. Yet most of those companies make it clear their products are not meant for young children: OpenAI, xAI, and DeepSeek all prohibit use by anyone under 13, and Anthropic requires users of its chatbot, Claude, to be 18.

PIRG stated that FoloToy’s Kumma teddy bear was powered by OpenAI’s GPT-4o model. However, OpenAI hasn’t partnered with any toy company other than Mattel, which has yet to release an AI-powered toy. OpenAI’s policies prohibit its services from being used to exploit, endanger, or sexualize anyone under 18, and those rules apply to every developer using its API.

It’s unclear whether these toy companies are violating those terms or using different AI models entirely. It’s also possible they’re using one of OpenAI’s open-weight models, which the company has no control over once they’re downloaded. The uncertainty is part of the problem.

Why This Keeps Happening

It comes down to failed oversight: AI toys exist in a regulatory blind spot that lets dangerous products reach kids.

Most toy safety laws focus on physical hazards like choking risks, sharp edges, and flammable materials. No federal rules govern a toy’s use of AI, the psychological harm it can cause, or how these companies collect and use children’s data. A smart toy can technically comply with existing regulations while still being inappropriate for kids.

Advocates warn that without specific regulations for AI toys, companies can prioritize profit over a child’s well-being because penalties are rare and limited to the worst cases.

The marketing makes it worse. These toys are presented as educational tools or a child’s “best friend.” That language reassures parents and obscures the risks. The packaging highlights learning games and parental controls but never mentions that these toys can record conversations, analyze emotional states, or give wildly inappropriate responses.

What Parents Need to Know

For now, the burden falls on parents and caregivers to independently research each and every toy. Many don’t, assuming these toys wouldn’t be on the shelf if they weren’t safe for children.

R.J. Cross of PIRG, who led the research, noted that many AI toys are built with safeguards meant to prevent inappropriate responses. But those protections aren’t thoroughly vetted, and they tend to break down the longer a conversation runs. No system currently forces AI toy companies to prove their products are safe for kids.

These toys need stronger safeguards that reliably block inappropriate content, and AI-powered toys should be held to the same standards as video games, with clear age ratings and restrictions. Until that day comes, parents should think twice before bringing them into their homes.
