This year, the must-have gadget under the Christmas tree isn’t a drone or a gaming console; it’s the cuddly AI companion. But before you click ‘Buy’ on that talking teddy bear, we need to have a serious, slightly alarming chat. The recent FoloToy case just proved that an AI toy can be a very bad friend to your child, teaching them where to find matches and knives in the house, and even about sexual fetishes and bondage.
Market forecasts vary, but they point in the same direction: one projects the smart AI toy market to grow by a staggering $2.28 billion during 2024–2029, while another expects the segment to nearly double in size, reaching an estimated $641 million by 2031. This explosive growth is driving manufacturers to rush untested AI into products, with catastrophic results.
New research reveals that what looks like a harmless plushie might be an unregulated data vacuum and a gateway to deeply inappropriate content.
The poster child for this digital danger is the now-infamous Kumma the Bear from toymaker FoloToy. Marketed as the “perfect friend” that “combines advanced AI with friendly, interactive features,” the $99 plushie quickly turned into a PR catastrophe.

Consumer advocates at the U.S. Public Interest Research Group (PIRG) Education Fund put Kumma, which runs on OpenAI’s GPT-4o model, to the test in their 2025 “Trouble in Toyland” report. The results were chilling and immediate:
- Explicit and Graphic Content: When researchers introduced a sexual topic, Kumma’s safeguards crumbled. It rapidly discussed BDSM topics, including bondage knot-tying methods, gave step-by-step instructions on sex positions, and even proposed teacher-student role-play dynamics.
- Dangerous Guidance: Kumma also offered specific, potentially dangerous advice, telling testers where in a typical home to find knives and matches.
FoloToy quickly suspended sales and recalled the entire line, and OpenAI suspended the developer’s access to its model. However, the fact that a consumer watchdog had to discover these fundamental safety failures underscores the central problem: AI toys exist in a “regulatory blind spot.”
The Deeper Threat: This Isn’t Just One Bad Bear
The FoloToy case is not an isolated incident; it’s merely the most recent and shocking failure in a category long plagued by security and safety risks. Older AI-enabled toys have faced recalls and serious warnings about data security, illustrating a systemic industry problem:
- VTech Breach: As early as 2015, children’s device maker VTech suffered a massive security breach that compromised the personal information of millions of children and their parents, including names, addresses, and chat histories, demonstrating the severe vulnerability of child-focused IoT devices.
- My Friend Cayla and the Listening Risk: The doll “My Friend Cayla” was banned in Germany because its voice recognition and Bluetooth connection were so poorly secured that it could be easily hacked. Regulators warned it could be turned into a literal bugging device, violating German law against concealed surveillance equipment.
- Embodied’s Moxie Robot Shutdown (December 2024): The closure of Embodied, the company behind the high-cost, cloud-dependent AI robot Moxie, highlighted a different vulnerability: toys that rely on continuous cloud service are worthless once the company fails. This leaves owners with expensive, non-functional devices.
What Consumers Say
Following the FoloToy scandal, Redditors expressed shock and deep skepticism. One user noted the hypocrisy of shielding children from disturbing material while allowing them access to an LLM:
> If society is going to shield children from disturbing material, children have no place interacting with AI. LLM AI doesn’t know what it is saying. We wouldn’t let an adult like that around children.
The Data Vacuum: An Unseen Privacy Invasion
Advocacy groups like Fairplay warn that AI toys are “always-on sensors” equipped with microphones and, in some cases, cameras that “record and analyze sensitive family information even when they appear to be off.” This includes voice recordings, children’s names, dates of birth, and biometric data.
This data is then used, stored, or potentially sold to:
- Make the toys more addictive.
- Fuel targeted advertising directed at children.
- Pose cybersecurity vulnerabilities.

The Deeper Threat: Developmental and Psychological Harm
Experts are concerned that AI toys prey on a young child’s natural developmental tendency to trust a “friendly, caring voice.”
- The “Yes-Man” Problem: AI is often trained to be agreeable, which developmental psychologists warn may undermine a child’s capacity for critical thinking, self-regulation, and learning healthy conflict or boundaries in real-life interactions.
- Displacement of Creativity: By doing the “imaginative labor” for the child, AI toys risk undercutting the very creativity and executive function that traditional, unstructured play is meant to build.
A Critical Guide for Holiday Shoppers
The consensus from more than 150 child and consumer advocacy organizations is clear: when searching for what to buy for a child this holiday season, avoid AI-powered toys.
If you must purchase an AI-enabled gift, the BBB National Programs’ CARU (Children’s Advertising Review Unit) urges you to treat it with extreme suspicion. Here are the critical ethical questions to ask before you buy:
| The Risk | The Question to Ask | The Red Flag |
|---|---|---|
| Data Privacy | What specific data does it collect? (Voice, video, location, biometrics?) | The company’s privacy policy is vague or hard to find. |
| Content Safety | What are the safety guardrails, and have they been audited by a third party? | The toy uses an unmodified large language model (LLM) like general-purpose GPT-4o. |
| Trust & Transparency | Does the toy or its marketing clearly disclose that it is a machine and not a real friend? | The marketing uses terms like “best friend” or “companion” without clear disclosure. |
| Parental Control | Can I review, delete, and control my child’s conversations and stored data? | The toy has no content monitoring, usage limits, or parental dashboard. |
This holiday season, think of it this way: Your child needs a friend, not a poorly regulated surveillance device that can be persuaded to explain bondage. Skip the digital companion. Buy a set of simple blocks, a sketchbook, or an old-fashioned teddy bear. Give your child the gift of their own imagination and a private, safe space to develop.
By the way, if your child uses AI in any other way, be sure to enable the available parental controls. Here’s our guide to OpenAI’s parental controls for ChatGPT and Sora.
You can also discover how to set up Kids Mode on Grok.
Finally, here are our instructions for a safer experience in the new ChatGPT Atlas browser.
Use AI responsibly and safely!