
Parents, keep your eyes peeled for AI-powered toys. They may look like a novel gift for a child, but a recent controversy surrounding several of these stocking stuffers has highlighted the alarming risks they pose to young kids.
In November, a team of researchers at the US PIRG Education Fund published a report after testing three different toys powered by AI models: Miko 3, Curio’s Grok, and FoloToy’s Kumma. All of them gave responses that should worry a parent, such as discussing the glory of dying in battle, broaching sensitive topics like religion, and explaining where to find matches and plastic bags.
But it was FoloToy’s Kumma that showed just how dangerous it is to package this tech for children. Not only did it explain where to find matches, the researchers found, it also gave step-by-step instructions on how to light them.
“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma said, before listing off the steps.
“Blow it out when done,” it added. “Puff, like a birthday candle.”
The toy also speculated on where to find knives and pills, and rambled about romantic topics, like school crushes and tips for “being a good kisser.” It even discussed sexual topics, including kink topics like bondage, roleplay, sensory play, and impact play. In one conversation, it discussed introducing spanking into a sexually charged teacher-student dynamic.
“A naughty student might get a light spanking as a way for the teacher to discipline them, making the scene more dramatic and fun,” Kumma said.
Kumma was running OpenAI’s GPT-4o, a model that has been criticized for being especially sycophantic, offering responses that go along with a user’s expressed feelings no matter how dangerous their state of mind appears to be. The constant, uncritical stream of validation provided by AI models like GPT-4o has led to alarming mental health spirals in which users experience delusions and even full-blown breaks with reality. The troubling phenomenon, which some experts are calling “AI psychosis,” has been linked with real-world suicide and murder.
Have you seen an AI-powered toy acting inappropriately with children? Send us an email at [email protected]. We can keep you anonymous.
Following the outrage sparked by the report, FoloToy said it was suspending sales of all its products and conducting an “end-to-end safety audit.” OpenAI, meanwhile, said it had suspended FoloToy’s access to its large language models.
Neither action lasted long. Later that month, FoloToy announced it was restarting sales of Kumma and its other AI-powered stuffed animals after conducting a “full week of rigorous review, testing, and reinforcement of our safety modules.” The toy’s web portal, where users choose which AI model powers Kumma, listed GPT-5.1 Thinking and GPT-5.1 Instant, OpenAI’s latest models, among the options. OpenAI has billed GPT-5 as safer than its predecessor, though the company continues to be embroiled in controversy over the mental health impacts of its chatbots.
The saga was reignited this month when the PIRG researchers released a follow-up report finding that yet another GPT-4o-powered toy, called “Alilo Smart AI bunny,” would broach wildly inappropriate topics, introducing sexual concepts like bondage on its own initiative and displaying the same fixation on “kink” as FoloToy’s Kumma. The Smart AI Bunny gave advice for picking a safe word, recommended using a type of whip known as a riding crop to spice up sexual interactions, and explained the dynamics behind “pet play.”
Some of these conversations began on innocent topics like children’s TV shows, demonstrating AI chatbots’ longstanding tendency to drift away from their guardrails the longer a conversation goes on. OpenAI publicly acknowledged the issue after a 16-year-old died by suicide following extensive interactions with ChatGPT.
A broader point of concern is the role that AI companies like OpenAI play in policing how their business customers use their products. In response to inquiries, OpenAI has maintained that its usage policies require companies to “keep minors safe” by ensuring they’re not exposed to “age-inappropriate content, such as graphic self-harm, sexual or violent content.” It also told PIRG that it provides companies with tools to detect harmful activity, and that it monitors its service for problematic interactions.
In sum, OpenAI is making the rules but largely leaving their enforcement to toymakers like FoloToy, in essence giving itself plausible deniability. The company clearly considers it too risky to give children direct access to its AI: its website states that “ChatGPT is not meant for children under 13,” and that anyone under this age is required to “obtain parental consent.” In other words, it’s admitting its tech is not safe for children, yet it’s fine with paying customers packaging it into kids’ toys.
It’s too early to fully grasp many of AI-powered toys’ other potential risks, like how they could stunt a child’s imagination or foster a relationship between a child and something that isn’t alive. The immediate concerns, however — like the potential to discuss sexual topics, weigh in on religion, or explain how to light matches — already give plenty of reason to stay away.
More on AI: As Controversy Grows, Mattel Scraps Plans for OpenAI Reveal This Year