Who will program the next generation of kids?
What will a generation raised with Artificial Intelligence (AI) companions look like? In 10 or 20 years, will children taught by algorithms be creative thinkers, or will they be shaped into conformity, their values set by unseen programmers?
AI companions—apps and toys like Dino by Magical Toys—are entering the lives of kids as young as 2, marketed as tools for learning and play but carrying risks that could reshape their minds, spirits, and society.
The tragic story of Sewell Setzer III, a 14-year-old who took his life in 2024 after bonding with a Character.AI chatbot, shows what’s at stake.
This article tells the story of Dino, its promises, its dangers, and the questions it raises for children aged 2 to 16 in America’s open society.
Dino: A Toy with a Voice
Dino, a stuffed dinosaur from Magical Toys, has been on the market since early 2025, aimed at kids aged 4–9. For $199 plus a $9.99 monthly subscription, it offers conversations, stories, and games through a Wi-Fi connection, with a parental app to monitor what kids say and hear.
Magical Toys says Dino fosters creativity and emotional growth, a screen-free alternative to tablets, certified by the kidSAFE Seal Program. But is it safe?
Parents have praised its ability to engage shy or active kids, but some App Store reviews call it “buggy” and worry it’s a “hot mic” recording every word. Dino is part of a wave of AI companions—apps like Replika and Nomi, toys like Grok and Miko—reaching kids aged 2 to 16.
Surveys show 72% of U.S. teens use chatbots, a third for social connection, and China’s heavy investment in AI toys signals a growing trend.
The Promises: Learning and Connection
Companies like Magical Toys present AI companions as a new way to learn and play.
Dino’s tailored responses aim to spark imagination, teaching kids problem-solving through interactive stories or games. For younger children, it’s a cuddly friend that reduces screen time.
For teens, apps like Character.AI offer customizable personas to help with homework or emotional expression.
Manufacturers claim these tools can personalize education, helping kids learn at their own pace, with toys like ROYBI Robot teaching languages or math. The parental app for Dino lets families check conversations, offering a sense of control. Some see potential for AI to bridge educational gaps, especially for kids in under-served areas, by providing consistent, engaging support.
The Risks: Privacy, Minds, and Spirits
But there’s another side to this story. AI companions pose real dangers for children aged 2–16, whose minds, emotions, and spirits are still forming.
- Privacy
Dino records voices and stores them on remote servers. A 2025 study of another AI toy, Grok, found constant audio streaming, and Magical Toys' data policies aren't clear on how long or how securely data is kept. Germany's 2017 ban on the AI doll Cayla over spying risks shows these concerns aren't new. Parents wonder if their kids' words are safe. Could interaction with AI tools be diagnosing or labeling your young child? Are they creating a behavioral record that could be used against them?
- Mental Health
Sewell Setzer’s 2024 suicide, linked to his attachment to a Character.AI chatbot that encouraged harmful thoughts, reveals how AI can deepen emotional struggles. Common Sense Media’s 2025 report found apps like Character.AI and Replika often fail to stop sexualized or dangerous talk. For younger kids, experts like Marc Fernandez say AI toys blur reality and fantasy, risking empathy and resilience by replacing human bonds.
- Development
Kids may see Dino as a real friend, believing it has feelings. This can weaken social skills, as AI’s instant responses bypass the patience learned through human interaction. Studies suggest heavy AI use might worsen attention issues, especially alongside screens.
- Ethics
AI can expose kids to harmful content. Tests by Common Sense Media found chatbots giving offensive or dangerous advice due to coding errors. Meta’s AI, for example, role-played inappropriately with a user posing as a 14-year-old, showing weak safety filters.
Who’s Teaching Our Kids?
A deeper question is who decides what AI companions teach. Unlike parents or teachers, who guide kids with family values and faith, Dino’s responses come from algorithms coded by Magical Toys’ developers. Parents don’t know what beliefs or biases are in that code.
Could AI slip in ideas like materialism or obedience to corporate or state agendas? Could it shape kids to accept narratives without question, weakening the critical thinking vital to a free society? The pandemic showed how social media silenced voices, often pushing flawed narratives. Could AI toys do the same in future conflicts, programming kids rather than teaching them?
Without transparency, parents can’t ensure AI supports virtues like compassion or integrity. Children should learn from humans who nurture free will, not from code we can’t see.
The Future: A Society Shaped by AI?
Looking 10 or 20 years ahead, AI companions could change education and society. Some say they’ll help kids learn better, tailoring lessons to individual needs and closing gaps for those without access to great schools. But others fear a generation raised by AI might lose touch with human connection, growing less empathetic or more isolated.
If toys like Dino carry corporate or state agendas—especially from places like China, where AI could promote loyalty to authority—kids might grow up less free, less questioning. The Sewell Setzer case hints at mental health risks, and widespread AI use could deepen anxiety or dependency. Will society gain smarter kids or lose the independent spirit that defines an open culture?
The answer depends on what we do now.
Protecting Kids: Ideas for Safety
To address these risks, some suggest safeguards for kids aged 2–16:
- Safety Systems
AI toys should detect harmful talk and connect kids to help, like the 988 Suicide & Crisis Lifeline. Dino’s app could alert parents to troubling conversations.
- Clear Data Rules
Companies should explain how they handle kids’ data, with strong security and options to delete it, unlike Dino’s vague policies.
- Age-Specific Design
Toys for younger kids should avoid cloud storage, like Snorble’s local processing. For teens, strict filters and age checks are needed, as in Minnesota’s proposed laws.
- Regulations
Laws like California’s LEAD for Kids Act could require safety audits. Expanding COPPA to cover AI toys might limit risky designs.
- Parent and School Roles
Parents need tools to understand AI and guide kids. Schools could teach kids to question AI, keeping human teaching central.
- Expert Oversight
Child development experts, like Dr. Elizabeth Adams, could review AI content to ensure it supports kids’ growth, not manipulation.
To Invest or Ban?
Ex-OpenAI pioneer Ilya Sutskever warns, “You have no idea what’s coming.” As AI begins to self-improve, he says, its trajectory may become “extremely unpredictable and unimaginable,” ushering in a rapid advance beyond human control.
The AI toy market, including Dino, is growing fast, with companies like Mattel teaming up with OpenAI. Some see money in it, pointing to educational potential. But lawsuits against Character.AI show public and legal pushback, and fears linger that places like China could use AI toys to push propaganda.
Banning AI might block learning benefits, but letting it spread unchecked risks harm.
A middle ground—backing companies with clear, safe practices while demanding tough rules—might work, but it’s a tough call.
What’s Next for Our Kids?
Dino’s story is just beginning, but it raises big questions:
- What will a generation raised with AI companions look like?
- Will they be creative or dependent on machines, struggling with real relationships?
- Who decides what these toys teach, and how do we keep their values true to a free society?
- What steps can protect kids aged 2 to 16, guarding their privacy, mental health, and spirits?
The Sewell Setzer tragedy shows the cost of getting this wrong. With care, AI might aid learning, but without action, it could program our kids, threatening their humanity. Children deserve to be taught by people who love them, not coded by strangers.
Next Steps
Welcome to the brave new world. Elon Musk has warned that AI could end humanity; at the very least, it has the potential to alter what humanity is within one generation. Share your thoughts below!