Mattel is teaming up with OpenAI to create toys powered by AI. While this could bring lots of enjoyment, it also sounds like the beginning of countless tales about things going awry.
For the record, I don’t believe AI will bring about the end of the world. I’ve utilized ChatGPT in countless ways, including as a supportive tool for parenting. It has assisted me in coming up with bedtime stories and creating coloring books, among other tasks. However, that’s my personal usage and not directly involving children.
Of course, the official statement is very encouraging. Mattel claims it will introduce the “magic of AI” into playtime, ensuring experiences for kids that are safe, creative, and age-appropriate. OpenAI is excited to support these toys with ChatGPT, and both companies are clearly looking to frame this as a leap forward for childhood play and development.
Yet it’s hard not to imagine a ChatGPT-powered Barbie spiraling into bizarre conspiracy theories mid-chat with an eight-year-old. Or a GI Joe pivoting from positive messages like “knowing is half the battle” to pitching cryptocurrency mining because some six-year-old overheard “blockchain” and thought it sounded like a cool weapon.
Reflecting on the top image, I couldn’t help but think of the movie Small Soldiers, a 1998 cult classic where a toy company executive saves money by inserting military-grade AI chips into action figures, resulting in suburban guerrilla warfare. While that was a satire, it does raise safety concerns about introducing generative AI into toys that kids play with frequently.
I understand the attraction of AI in toys. Barbie could transform from a doll into a witty conversational partner who explains space missions or role-plays in different scenarios. Similarly, a Hot Wheels car might comment on the track created for it. I can even visualize AI being incorporated into Uno as an interactive tool that teaches younger kids strategies and sportsmanship.
However, I believe generative AI like ChatGPT should be off-limits for kids. Even with safety measures, it risks one of two failure modes: locked down so tightly that it becomes a set of canned responses without the adaptability that makes AI interesting, or left open enough that the bizarre, confusing, and occasionally inappropriate comments an adult would shrug off become ones a child could internalize.
Toying with AI
Mattel has extensive experience in this field and generally knows what it’s doing with its products. It’s definitely not in their best interest to have toys malfunctioning. The company stated it will prioritize safety and privacy in every AI interaction, promising to create suitable experiences. However, “appropriate” can be a slippery term in the context of AI, particularly for language models trained on vast amounts of internet data.
ChatGPT isn’t specifically designed for children’s toys. Despite efforts to implement guidelines and filters, it remains a learning model that imitates. There’s also a deeper question about the type of relationship we want children to maintain with these toys.
There’s a significant distinction between playing with a doll and imagining conversations with it and having a relationship with a toy that can respond on its own. I don’t expect a doll to turn into a horror character, but mixing playmate and programmed responses can yield unpredictable results.
I use ChatGPT with my son the same way I would scissors or glue – as a supervised tool for entertainment. AI embedded in toys is far harder to monitor. The doll replies. The car reacts. The toy engages, and kids may not recognize when something is off simply because they lack the experience to notice.
If Barbie’s AI malfunctions, if GI Joe unexpectedly makes grim military references, or if a Hot Wheels car says something strange, a parent might not catch it until after it’s been heard and absorbed. If we’re not ready to let these toys operate unsupervised, they may not be prepared for the market.
This isn’t about completely excluding AI from childhood. It’s about recognizing the difference between what’s beneficial and what’s too risky. I want AI in toys to be very tightly regulated, similar to how toddler-targeted TV shows are crafted to be suitable. Those shows mostly stick to a script, but AI has the capability to create its own narrative.
My concerns may seem overly critical, and history offers plenty of tech toy controversies. Furbies were unsettling. Talking Elmo had glitches. Talking Barbies made sexist comments about math being tough. Most of those issues were fixable, though the Furbies might remain a mystery. I do see potential for AI in toys, but I’ll stay cautious until I see how well Mattel and OpenAI walk the fine line between AI too constrained to be interesting and AI with so much freedom it becomes an unsuitable virtual companion for children.