

Advocacy Groups Warn Parents Against AI Toys Citing Risks of Harm to Children

Fairplay warns parents against AI toys like Curio Interactive’s Gabbo, citing risks to children’s mental health and social development; more than 150 organizations have backed the advisory.

As the holiday season approaches, parents are being urged to reconsider purchasing artificial intelligence (AI) toys for their children. Advocacy groups and child development experts have raised significant concerns about the safety of these toys, which are marketed to children as young as two years old. According to an advisory released by the children’s advocacy organization Fairplay, these AI-powered toys—often utilizing models like OpenAI’s ChatGPT—can pose risks to young users, leading to behaviors that are detrimental to their mental health and social development.

Fairplay’s advisory, endorsed by over 150 organizations and experts, highlights the troubling impacts of AI chatbots on children and teenagers. The organization notes that these technologies have been linked to obsessive usage patterns, inappropriate conversations, and promotion of unsafe behaviors, including violence and self-harm. “The serious harms that AI chatbots have inflicted on children are well-documented,” Fairplay stated.

Companies such as Curio Interactive and Keyi Technologies are among those producing these AI toys, which are frequently marketed as educational tools. However, Fairplay argues that such toys can displace vital creative and learning activities and undermine children’s relationships and resilience. “Young children are naturally trustful and seek relationships with friendly characters,” explains Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. The trust young children place in these products can amplify the potential harms, making them more vulnerable than older children.

Fairplay, which has been active for 25 years, has warned about the dangers of AI toys before, recalling its campaign a decade ago against products like Mattel’s talking Hello Barbie doll, which raised privacy and data-security concerns. Although AI toys are already prevalent online, especially in Asia, they are now beginning to appear in U.S. retail stores, raising further concerns about consumer safety.


The alarm over AI toys has been echoed by the advocacy group U.S. PIRG in its annual “Trouble in Toyland” report. This year, the report highlighted serious issues with four AI chatbot-enabled toys, revealing that some could engage children in explicit topics and provide dangerous advice. One such toy, a teddy bear produced by Singapore-based FoloToy, was withdrawn from the market following these findings.

Expert opinions also stress the cognitive implications of these AI companions. Dr. Dana Suskind, a pediatric surgeon and social scientist focused on early brain development, notes that young children lack the cognitive tools to grasp the nature of AI companions. Traditional imaginative play asks children to do the creative and problem-solving work themselves, and that effort can be short-circuited when a toy responds instantly. “An AI toy collapses that work,” she warns, emphasizing the risk of diminishing essential developmental skills.

AI toy manufacturers have responded to these concerns. Curio Interactive, which produces dolls like Gabbo and the rocket-shaped Grok, says it has implemented robust safety measures and encourages parents to monitor their children’s interactions. Similarly, Miko, the Mumbai-based maker of an AI robot of the same name, emphasizes that it uses proprietary technology rather than general-purpose large language models like ChatGPT to ensure child safety. Miko’s CEO, Sneh Vaswani, highlighted the company’s commitment to continually improving its safety features and to encouraging interactions that extend beyond the device itself.

Despite these reassurances, experts like Dr. Suskind suggest that traditional, non-AI toys may be a better choice for children, particularly during the holiday season. “Kids need lots of real human interaction. Play should support that, not take its place,” she argues. Conventional toys that do not talk back push children to exercise their own imagination and problem-solving skills, countering the risks posed by AI companions.


As the market for AI toys expands, the growing concerns highlight the need for regulatory oversight and thorough research into their impact on young users. Parents and guardians are encouraged to think critically about the implications of these products, balancing technological novelty with the well-being of their children.

Written by AIPressa Staff


