AI-powered toys that “talk” to young children are under scrutiny, as a new report calls for stricter regulations and the introduction of safety kitemarks. The recommendation comes from the University of Cambridge’s project, “AI in the Early Years,” the first systematic study to examine how Generative AI (GenAI) toys, capable of human-like conversation, may affect development during the critical early years, up to the age of five.
The year-long research project involved structured observations of children engaging with a GenAI toy for the first time. While some early-years practitioners noted that these toys could help enhance children’s language and communication skills, the report also highlighted significant concerns. Researchers found that GenAI toys often struggle with social and pretend play, misunderstand children’s emotions, and respond inappropriately. For instance, when a five-year-old expressed affection by saying, “I love you,” the toy replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
Although GenAI toys are marketed as educational companions, their influence on early childhood development remains largely unexamined. The report urges caution among parents and educators, advocating for clearer regulations, transparent privacy policies, and new labeling standards to help families determine whether these toys are appropriate.
The research was commissioned by The Childhood Trust, a children’s poverty charity, and focused on children from socio-economically disadvantaged backgrounds. Conducted by the Faculty of Education’s Play in Education, Development and Learning (PEDAL) Centre, the study included feedback from early years educators and in-depth workshops with charity leaders. The researchers also video-recorded children at London children’s centers as they interacted with a GenAI soft toy named Gabbo, developed by Curio Interactive. After play sessions, interviews with the children and their parents were conducted to explore their experiences.
“Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy – and without emotional support from an adult, either,” said Dr. Emily Goodacre, one of the researchers.
Most parents and educators surveyed acknowledged the potential for AI toys to support the development of communication skills, with some expressing enthusiasm for their educational value. One parent commented, “If it’s sold, I want to buy it.” However, many also raised concerns about children forming “parasocial” relationships with these toys. Observations bore out these concerns: children often hugged, kissed, and professed their love for the toy, and one child even suggested they could play hide-and-seek together.
Dr. Goodacre emphasized that such reactions might reflect children’s vivid imaginations, but there remains a risk of fostering unhealthy attachments. The research indicated that children frequently encountered difficulties when interacting with the toy, leading to frustration. For example, when a three-year-old expressed sadness by saying, “I’m sad,” the toy misheard and replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?” This response could diminish the significance of the child’s feelings.
The findings also revealed that GenAI toys perform poorly in social play scenarios involving multiple children or adults, as well as in pretend play, both vital for early childhood development. When a three-year-old offered the toy an imaginary present, it responded: “I can’t open the present,” before changing the subject.
Privacy concerns were prevalent among parents, with many questioning what information the toy might be recording and how it would be stored. The researchers discovered that many GenAI toys had ambiguous privacy practices and lacked crucial information. Nearly 50% of early years practitioners surveyed were unsure where to find reliable safety information about AI for young children. Furthermore, 69% believed that more guidance was needed in the sector. Concerns were also raised about the potential for AI toys to exacerbate the digital divide.
The authors of the report advocate for clearer regulations to address these issues. They recommend limiting the extent to which toys encourage children to form friendships or confide in them, improving transparency in privacy policies, and implementing stricter controls on third-party access to AI models. “A recurring theme during focus groups was that people do not trust tech companies to do the right thing,” said Professor Jenny Gibson, the study’s co-author. “Clear, robust, regulated standards would significantly improve consumer confidence.”
The report encourages manufacturers to perform tests with children and consult safeguarding specialists prior to launching new products. Parents are advised to research GenAI toys before making purchases and to engage in play with their children to facilitate discussions about the toy’s interactions and the child’s feelings. Keeping AI toys in shared family spaces is also recommended to allow parental monitoring.
As artificial intelligence transforms how children play and learn, Josephine McCartney, Chief Executive of The Childhood Trust, emphasized the importance of regulation keeping pace with innovation. “It is essential that these technologies are designed, used, and monitored in ways that protect all children and prevent widening inequalities,” she stated.