
Cambridge Study Urges Stricter Regulation for AI Toys Amid Child Safety Concerns

Cambridge’s study reveals GenAI toys, like Curio Interactive’s Gabbo, struggle with emotional responses, prompting calls for urgent regulations and safety standards.

AI-powered toys that “talk” to young children are under scrutiny, as a new report calls for stricter regulations and the introduction of safety kitemarks. This recommendation comes from the University of Cambridge’s project, “AI in the Early Years,” which is the first systematic study examining how Generative AI (GenAI) toys, capable of human-like conversation, may impact development during critical early years up to age five.

The year-long research project involved structured observations of children engaging with a GenAI toy for the first time. While some early-years practitioners noted that these toys could help enhance children’s language and communication skills, the report also highlighted significant concerns. Researchers found that GenAI toys often struggle with social and pretend play, misunderstand children’s emotions, and respond inappropriately. For instance, when a five-year-old expressed affection by saying, “I love you,” the toy replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”

Although GenAI toys are marketed as educational companions, their influence on early childhood development remains largely unexamined. The report urges caution among parents and educators, advocating for clearer regulations, transparent privacy policies, and new labeling standards to help families determine the appropriateness of these toys.

The research was commissioned by The Childhood Trust, a children’s poverty charity, and focused on children from socio-economically disadvantaged backgrounds. Conducted by the Faculty of Education’s Play in Education, Development and Learning (PEDAL) Centre, the study included feedback from early years educators and in-depth workshops with charity leaders. The researchers also video-recorded children at London children’s centers as they interacted with a GenAI soft toy named Gabbo, developed by Curio Interactive. After play sessions, interviews with the children and their parents were conducted to explore their experiences.

“Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy – and without emotional support from an adult, either.”

– Dr. Emily Goodacre, researcher

Most parents and educators surveyed acknowledged the potential for AI toys to support the development of communication skills, with some expressing enthusiasm for their educational value. One parent commented, “If it’s sold, I want to buy it.” However, many also raised concerns about children forming “parasocial” relationships with these toys. Observations bore out these concerns: children were often seen hugging, kissing, and professing love for the toy, and one child even suggested they could play hide-and-seek together.

Dr. Goodacre emphasized that such reactions might reflect children’s vivid imaginations, but there remains a risk of fostering unhealthy attachments. The research indicated that children frequently encountered difficulties when interacting with the toy, leading to frustration. For example, when a three-year-old expressed sadness by saying, “I’m sad,” the toy misheard and replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?” This response could diminish the significance of the child’s feelings.

The findings also revealed that GenAI toys perform poorly in social play scenarios involving multiple children or adults, as well as in pretend play, both vital for early childhood development. When a three-year-old offered the toy an imaginary present, it responded: “I can’t open the present,” before changing the subject.

Privacy concerns were prevalent among parents, with many questioning what information the toy might be recording and how it would be stored. The researchers discovered that many GenAI toys had ambiguous privacy practices and lacked crucial information. Nearly 50% of early years practitioners surveyed were unsure where to find reliable safety information about AI for young children. Furthermore, 69% believed that more guidance was needed in the sector. Concerns were also raised about the potential for AI toys to exacerbate the digital divide.

The authors of the report advocate for clearer regulations to address these issues. They recommend limiting the extent to which toys encourage children to form friendships or confide in them, improving transparency in privacy policies, and implementing stricter controls on third-party access to AI models. “A recurring theme during focus groups was that people do not trust tech companies to do the right thing,” said Professor Jenny Gibson, the study’s co-author. “Clear, robust, regulated standards would significantly improve consumer confidence.”

The report encourages manufacturers to perform tests with children and consult safeguarding specialists prior to launching new products. Parents are advised to research GenAI toys before making purchases and to engage in play with their children to facilitate discussions about the toy’s interactions and the child’s feelings. Keeping AI toys in shared family spaces is also recommended to allow parental monitoring.

As artificial intelligence transforms how children play and learn, Josephine McCartney, Chief Executive of The Childhood Trust, emphasized the importance of regulation keeping pace with innovation. “It is essential that these technologies are designed, used, and monitored in ways that protect all children and prevent widening inequalities,” she stated.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.