Connecticut lawmakers are set to address rising concerns about children’s privacy and the use of artificial intelligence (AI) technologies, particularly chatbots. On February 5, 2026, Attorney General William Tong and Senator James Maroney announced plans to introduce new regulations aimed at safeguarding minors from the potential harms associated with these technologies. The initiative comes in response to increasing scrutiny of how young people interact with AI tools.
The announcement coincided with the release of the 2025 Connecticut Data Privacy Act (CTDPA) Enforcement Report, which highlights recent enforcement actions and sets priorities for the Attorney General’s Office. Notably, the report dedicates a section to chatbots and notes that the AG’s Office is currently investigating a technology company’s chatbot platform for alleged risks it poses to minors due to certain design features. The report emphasizes the need for additional regulations specifically targeting the unique vulnerabilities of children and teenagers.
Amid growing alarm about the impact of AI technologies on youth, Attorney General Tong has joined a bipartisan coalition of 42 other Attorneys General urging major AI software companies to strengthen quality controls and implement safeguards for chatbot products. The coalition warns that the rapid pace of innovation in this sector poses significant risks to children’s health and well-being. Existing laws, including the CTDPA and the Connecticut Unfair Trade Practices Act, already apply to chatbot providers; however, officials contend that these measures are insufficient given the new challenges posed by AI.
The Enforcement Report signals that Connecticut lawmakers will likely pursue further regulatory frameworks for AI-driven chatbots, especially those aimed at minors. Companies that develop or deploy such technologies may face increased scrutiny and new compliance requirements as the legislative process unfolds. The focus on regulatory measures aims to foster a safer digital environment for younger users.
As the conversation surrounding AI and children’s privacy continues to evolve, Connecticut’s proactive stance may set a precedent for other states grappling with similar issues. The proposed regulations signify a growing recognition of the need to balance technological advancements with the protection of vulnerable populations, particularly minors. With lawmakers preparing to draft specific legislation, the outcome could influence future AI development and implementation practices not only in Connecticut but also across the nation.
See also
AI Shifts Political Power from Governments to Tech Giants, Raising Global Concerns
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case