Several U.S. states are advancing legislation aimed at regulating the use of artificial intelligence (AI) across various sectors, reflecting growing concerns over safety, privacy, and ethical implications. In Georgia, HB 171 seeks to prohibit the distribution of computer-generated child sexual abuse material (CSAM). As of January 12, the bill had been recommitted to the Senate, indicating that deliberations over its scope and implications are ongoing.
In Hawaii, a suite of AI-related bills is currently under consideration. Notably, HB 1782 would establish safeguards for interactions between minors and AI companion systems, providing for oversight and penalties for misuse. HB 1787 seeks to restrict the use of AI in health insurance decision-making, while SB 2585 would direct the Department of Health to create an AI-enhanced online clearinghouse for evidence-based treatment programs. Another significant proposal, SB 2076, focuses on protecting individuals’ rights against AI deepfakes. Together, these measures highlight Hawaii’s proactive approach to addressing the multifaceted challenges posed by AI.
Idaho has introduced SB 1227, which would regulate the use of generative AI in public education settings. Meanwhile, in Illinois, more than a dozen AI-oriented bills have emerged, including HB 4705 and SB 3261, which focus on public safety and child protection. Other notable proposals include the AI Privacy Act and the AI Safety Measures Act, both designed to bolster accountability and transparency in AI applications.
In Indiana, two bills have garnered attention: HB 1201, which would prevent the use of AI systems to impersonate licensed mental health professionals, and HB 1182, which aims to define and penalize digital sexual image abuse. These initiatives reflect a focused effort to safeguard individuals’ rights and mental well-being in the face of advancing technologies.
Iowa’s SSB 3013 proposes that the outputs generated by AI systems be owned by the individuals who prompted them, raising important questions about intellectual property in the digital age. Meanwhile, Kansas recently passed HB 2183, which modifies existing child exploitation laws to encompass AI-generated or modified images, signaling a legislative response to the evolving landscape of digital content.
In Kentucky, HB 559 has been introduced to establish consumer rights concerning data generated by social media and AI systems. This is complemented by HB 227, which aims to protect minors from the potentially harmful effects of AI companion chatbots. Kentucky’s legislative efforts underscore a commitment to consumer protection amid technological advancements.
Maine is also taking steps, with two newly introduced bills aimed at regulating access to AI chatbots and the provision of mental health services through AI. LD 2162 seeks to mitigate minors’ exposure to AI chatbots with human-like characteristics, while LD 2082 addresses the use of AI in mental health care, emphasizing the need for regulated interactions.
Maryland’s legislative actions include four bills addressing the risks posed by AI deepfakes as well as related surveillance practices. Notably, HB 184 and SB 8 aim to protect individuals from the harms of AI-generated deceptive content. These measures reflect a broader movement to ensure that emerging technologies do not infringe on individual rights or public safety.
Massachusetts is also advancing several initiatives, including S 243 and S 264, which would require that consumers be notified when they interact with AI systems that simulate human conversation. Additionally, H 76 targets the dissemination of AI-generated deceptive election-related communications, underscoring the importance of transparency in political contexts.
In Michigan, lawmakers are considering two significant bills. HB 4667, categorized as the AI crime bill, has been carried over to the next legislative session. Meanwhile, SB 760 aims to enhance safety measures for children using chatbots, detailing strict regulations to prevent harmful interactions and ensuring appropriate supervision when minors engage with AI products.
As states across the U.S. grapple with the rapid advancement of AI technologies, the legislative landscape continues to evolve. The proposed bills not only seek to address immediate concerns surrounding safety and privacy but also lay the groundwork for a regulatory framework that could shape the future of AI in various sectors. With ongoing developments, stakeholders are closely monitoring these initiatives, which could define the boundaries of AI’s role in society for years to come.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health