As states across the U.S. grapple with the rapid advancement of artificial intelligence (AI), several legislatures are advancing bills aimed at regulating its influence on society, particularly concerning minors and privacy. From Georgia to Hawaii, lawmakers are assessing the implications of generative AI technologies and chatbot interactions, prompting a diverse array of legislative measures designed to provide safeguards and accountability.
In Georgia, several significant bills are under consideration. Notably, **SB 488** seeks to classify generative AI systems as personal property for product liability purposes, particularly in cases involving injuries to minors. The bill, which received its second reading in the Senate on March 4, aims to establish a clearer liability framework for manufacturers and product sellers. Another bill, **SB 540**, addresses chatbot disclosures and child safety, mandating that AI systems notify users of their artificial nature and implement protocols for handling suicidal ideation among minors; it received its second reading in the Senate on February 26. Additionally, **HB 171** would prohibit the distribution of computer-generated child sexual abuse material (CSAM) and has been recommitted to the Senate as of January 12.
Meanwhile, in Hawaii, several AI-focused bills are progressing through legislative channels. **HB 1782** aims to establish protective measures for minors interacting with AI companion systems, having received approval from the Consumer Protection & Commerce Committee on February 19; the bill is currently advancing through judiciary committees. **SB 3001**, which requires AI operators to disclose certain information to users and provide safeguards against harmful content, was cleared by the Judiciary and Hawaiian Affairs Committee on March 4. Another notable measure, **HB 2137**, addresses AI deepfakes, prohibiting their harmful applications while requiring disclosure of synthetic performers in advertising; it passed out of committee on March 3.
In Idaho, **SB 1227**, which sets provisions for the use of generative AI in public education, has gained traction: it was approved by the Senate on February 2 and is now before the House Education Committee. Similarly, **HB 727** updates the definition of video voyeurism to include synthetic media; it passed the House unanimously on March 3.
Illinois is mounting an expansive legislative response to AI technology, with over a dozen bills currently active. Among them, **HB 5044** introduces the Chatbot Provider Liability Act, which would assign strict liability to chatbot providers for any harm caused to users; the bill is awaiting review by the Judiciary Committee. Another significant initiative, the companion bills **HB 4705** and **SB 3261**, would create the AI Public Safety and Child Protection Transparency Act; the measure was sent to the House Judiciary Committee on March 4.
In Indiana, **HB 1201** seeks to prohibit AI systems from impersonating licensed mental health professionals, while **HB 1182** aims to define and criminalize digital sexual image abuse. Both proposals are currently under review by their respective committees. Iowa’s **SSB 3013** proposes that outputs generated by AI systems be owned by the individual who prompts the system, addressing ownership rights in the evolving digital landscape.
Kansas is also advancing relevant legislation, with **HB 2518** amending breach of privacy laws to encompass AI-generated content and **HB 2594** modifying blackmail laws to include threats involving AI imagery. Both measures passed the House unanimously and are scheduled for hearings in the Senate on March 11.
States like Kentucky and Louisiana are joining the fray with bills addressing consumer rights and health care AI use, a sign of how urgently legislatures are responding to the complexities these technologies introduce. **HB 559** in Kentucky seeks to establish consumer rights concerning AI and social media data, while **HB 197** in Louisiana focuses on the implications of AI for health care providers.
With the introduction of these varied measures, the challenge remains for lawmakers to navigate the fine line between fostering innovation in AI and ensuring public safety and ethical standards. As such bills progress, they underscore a growing recognition of the need for comprehensive regulatory frameworks addressing the myriad implications of AI technology in everyday life.