A bill currently making its way through the Tennessee legislature aims to impose new safety requirements on large artificial intelligence (AI) companies, following adjustments made after consultations with the White House. The proposed Artificial Intelligence Public Safety and Child Protection Transparency Act primarily seeks to regulate major players in the AI field, mandating increased transparency regarding risks, particularly concerning public safety and child welfare.
The legislation would compel AI companies operating advanced systems to create and publish comprehensive safety plans addressing potential “catastrophic risks” and strategies for mitigating these threats. The bill was presented to the state Senate last Wednesday and has since been forwarded to the Commerce and Labor Committee for further evaluation.
One significant aspect of the bill requires AI companies providing tools to minors to establish publicly accessible child safety protection policies. These policies must explicitly outline measures to protect children from harmful interactions associated with AI, including risks of physical harm and emotional distress.
The timing of Tennessee’s legislation coincides with a broader debate over state-level AI regulation amid the second Trump administration. Tensions have emerged between the White House, which advocates against a fragmented regulatory landscape, and state leaders alongside advocacy groups that argue for the necessity of local laws in the absence of federal action. The Trump administration has proposed moratoriums on state AI laws, citing concerns that such regulations could hinder business innovation. In contrast, state officials and advocates have described these moratoriums as potentially harmful to citizen safety and well-being.
Under the Trump administration’s AI Action Plan and other recent directives, some space has been carved out for state regulations aimed at protecting children. During a recent presentation to the Tennessee AI Advisory Council, Andrew Doris, a senior policy analyst at the nonprofit Secure AI Project, noted that the bill aligns with these carve-outs by focusing specifically on child safety and transparency, rather than regulating the methodologies used to develop AI models.
The amendments made to the bill have considerably narrowed its scope. It now specifically targets large, high-impact AI companies, defined as “frontier developers” with revenues exceeding $500 million. Additionally, a “covered chatbot” is defined as one that generates at least $25 million in annual revenue, has at least 1 million monthly users, and could plausibly be used by minors. Certain chatbots, particularly those used in video games or customer service, have been explicitly excluded from the bill’s requirements.
Another major revision sharpens the definition of “catastrophic risk,” focusing it on extreme, high-consequence harms, and adjusts the transparency requirements so that companies must publish public summaries of their safety practices rather than comprehensive internal disclosures. In addition, protections for minors interacting with AI systems have been reinforced, and exemptions have been granted for systems used in academic research.
Doris emphasized that the bill also serves as a potential bridge to future federal action. It includes provisions that would allow the Tennessee Department of Safety, along with the attorney general and other relevant officials, to recognize compliance with a comparable federal standard for safety incident reporting as sufficient for fulfilling Tennessee’s requirements if Congress passes such standards.
This legislative development in Tennessee reflects a growing concern regarding the implications of AI technologies on vulnerable populations, particularly children. As discussions on AI regulation continue to evolve at both state and federal levels, Tennessee’s efforts could serve as a crucial case study in balancing innovation with safety. The path ahead remains uncertain, yet the outcomes of this bill could influence similar initiatives across the nation.