Salt Lake City, Utah – January 27, 2026 – Utah lawmakers are pressing ahead on technology regulation, this time targeting advanced artificial intelligence systems and public safety. Representative Doug Fiefia has introduced H.B. 286, the Artificial Intelligence Transparency Act, during the 2026 General Legislative Session. The legislation would require major AI companies operating in the state to publicly disclose how they assess and mitigate serious risks posed by their systems, particularly potential harms to children.
The proposed bill builds on Utah’s recent initiatives around social media regulation, illustrating the state’s intention to take a prominent role in the governance of powerful digital technologies, even as federal lawmakers struggle to reach consensus on national AI regulations. Representative Fiefia, a first-generation American and Republican representing Utah House District 48, brings a wealth of experience in technology and public service, having worked at firms such as Google and Domo. He has gained recognition as a leading voice in state AI policy, co-chairing a national task force on state AI governance while advocating for family, education, and responsible tech in his legislative agenda.
Under the provisions of H.B. 286, designated AI companies would be obligated to establish and publish public safety and child protection plans outlining how they evaluate and mitigate severe AI-related risks. The bill stipulates that these companies adhere to their disclosed plans in practice, report significant AI safety incidents, and refrain from retaliating against employees who raise internal concerns or disclose failures. Supporters of the bill contend that it addresses a regulatory gap, as many AI companies have voluntarily adopted internal safety frameworks without the requirement to document or disclose those efforts publicly.
The legislation is designed to promote transparency without creating a new regulatory agency or imposing prescriptive technical standards. Instead, it aims to compel companies to articulate publicly how they manage safety risks as AI systems evolve and proliferate. Utah has positioned itself as a national leader in tech regulation concerning child safety, particularly in relation to social media, and H.B. 286 seeks to extend this approach to the rapidly advancing field of AI.
Advocacy groups supporting the bill have expressed optimism that increased transparency can effectively change corporate behavior. “We applaud Rep. Fiefia for having the foresight to introduce this critical legislation,” stated Andrew Doris, senior policy analyst at the Secure AI Project, a San Francisco-based organization that contributed to shaping the bill. “It takes courage to take on big tech companies, but major risks from AI are already here and the time to act is now.”
Adam Billen, vice president of public policy at Encode AI, echoed this sentiment, emphasizing lessons learned from the regulation of social media. “We’ve learned from social media that we can’t trust tech companies to voluntarily choose to protect our families,” he said. “With families already suffering immense AI-driven tragedies, we must take steps today to protect our children.”
The announcement of H.B. 286 coincides with a statewide survey revealing significant public concern regarding AI oversight. The survey indicates that 90% of Utah voters support requiring AI developers to implement safety and security protocols aimed at protecting children, while 71% express worries that the state may not sufficiently regulate AI.
H.B. 286 delineates clear boundaries regarding its scope and enforcement. It applies solely to “large frontier developers,” defined as companies that have trained advanced AI models using at least 10²⁶ computational operations and reported at least $500 million in annual revenue in the prior year. This two-part threshold limits the bill’s coverage to a small group of leading AI model developers. The legislation gives companies flexibility to adapt to rapidly changing technology, but it also raises the question of whether public disclosure alone can keep pace with increasingly capable AI systems, especially when failures may become apparent only after harm has occurred.
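To see how narrow that coverage is, the two-part test can be expressed as a simple check. The sketch below is illustrative only, based on the thresholds described above; the names (`Developer`, `is_large_frontier_developer`) are assumptions for illustration, not statutory language.

```python
from dataclasses import dataclass

# Thresholds as described in H.B. 286's "large frontier developer" definition.
COMPUTE_THRESHOLD_OPS = 10**26        # minimum training compute, in operations
REVENUE_THRESHOLD_USD = 500_000_000   # minimum prior-year annual revenue

@dataclass
class Developer:
    training_compute_ops: float    # total operations used to train the model
    prior_year_revenue_usd: float  # annual revenue reported for the prior year

def is_large_frontier_developer(dev: Developer) -> bool:
    """A company is covered only if it clears both thresholds."""
    return (dev.training_compute_ops >= COMPUTE_THRESHOLD_OPS
            and dev.prior_year_revenue_usd >= REVENUE_THRESHOLD_USD)

# A frontier lab with 3e26 training ops and $2B in revenue is covered;
# a developer below either floor is not.
print(is_large_frontier_developer(Developer(3e26, 2_000_000_000)))  # True
print(is_large_frontier_developer(Developer(3e26, 100_000_000)))    # False
```

Because both conditions must hold, a well-funded company that has not trained a frontier-scale model, or a research lab with frontier-scale compute but modest revenue, would fall outside the bill's scope.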
Enforcement of the bill would fall under the jurisdiction of the Utah attorney general, who would be empowered to bring civil actions against companies in violation of the law. The proposed penalties include up to $1 million for a first violation and $3 million for subsequent violations. Reported AI safety incidents would be assessed by the state’s Office of AI Policy, avoiding the establishment of a new regulatory agency. However, whether these penalties and review mechanisms will effectively compel behavior change among some of the world’s largest AI companies remains a pivotal question as the bill progresses.
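The penalty schedule itself reduces to a simple lookup. The sketch below, with a hypothetical function name, illustrates the maximum civil exposure described above.

```python
def max_civil_penalty_usd(violation_number: int) -> int:
    """Maximum penalty for the nth violation as described in the bill:
    up to $1 million for the first, up to $3 million for each subsequent one."""
    if violation_number < 1:
        raise ValueError("violation_number must be >= 1")
    return 1_000_000 if violation_number == 1 else 3_000_000

# Maximum exposure across three violations: $1M + $3M + $3M = $7M.
print(sum(max_civil_penalty_usd(n) for n in range(1, 4)))  # 7000000
```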
H.B. 286 faces its first significant test today, January 27, before the House Economic Development and Workforce Services Standing Committee. The bill is one of several measures on the agenda, and the meeting will be livestreamed on the Utah Legislature’s website. Should the bill advance out of committee, it will move closer to a full House vote, setting up a broader debate over how far states should go in holding AI developers publicly accountable for safety risks.