A new policy framework from the Aspen Policy Academy is urging state officials to establish formal systems to investigate incidents where artificial intelligence tools make mistakes or cause harm. Published last month, the guide proposes a standardized incident investigation framework designed specifically for Utah’s Office of Artificial Intelligence Policy, one of the few state agencies in the nation that oversees AI regulatory sandboxes.
Regulatory sandboxes allow states to test technologies under the close supervision of regulators who ensure compliance with legal and policy standards. Utah’s Regulatory Relief program aims to provide compliance exemptions for AI companies whose tools may yield future benefits for the state. However, the guide argues that the agency currently lacks clear processes for responding to incidents of biased decision-making or unsafe recommendations, failures that can have financial, physical, or societal repercussions and ultimately erode public trust.
“Trust is not a milestone that you hit; it’s something that you earn and you maintain,” said Aspen Policy Academy fellow Michelle Sipics, the report’s author, in an interview. “Both regulators and members of the public watch what you do when something goes wrong.” As state governments increasingly adopt generative AI tools, officials are contending with real-world risks, including algorithmic discrimination in hiring, housing, and government services. Colorado lawmakers are still debating legislative changes to the state’s landmark 2024 AI law, particularly regarding accountability for developers and deployers in case of failures.
The proposed framework would create a structured investigative process involving government officials, developers, and industry experts to address “GenAI incidents,” defined as cases in which AI systems cause direct harm through their development, deployment, or outputs. Sipics noted that she modeled the framework after safety practices in aviation and healthcare, which emphasize root-cause analysis and prevention rather than mere enforcement.
“Safety has continued to improve over the decades, and one of the reasons for that is the dedication to investigating incidents,” Sipics explained. “From those investigations, the industry feeds what they learn back into everything, from how they train pilots, how they train air traffic control, designing aircraft maintenance operations, everything. I feel like GenAI needs that same discipline.”
The recommendations build on Utah’s broader initiative to establish itself as a national leader in AI governance. A prior collaboration by the Aspen Policy Academy outlined evaluation standards focused on transparency, accountability, and public trust, which the Office of AI Policy’s website identifies as central pillars of its strategy. The new framework would also require companies participating in Utah’s sandbox to pledge to publicly share investigation findings, akin to the incident reports published by the National Transportation Safety Board, which investigates aviation accidents. Sipics believes this transparency is vital for maintaining public trust in companies and government agencies as they innovate with AI technologies.
“People are not using this technology in a vacuum. It exists in the world. It exists for people,” Sipics asserted. “Everybody should be able to learn the lessons learned as we go along, so that we can improve safety for everyone.”
The guide positions incident investigation as the next phase of AI governance, one that could help states shift from reactive regulation to a model of continuous learning. It could also offer a template for federal policymakers seeking more consistent oversight of AI technologies. However, Sipics cautioned that this ideal remains a long way off.
“Realistically, I think transparency is probably the best path to scale because best practices like this build in a community,” she stated. “When people see you being responsible and sharing what you’ve learned and continuously improving the safety of your products, that has value; that gets buy-in.”