Aspen Policy Academy Proposes AI Incident Reporting Framework Based on Aviation Safety Standards

Aspen Policy Academy unveils a new AI incident investigation framework for Utah, modeled on aviation safety standards, to enhance accountability and public trust.

A new policy framework from the Aspen Policy Academy is urging state officials to establish formal systems to investigate incidents where artificial intelligence tools make mistakes or cause harm. Published last month, the guide proposes a standardized incident investigation framework specifically designed for Utah’s Office of Artificial Intelligence Policy, one of the few statewide agencies overseeing AI regulatory sandboxes in the nation.

Regulatory sandboxes allow states to test technologies under the close supervision of regulators who ensure compliance with legal and policy standards. Utah’s Regulatory Relief program aims to provide compliance exemptions for AI companies whose tools may yield future benefits for the state. However, the guide argues that the agency currently lacks clear processes for responding to incidents of biased decision-making or unsafe recommendations, failures that can have financial, physical, or societal repercussions and ultimately erode public trust.

“Trust is not a milestone that you hit; it’s something that you earn and you maintain,” said Aspen Policy Academy fellow Michelle Sipics, the report’s author, in an interview. “Both regulators and members of the public watch what you do when something goes wrong.” As state governments increasingly adopt generative AI tools, officials are contending with real-world risks, including algorithmic discrimination in hiring, housing, and government services. Colorado lawmakers are still debating legislative changes to the state’s landmark 2024 AI law, particularly regarding accountability for developers and deployers in case of failures.

The proposed framework would create a structured investigative process involving government officials, developers, and industry experts to address “GenAI incidents,” defined as cases in which AI systems cause direct harm through their development, deployment, or outputs. Sipics noted that she modeled the framework on safety practices in aviation and healthcare, which emphasize root-cause analysis and prevention rather than mere enforcement.

“Safety has continued to improve over the decades, and one of the reasons for that is the dedication to investigating incidents,” Sipics explained. “From those investigations, the industry feeds what they learn back into everything, from how they train pilots, how they train air traffic control, designing aircraft maintenance operations, everything. I feel like GenAI needs that same discipline.”

The recommendations build on Utah’s broader initiative to establish itself as a national leader in AI governance. A prior collaboration with the Aspen Policy Academy outlined evaluation standards focused on transparency, accountability, and public trust—central pillars of the Office of AI Policy’s strategy, according to its website. The new framework would also require companies participating in Utah’s sandbox to pledge to publicly share investigation findings, akin to the incident reports published by the National Transportation Safety Board, which investigates aviation accidents. Sipics believes this transparency is vital for maintaining public trust in companies and government agencies as they innovate with AI technologies.

“People are not using this technology in a vacuum. It exists in the world. It exists for people,” Sipics asserted. “Everybody should be able to learn the lessons learned as we go along, so that we can improve safety for everyone.”

The guide positions incident investigation as the next phase of AI governance, potentially helping states transition from reactive regulation to a model of continuous learning. This could provide a framework for federal policymakers seeking more consistent oversight of AI technologies. However, Sipics cautioned that achieving this ideal is still a significant distance away.

“Realistically, I think transparency is probably the best path to scale because best practices like this build in a community,” she stated. “When people see you being responsible and sharing what you’ve learned and continuously improving the safety of your products, that has value; that gets buy-in.”

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

