A recent survey has revealed a significant disconnect between public expectations and expert assessments of the safety of artificial intelligence (AI) in Australia. While 94 percent of Australians believe that AI systems should meet or exceed the safety standards of commercial aviation, experts assess the current level of risk from AI as alarmingly high.
Commercial flights carry a remarkably low risk of death, about 1 in 30 million per flight, equivalent to roughly 150 fatalities annually. In stark contrast, expert assessments place the risk from AI at a minimum of 4,000 times higher, with some experts arguing it could be as much as 30,000 times greater. The survey, which polled 933 Australians, shows just how far public expectations sit from those expert assessments.
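To put those multipliers in perspective, here is a rough back-of-envelope calculation using only the figures quoted above. It is an illustration, not a result from the survey itself, and it assumes the expert multipliers apply to the same baseline as the 1-in-30-million aviation figure:

\[
\frac{1}{30{,}000{,}000} \times 4{,}000 \approx \frac{1}{7{,}500},
\qquad
\frac{1}{30{,}000{,}000} \times 30{,}000 = \frac{1}{1{,}000}
\]

In other words, even the most conservative expert multiplier implies a risk on the order of 1 in 7,500 rather than 1 in 30 million, which is why the aviation-level expectation held by 94 percent of Australians is nowhere near being met.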
This isn’t just an academic concern. The gap between what Australians expect and what experts assess creates a trust crisis that threatens the technology adoption our government considers essential for economic competitiveness.
The Productivity Commission has argued that AI-specific regulation should be a “last resort,” advocating for the use of existing frameworks like privacy laws and consumer protections. However, this approach faces three critical challenges. First, public trust in technology companies is notably low, with only 23 percent of Australians trusting these corporations to ensure AI safety. When respondents were asked why they are reluctant to use AI, privacy concerns topped the list at 57 percent, followed by distrust of AI developers at 32 percent. This skepticism poses a significant barrier to the government’s aim of fostering AI adoption without stringent regulation.
Second, Australians overwhelmingly fear under-regulation rather than over-regulation, with 74 percent expressing concern that the government will not regulate AI sufficiently. Only 26 percent worry about excessive regulation. Furthermore, 83 percent believe that current regulations lag behind technological advancements. When respondents were given a choice between prioritizing risk management or driving innovation, 72 percent chose risk management, reflecting a strong public desire for a safer approach.
Lastly, the safety gap in AI is vast. Expert forecasters put the probability of a catastrophe caused by AI by the year 2100 at between 2 percent and 12 percent, while some researchers and industry leaders place the range at 2 percent to 25 percent. Even the lowest of these estimates far exceeds the level of risk the public is willing to tolerate.
Interestingly, many Australians appear willing to accept delays in AI development in exchange for greater safety. The survey found that 80 percent of respondents would support a 10-year halt in advanced AI development if it reduced the risk of catastrophe from 5 percent to 0.5 percent. Even a 50-year delay attracted majority support, and half of the participants were unwilling to accept even a 1 percent catastrophic risk in return for solutions to pressing global issues like climate change.
So the barrier to AI adoption is not over-regulation. It is a lack of trust, rooted in the wide gap between public safety expectations and current reality.
In contrast to Australia’s cautious approach, other nations are implementing more robust regulatory frameworks. The European Union enacted technology-specific regulation in 2024, while the UK, the USA, and South Korea have established AI Safety Institutes. California has also begun regulating advanced AI models, recognizing that autonomous systems come with unique risks that existing consumer laws fail to address.
Australia is poised to follow suit, having announced plans for an AI Safety Institute that aligns with international efforts. Proposed measures include mandatory safety testing for frontier systems, independent audits, incident reporting, and whistleblower protections. The survey indicates that 90 percent of Australians believe these safeguards would enhance their trust in AI technologies.
Aviation, nuclear energy, and pharmaceuticals all have regulations tailored to their unique risks, and Australians expect AI to be held to similar standards. The path to wider adoption of AI lies not in educational campaigns about its benefits, but in establishing a framework that fosters genuine trust and safety among users. Only by addressing public concerns can AI transition from a perceived threat to a welcomed tool in everyday life.
For further insights, the full report of the Survey Assessing Risks from Artificial Intelligence (SARA) 2025 is available and provides a comprehensive overview of public sentiment on this pressing issue.