

Australians Demand Airline-Level AI Safety Amid 4,000x Risk Gap, Survey Reveals

A survey finds that 94% of Australians want AI systems held to safety standards matching commercial aviation, even as experts assess current AI risks at 4,000 to 30,000 times higher.

A recent survey has revealed a significant disconnect between public expectations and expert assessments regarding the safety of artificial intelligence (AI) in Australia. While 94 percent of Australians believe that AI systems should meet or exceed the safety standards of commercial aviation, experts suggest that the current level of risk associated with AI is alarmingly high. This disparity could threaten the technology adoption that the Australian government deems crucial for economic competitiveness.

Commercial flights boast a remarkably low risk of death at 1 in 30 million per flight, resulting in about 150 fatalities annually. In stark contrast, expert assessments place AI risk at a minimum of 4,000 times higher, with some experts arguing it could be as much as 30,000 times greater. The survey, which included 933 Australians, highlights a growing crisis of trust in AI technologies as Australians grapple with these unsettling figures.


The Productivity Commission has argued that AI-specific regulation should be a “last resort,” advocating instead for existing frameworks such as privacy laws and consumer protections. This approach, however, faces three critical challenges. First, public trust in technology companies is notably low: only 23 percent of Australians trust these corporations to ensure AI safety. Asked why they hesitate to use AI, respondents ranked privacy concerns first at 57 percent, followed by distrust of AI developers at 32 percent. This skepticism poses a significant barrier to the government’s aim of fostering AI adoption without stringent regulation.

Second, Australians overwhelmingly fear under-regulation rather than over-regulation, with 74 percent expressing concern that the government will not regulate AI sufficiently. Only 26 percent worry about excessive regulation. Furthermore, 83 percent believe that current regulations lag behind technological advancements. When respondents were given a choice between prioritizing risk management or driving innovation, 72 percent chose risk management, reflecting a strong public desire for a safer approach.

Lastly, the safety gap in AI is vast. Expert forecasters put the probability of an AI-caused catastrophe by 2100 at 2 to 12 percent, and some researchers and industry leaders place it as high as 25 percent. Even the lowest of these estimates far exceeds the level of risk the public is willing to tolerate.

Interestingly, many Australians appear willing to accept delays in AI development if it means enhancing safety. The survey found that 80 percent of respondents would support a 10-year halt in advanced AI development if it reduced the risk of catastrophe from 5 percent to 0.5 percent. Even longer delays, such as 50 years, garnered majority support; half of the participants would not accept a 1 percent catastrophic risk even in exchange for solutions to pressing global issues like climate change.

The barrier to AI adoption, then, is not over-regulation. It is a lack of trust, rooted in the wide gap between public safety expectations and current practice.

In contrast to Australia’s cautious approach, other nations are implementing more robust regulatory frameworks. The European Union enacted technology-specific AI regulation in 2024, while the UK, the US, and South Korea have established AI Safety Institutes. California has also begun regulating advanced AI models, recognizing that autonomous systems carry unique risks that existing consumer laws fail to address.

Australia is poised to follow suit, having announced plans for an AI Safety Institute that aligns with international efforts. Proposed measures include mandatory safety testing for frontier systems, independent audits, incident reporting, and whistleblower protections. The survey indicates that 90 percent of Australians believe these safeguards would enhance their trust in AI technologies.

Much like aviation, nuclear energy, and pharmaceuticals, which all have specific regulations tailored to their unique risks, Australians expect AI to adhere to similar standards. The path to increased adoption of AI technologies lies not in educational campaigns about the benefits of AI, but in establishing a framework that fosters genuine trust and safety among users. Only by addressing public concerns can AI transition from a perceived threat to a welcomed tool in everyday life.

For further insights, the full report of the Survey Assessing Risks from Artificial Intelligence (SARA) 2025 is available and provides a comprehensive overview of public sentiment on this pressing issue.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.