
UK AI Expert Warns: Rapid Advances May Outpace Safety Measures, Threatening Control

UK AI advisor David Dalrymple warns that rapid advancements may enable machines to outperform humans in critical tasks within five years, risking safety and control.

As artificial intelligence systems advance at an unprecedented pace, a senior U.K. government adviser has raised alarms about the urgent need for public institutions to prepare for the potential safety risks associated with these technologies. David Dalrymple, a program director and AI safety expert at the U.K. government’s Advanced Research and Invention Agency (ARIA), expressed grave concerns over the speed of AI development and the limited time available for governments to respond effectively.

In an interview with The Guardian, Dalrymple stated that the world “may not have time” to implement necessary safety measures before AI systems acquire capabilities that could fundamentally challenge human dominance in critical areas. He described a scenario where these systems could execute “all of the functions that humans perform to get things done in the world, but better,” raising significant fears about humanity’s ability to maintain control over “our civilisation, society, and planet.” His comments underscore the ongoing struggle among governments worldwide to balance the rapid pace of private-sector innovation with the slower tempo of regulation and public understanding.

Dalrymple noted a widening gap in understanding between policymakers and the companies developing advanced AI technologies. He emphasized that the pace of progress inside leading AI labs far outstrips the comprehension of those responsible for regulatory oversight. “Things are moving really fast,” he said, warning that from a safety perspective, society may not keep pace with technological advancements. He projected that within five years it is feasible for most economically valuable tasks to be performed by machines at higher quality and lower cost than humans can achieve.

This projection raises critical questions about potential economic disruption, institutional preparedness, and whether governments can adapt before the changes become irreversible. Dalrymple also cautioned against the assumption that advanced AI systems will be reliable merely because they are powerful. He pointed to economic pressures that could lead to hasty deployments before the science needed to guarantee reliability has fully matured.

ARIA, where Dalrymple serves, is tasked with funding high-risk research and operates independently despite being publicly financed. The agency focuses on ensuring that AI is used safely in vital sectors, including energy infrastructure. While robust scientific assurances of safety may not arrive in time, Dalrymple suggested that controlling and mitigating the risks might be the most pragmatic near-term option.

“The next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” he stated.

These concerns align with recent findings from the U.K.’s AI Security Institute, which reported that AI capabilities are advancing at extraordinary rates, with performance improvements doubling approximately every eight months. Tests conducted by the institute indicated that advanced models can now complete apprentice-level tasks in significantly less time, and some systems demonstrated the ability to autonomously perform tasks that would take a human expert much longer. Notably, in evaluations focused on self-replication—a paramount safety concern—two leading models achieved success rates exceeding 60 percent.

Despite these advancements, the institute maintained that worst-case scenarios remain unlikely in routine conditions. Nonetheless, Dalrymple underscored that when the pace of technological progress outstrips the development of safety measures, the resulting risks could have dire implications for national security and the global economy. “Human civilisation,” he warned, is “sleepwalking into this transition,” even as those at the frontier of AI hope that the ensuing disruption will ultimately yield beneficial outcomes.
