As artificial intelligence systems advance at an unprecedented pace, a senior U.K. government adviser has raised alarms about the urgent need for public institutions to prepare for the potential safety risks associated with these technologies. David Dalrymple, a program director and AI safety expert at the U.K. government’s Advanced Research and Invention Agency (ARIA), expressed grave concerns over the speed of AI development and the limited time available for governments to respond effectively.
In an interview with The Guardian, Dalrymple stated that the world “may not have time” to implement necessary safety measures before AI systems acquire capabilities that could fundamentally challenge human dominance in critical areas. He described a scenario where these systems could execute “all of the functions that humans perform to get things done in the world, but better,” raising significant fears about humanity’s ability to maintain control over “our civilisation, society, and planet.” His comments underscore the ongoing struggle among governments worldwide to balance the rapid pace of private-sector innovation with the slower tempo of regulation and public understanding.
Dalrymple noted a widening gap in understanding between policymakers and the companies developing advanced AI technologies. He emphasized that the velocity of progress in leading AI labs often outstrips the comprehension of those responsible for regulatory oversight. “Things are moving really fast,” he said, warning that from a safety perspective, society may not keep pace with technological advancements. He projected that within five years it is feasible that machines will perform most economically valuable tasks at higher quality and lower cost than humans can.
This projection raises critical questions about potential economic disruption, institutional preparedness, and whether governments can adapt before the changes become irreversible. Dalrymple also cautioned against assuming that advanced AI systems will be reliable merely because they are powerful, pointing to economic pressures that could drive hasty deployments before the science needed to guarantee reliability has matured.
ARIA, where Dalrymple serves, is a publicly financed agency tasked with funding high-risk research while operating independently of government. Its work includes ensuring that AI is used safely in vital sectors such as energy infrastructure. Because robust scientific assurances of safety may not arrive in time, Dalrymple suggested that controlling and mitigating the risks may be the most pragmatic near-term option.
“The next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” he stated.
These concerns align with recent findings from the U.K.’s AI Security Institute, which reported that AI capabilities are advancing at extraordinary rates, with performance on its benchmarks doubling approximately every eight months. Tests conducted by the institute indicated that advanced models can now complete apprentice-level tasks in significantly less time than before, and some systems demonstrated the ability to autonomously perform tasks that would take a human expert much longer. Notably, in evaluations focused on self-replication, a key safety concern, two leading models achieved success rates exceeding 60 percent.
Despite these advancements, the institute maintained that worst-case scenarios remain unlikely in routine conditions. Nonetheless, Dalrymple underscored that when the pace of technological progress outstrips the development of safety measures, the resulting risks could have dire implications for national security and the global economy. “Human civilisation,” he warned, is “sleepwalking into this transition,” even as those at the frontier of AI hope that the ensuing disruption will ultimately yield beneficial outcomes.


















































