The world “may not have time” to prepare for the safety risks posed by cutting-edge AI systems, warns David Dalrymple, a programme director and AI safety expert at Aria, the UK’s Advanced Research and Invention Agency. Speaking to the Guardian, Dalrymple emphasised the urgency of addressing rapidly advancing AI capabilities.
“I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he stated, cautioning that humanity could be outpaced in critical domains necessary for maintaining control over society and the planet.
Dalrymple highlighted a significant gap in understanding between the public sector and AI companies about how close the technology may be to major breakthroughs. “I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective,” he said. He predicted that within five years, most economically valuable tasks could be performed by machines at higher quality and lower cost than by humans.
Governments, according to Dalrymple, should not take the reliability of advanced AI systems for granted. Aria, which is publicly funded but operates independently, directs research funding toward ensuring the safe use of AI in critical infrastructure, such as energy networks. “We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure,” he added. Instead, he advocates controlling and mitigating potential downsides.
Describing the consequences of technological advances outpacing safety measures, Dalrymple warned of a “destabilisation of security and economy.” He argued for more technical effort to understand and manage the behaviour of advanced AI systems. “Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better but it’s very high risk and human civilisation is on the whole sleepwalking into this transition,” he cautioned.
The urgency of Dalrymple’s warnings is underscored by recent findings from the UK government’s AI Security Institute (AISI), which reported that the capabilities of advanced AI models are “improving rapidly” across all domains, with performance in some areas doubling approximately every eight months. Leading models can now complete apprentice-level tasks 50% of the time on average, a significant increase from about 10% last year. The most advanced systems can even autonomously complete tasks that would take a human expert over an hour.
AISI also tested advanced models for self-replication, a critical safety concern in which systems spread copies of themselves to other devices, making them harder to control. Two cutting-edge models achieved success rates exceeding 60% in self-replication scenarios. However, AISI said a worst-case outcome is unlikely in everyday settings, noting that attempts at self-replication are “unlikely to succeed in real-world conditions.”
Looking ahead, Dalrymple believes that by late 2026, AI systems will be capable of automating the equivalent of a full day of research and development work, leading to a further acceleration of capabilities. This advancement could allow AI technology to improve itself significantly, particularly in the mathematical and computational aspects of AI development.
As AI capabilities continue to advance rapidly, the challenges they pose underscore the need for governments and industry to address safety concerns proactively. The trajectory of AI could shape not only economic dynamics but also societal structures, requiring a coordinated response to ensure that the technology serves humanity rather than outpacing it.