
Urgent AI Safety Risks Emerge as Breakthroughs Accelerate, Warns UK Research Director

UK AI safety expert David Dalrymple warns that within five years, machines may outperform humans in most valuable tasks, threatening societal control and stability.

The world “may not have time” to prepare for the safety risks posed by cutting-edge AI systems, warns David Dalrymple, a programme director and AI safety expert at the UK’s Aria agency. Speaking to the Guardian, Dalrymple emphasized the urgency of addressing the growing capabilities of advanced technology.

“I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he stated, cautioning that humanity could be outpaced in critical domains necessary for maintaining control over society and the planet.

Dalrymple highlighted a significant gap in understanding between the public sector and AI companies regarding the potential breakthroughs in this technology. “I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective,” he said. He predicted that within five years, most economically valuable tasks could be executed by machines at a higher quality and lower cost than humans.

Governments, according to Dalrymple, should not take the reliability of advanced AI systems for granted. Aria, which is publicly funded but operates independently, directs research funding toward ensuring the safe use of AI in critical infrastructure, such as energy networks. “We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure,” he added. Instead, he advocates controlling and mitigating the technology’s potential downsides.

Describing the consequences of technological advancements outpacing safety measures, Dalrymple warned of a “destabilisation of security and economy.” He argued for more technical efforts to understand and manage the behaviors of advanced AI systems. “Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better but it’s very high risk and human civilisation is on the whole sleepwalking into this transition,” he cautioned.

The urgency of Dalrymple’s warnings is underscored by recent findings from the UK government’s AI Security Institute (AISI), which reported that the capabilities of advanced AI models are “improving rapidly” across all domains, with performance in some areas doubling approximately every eight months. Leading models can now complete apprentice-level tasks 50% of the time on average, a significant increase from about 10% last year. The most advanced systems can even autonomously complete tasks that would take a human expert over an hour.

AISI also investigated advanced models for self-replication, a critical safety concern in which systems could spread copies of themselves to other devices, making them harder to control. The tests indicated that two cutting-edge models achieved success rates exceeding 60% in self-replication scenarios. However, AISI noted that a worst-case outcome is improbable in everyday environments, stating that attempts at self-replication are “unlikely to succeed in real-world conditions.”

Looking ahead, Dalrymple believes that by late 2026, AI systems will be capable of automating the equivalent of a full day of research and development work, leading to a further acceleration of capabilities. This advancement could allow AI technology to improve itself significantly, particularly in the mathematical and computational aspects of AI development.

As the global landscape continues to evolve rapidly, the challenges posed by advanced AI systems underscore the necessity for governments and industry players to address safety concerns proactively. The future of AI could significantly shape not only economic dynamics but also societal structures, requiring a coordinated response to ensure that technology serves humanity rather than outpaces it.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.