DeepMind’s Shane Legg Predicts 50% Chance of Minimal AGI by 2028

DeepMind co-founder Shane Legg forecasts a 50% chance of achieving minimal AGI by 2028, potentially transforming industries and workforce dynamics.

Shane Legg, a co-founder of DeepMind, has put the likelihood of achieving “minimal AGI” by 2028 at 50 percent. In a recent interview with Hannah Fry, Legg outlined his conceptual framework for artificial general intelligence (AGI), distinguishing between minimal AGI, full AGI, and artificial superintelligence (ASI). He defines minimal AGI as an artificial agent able to perform the cognitive tasks that most humans can typically handle, while full AGI encompasses the entirety of human cognitive ability, including exceptional achievements such as formulating new scientific theories or composing symphonies.

Legg’s predictions suggest that minimal AGI could be realized by 2028, with full AGI potentially emerging three to six years thereafter. To assess progress toward these milestones, he proposes a rigorous testing methodology: an AI system would need to pass all standard human cognitive tests while teams of human evaluators, given unrestricted access to the system’s inner workings, search exhaustively for weaknesses over an extended period.

The timeline Legg envisions aligns with a growing interest in AGI among both researchers and technology companies. As advancements in machine learning and neural networks accelerate, the debate surrounding the implications of AGI intensifies. Companies increasingly recognize the potential of AGI to revolutionize various sectors, from healthcare to finance, by automating complex decision-making processes that currently require human intervention.

Legg’s insights reflect an optimistic yet cautious stance on the future of AI. While he acknowledges the advances of recent years, he emphasizes that achieving full AGI means navigating significant technical challenges. The framework he proposes gives researchers a guide for comprehensively evaluating AI systems’ cognitive capabilities. His assertion that minimal AGI could arrive by 2028 contrasts with the more conservative predictions of other experts in the field, who often point to the multifaceted hurdles still to be overcome.

The conversation surrounding AGI is not just academic; it has profound implications for society. The transition from minimal AGI to full AGI could fundamentally alter the workforce, raising questions about job displacement and the ethical considerations of deploying such powerful technologies. As companies invest heavily in AI research, the societal impacts of AGI development will require careful consideration and proactive regulatory frameworks to ensure responsible use.

As the timeline for achieving minimal AGI draws closer, the global implications of such advancements remain a focal point for industry stakeholders. The acceleration of AI technologies, coupled with Legg’s projections, suggests an imminent transformation in how we understand intelligence itself. The potential for AI systems to rival human cognitive abilities raises significant questions about our relationship with technology and the future of human work.

As researchers and industry leaders continue to explore the complexities of AGI, the need for a collaborative approach becomes increasingly critical. Interdisciplinary dialogue among technologists, ethicists, and policymakers will play a crucial role in shaping a future where AGI can coexist with humanity in a beneficial manner. Looking ahead, the next few years may prove pivotal in determining how AI integrates into various aspects of life, paving the way for a new era of technological evolution.

