Shane Legg, a co-founder of DeepMind, has put the likelihood of achieving “minimal AGI” by 2028 at 50 percent. In a recent interview with Hannah Fry, Legg outlined his conceptual framework for artificial general intelligence (AGI), distinguishing between minimal AGI, full AGI, and artificial superintelligence (ASI). He defines minimal AGI as an artificial agent capable of performing the cognitive tasks that most humans can handle, while full AGI encompasses the full range of human cognitive ability, including extraordinary achievements such as formulating new scientific theories and composing symphonies.
Legg’s predictions suggest that minimal AGI could be realized within the next two years, with full AGI potentially emerging three to six years after that. To assess progress toward these milestones, he proposes a rigorous testing methodology: an AI system would need to pass all standard human cognitive tasks, then withstand exhaustive evaluations by human teams searching for weaknesses over an extended period, with unrestricted access to the system’s inner workings.
The timeline Legg envisions aligns with a growing interest in AGI among both researchers and technology companies. As advancements in machine learning and neural networks accelerate, the debate surrounding the implications of AGI intensifies. Companies increasingly recognize the potential of AGI to revolutionize various sectors, from healthcare to finance, by automating complex decision-making processes that currently require human intervention.
Legg’s insights reflect an optimistic yet cautious stance on the future of AI. While he acknowledges the advances of recent years, he emphasizes that achieving full AGI involves navigating significant technical challenges. The framework he proposes serves as a guide for researchers to comprehensively evaluate AI systems’ cognitive capabilities. His assertion that minimal AGI could be reached in just a couple of years contrasts with the more conservative predictions of other experts in the field, who often point to the multifaceted hurdles still to be overcome.
The conversation surrounding AGI is not just academic; it has profound implications for society. The transition from minimal AGI to full AGI could fundamentally alter the workforce, raising questions about job displacement and the ethical considerations of deploying such powerful technologies. As companies invest heavily in AI research, the societal impacts of AGI development will require careful consideration and proactive regulatory frameworks to ensure responsible use.
As Legg’s projected date for minimal AGI draws closer, the global implications of such advancements remain a focal point for industry stakeholders. The acceleration of AI technologies, coupled with his projections, suggests an imminent transformation in how we understand intelligence itself. The prospect of AI systems rivaling human cognitive abilities raises significant questions about our relationship with technology and the future of human work.
As researchers and industry leaders continue to explore the complexities of AGI, the need for a collaborative approach becomes increasingly critical. Interdisciplinary dialogue among technologists, ethicists, and policymakers will play a crucial role in shaping a future where AGI can coexist with humanity in a beneficial manner. Looking ahead, the next few years may prove pivotal in determining how AI integrates into various aspects of life, paving the way for a new era of technological evolution.