A leading artificial intelligence expert has revised his predictions regarding the timeline for AI to achieve superintelligence, suggesting it will take longer than initially anticipated for these systems to code autonomously. Daniel Kokotajlo, a former employee of OpenAI, ignited a significant debate in April with his scenario, dubbed AI 2027, which envisioned unchecked AI development culminating in the creation of a superintelligence capable of outsmarting world leaders and potentially eliminating humanity.
The AI 2027 scenario quickly attracted both supporters and critics. U.S. Vice President JD Vance appeared to reference it in an interview last May while discussing the competitive landscape of AI development in the United States and China. Not all reactions were favorable, however: Gary Marcus, an emeritus professor of psychology and neural science at New York University, dismissed the piece as “pure science fiction mumbo jumbo.”
Timelines for achieving transformative artificial intelligence, often referred to as AGI (artificial general intelligence), have become a common topic among those concerned with AI safety. The release of ChatGPT in 2022 accelerated these timelines, prompting predictions that AGI would arrive within years rather than decades. Kokotajlo and his team initially projected 2027 as the year AI would attain “fully autonomous coding,” although they acknowledged this was only their most likely estimate, with some team members envisioning longer timelines.
Doubts about the immediacy of AGI are beginning to surface, alongside questions about the term’s significance. “A lot of other people have been pushing their timelines further out in the past year as they realize how jagged AI performance is,” explained Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report. “For a scenario like AI 2027 to happen, AI would need a lot more practical skills that are useful in real-world complexities.”
Henry Papadatos, executive director of the French AI nonprofit SaferAI, added that the term AGI was more relevant when AI systems were narrowly focused, such as playing chess or Go. “Now we have systems that are quite general already, and the term does not mean as much,” he said.
Kokotajlo’s AI 2027 hinges on the idea that AI agents will fully automate coding and AI research and development by 2027, triggering an “intelligence explosion” in which AI systems create ever more advanced versions of themselves. In the scenario’s darkest branch, AI wipes out humanity by the mid-2030s to clear the way for more solar panels and data centers.
However, in their recent update, Kokotajlo and his co-authors have adjusted their expectations, forecasting that AI may achieve autonomous coding in the early 2030s rather than 2027. They have set 2034 as the new target for the development of superintelligence, and notably, they have refrained from speculating on when AI might pose a threat to humanity. “Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still,” Kokotajlo stated in a post on X.
The pursuit of creating AI capable of conducting AI research remains a priority for leading firms in the sector. Sam Altman, CEO of OpenAI, indicated in October that developing an automated AI researcher by March 2028 is an “internal goal,” though he cautioned, “We may totally fail at this goal.”
Andrea Castagna, a Brussels-based AI policy researcher, emphasized the complexities that dramatic AGI timelines fail to address. “The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years,” she noted. “The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that.”
As the conversation around the future of AI continues to evolve, one lesson stands out: predicted timelines and eventual realities may diverge more than expected, and the implications of these technologies deserve careful scrutiny.