Top Stories

AI Expert Pushes Back Timeline for Superintelligence, Now Sees 2034 as New Milestone

AI expert Daniel Kokotajlo revises his timeline for superintelligence to 2034, acknowledging slower-than-expected progress in autonomous coding.

A leading artificial intelligence expert has revised his predictions regarding the timeline for AI to achieve superintelligence, suggesting it will take longer than initially anticipated for these systems to code autonomously. Daniel Kokotajlo, a former employee of OpenAI, ignited a significant debate in April with his scenario, dubbed AI 2027, which envisioned unchecked AI development culminating in the creation of a superintelligence capable of outsmarting world leaders and potentially eliminating humanity.

The AI 2027 scenario quickly garnered a mix of supporters and critics. U.S. Vice President JD Vance appeared to reference it in an interview last May, discussing the competitive landscape of AI development in the United States and China. However, not all reactions were favorable; Gary Marcus, a professor emeritus of psychology and neural science at New York University, dismissed the piece as “pure science fiction mumbo jumbo.”

Timelines for achieving transformative artificial intelligence, often referred to as AGI (artificial general intelligence), have become a common topic among those concerned with AI safety. The release of ChatGPT in 2022 accelerated these timelines, prompting predictions of AGI’s arrival within mere decades or even years. Kokotajlo and his team initially projected 2027 as the year when AI would attain “fully autonomous coding,” although they acknowledged it was merely the most likely estimate, with some team members envisioning longer timelines.

Doubts about the immediacy of AGI are beginning to surface, alongside questions about the term’s significance. “A lot of other people have been pushing their timelines further out in the past year as they realize how jagged AI performance is,” explained Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report. “For a scenario like AI 2027 to happen, AI would need a lot more practical skills that are useful in real-world complexities.”

Henry Papadatos, executive director of the French AI nonprofit SaferAI, added that the term AGI was more relevant when AI systems were narrowly focused, such as playing chess or Go. “Now we have systems that are quite general already, and the term does not mean as much,” he said.

Kokotajlo’s AI 2027 hinges on the concept that AI agents will autonomously automate coding and AI research and development by 2027, leading to an “intelligence explosion.” This could result in AI systems creating increasingly advanced versions of themselves, potentially culminating in humanity’s destruction by the mid-2030s to clear the way for more solar panels and data centers.

However, in their recent update, Kokotajlo and his co-authors have adjusted their expectations, forecasting that AI may achieve autonomous coding in the early 2030s rather than 2027. They have set 2034 as the new target for the development of superintelligence, and notably, they have refrained from speculating on when AI might pose a threat to humanity. “Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still,” Kokotajlo stated in a post on X.

The pursuit of creating AI capable of conducting AI research remains a priority for leading firms in the sector. Sam Altman, CEO of OpenAI, indicated in October that developing an automated AI researcher by March 2028 is an “internal goal,” though he cautioned, “We may totally fail at this goal.”

Andrea Castagna, a Brussels-based AI policy researcher, emphasized the complexities that dramatic AGI timelines fail to address. “The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years,” she noted. “The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that.”

As the conversation around the future of AI continues to evolve, the revised forecast underscores how far predicted timelines and on-the-ground realities may diverge, and why claims about imminent superintelligence warrant careful scrutiny.

Written By AiPressa Staff

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.