
AI Expert Pushes Back Timeline for Superintelligence, Now Sees 2034 as New Milestone

AI expert Daniel Kokotajlo revises his timeline for superintelligence to 2034, acknowledging slower-than-expected progress in autonomous coding.

A leading artificial intelligence expert has revised his predictions regarding the timeline for AI to achieve superintelligence, suggesting it will take longer than initially anticipated for these systems to code autonomously. Daniel Kokotajlo, a former employee of OpenAI, ignited a significant debate in April with his scenario, dubbed AI 2027, which envisioned unchecked AI development culminating in the creation of a superintelligence capable of outsmarting world leaders and potentially eliminating humanity.

The AI 2027 scenario quickly garnered a mix of supporters and critics. U.S. Vice President JD Vance appeared to reference it in an interview last May, discussing the competitive landscape of AI development in the United States and China. However, not all reactions were favorable; Gary Marcus, an emeritus professor of psychology and neural science at New York University, dismissed the piece as “pure science fiction mumbo jumbo.”

Timelines for achieving transformative artificial intelligence, often referred to as AGI (artificial general intelligence), have become a common topic among those concerned with AI safety. The release of ChatGPT in 2022 accelerated these timelines, prompting predictions of AGI’s arrival within mere decades or even years. Kokotajlo and his team initially projected 2027 as the year when AI would attain “fully autonomous coding,” although they acknowledged it was merely the most likely estimate, with some team members envisioning longer timelines.

Doubts about the immediacy of AGI are beginning to surface, alongside questions about the term’s significance. “A lot of other people have been pushing their timelines further out in the past year as they realize how jagged AI performance is,” explained Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report. “For a scenario like AI 2027 to happen, AI would need a lot more practical skills that are useful in real-world complexities.”

Henry Papadatos, executive director of the French AI nonprofit SaferAI, added that the term AGI was more relevant when AI systems were narrowly focused, such as playing chess or Go. “Now we have systems that are quite general already, and the term does not mean as much,” he said.

Kokotajlo’s AI 2027 hinges on the idea that AI agents will fully automate coding and AI research and development by 2027, triggering an “intelligence explosion” in which AI systems create increasingly advanced versions of themselves. In the scenario’s darkest branch, this culminates in humanity’s destruction by the mid-2030s to make way for more solar panels and data centers.

However, in their recent update, Kokotajlo and his co-authors have adjusted their expectations, forecasting that AI may achieve autonomous coding in the early 2030s rather than 2027. They have set 2034 as the new target for the development of superintelligence, and notably, they have refrained from speculating on when AI might pose a threat to humanity. “Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still,” Kokotajlo stated in a post on X.

The pursuit of creating AI capable of conducting AI research remains a priority for leading firms in the sector. Sam Altman, CEO of OpenAI, indicated in October that developing an automated AI researcher by March 2028 is an “internal goal,” though he cautioned, “We may totally fail at this goal.”

Andrea Castagna, a Brussels-based AI policy researcher, emphasized the complexities that dramatic AGI timelines fail to address. “The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years,” she noted. “The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that.”

As the conversation around AI’s future continues to evolve, the episode underscores the need for careful scrutiny of these predictions, since timelines and realities may diverge more than previously expected.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.