Google DeepMind Reveals LLMs Can’t Achieve Consciousness, Challenging AGI Claims

Google DeepMind’s Alexander Lerchner argues that AI cannot achieve consciousness, challenging AGI narratives and framing apparent machine awareness as advanced simulation.

A recent publication from Google DeepMind has ignited discussion around the potential of artificial intelligence, particularly its relationship to consciousness. The paper, authored by DeepMind scientist Alexander Lerchner, directly contradicts CEO Demis Hassabis’s assertions regarding the imminent development of artificial general intelligence (AGI). Lerchner’s findings suggest that consciousness in AI systems is unattainable, despite the industry’s prevailing belief that sufficiently complex computation can give rise to conscious experience.

In his March 2026 paper, Lerchner introduces the concept of the “abstraction fallacy,” emphasizing that AI systems only simulate consciousness without genuinely experiencing it. He compares this to a celebrity impersonator who, despite delivering an impressive performance, lacks the authenticity of the actual celebrity. This framework challenges the assumption that impressive language manipulation by AI indicates any internal consciousness.

Central to Lerchner’s argument is the idea of “mapmaker dependency.” He posits that AI systems rely on humans to categorize and label the complexities of reality into forms the machines can understand. For example, the human workers who label training images are essential in creating the meanings that large language models (LLMs) appear to generate independently.

Lerchner further asserts that true consciousness necessitates a physical presence and intrinsic drives—a characteristic absent in digital systems. Johannes Jäger, an evolutionary systems biologist, reinforces this point by stating, “You have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that.” Without physical embodiment, LLMs remain mere “patterns on a hard drive,” activated only when prompted, devoid of any internal motivation or meaning beyond human-defined tasks.

This distinction between simulation and genuine instantiation suggests that AGI, in the absence of consciousness, may simply be a highly advanced tool rather than a sentient being. The implications are significant, inviting a reevaluation of the narratives that companies like Google have promoted regarding the capabilities of their AI technologies.

While Lerchner’s arguments have been received with interest, they do not represent a groundbreaking shift in the discourse on AI consciousness. Philosophers and leading researchers in the field, including Mark Bishop of Goldsmiths, University of London, acknowledge the validity of Lerchner’s arguments but note that similar positions have been articulated before. Bishop states that he supports “99 percent” of Lerchner’s conclusions but finds it unsurprising that Google allowed the publication, given the company’s existing narrative around AGI.

This situation creates a credibility paradox for those assessing claims made by AI companies. When internal researchers publish findings that contradict the corporate vision, it lays bare the divide between marketing strategies and scientific realities. The presence of such mixed signals raises questions about whether this discrepancy reflects a genuine uncertainty within the company or a calculated positioning in an ongoing debate about consciousness in machines.

As the discussion surrounding AI and consciousness continues to evolve, the need for transparency in the technology sector remains critical. The distinction between what AI can simulate and what it truly embodies will shape both public perception and future research directions. This ongoing dialogue may ultimately influence the trajectory of AI development and the ethical considerations that accompany it.

Written by the AiPressa Staff.
© 2025 AIPressa · Part of Buzzora Media · All rights reserved.