Rethink Priorities Study Reveals Low Probability of Consciousness in Today’s AI Systems

Rethink Priorities’ study finds that current large language models show a median probability of consciousness below the prior assumed at the outset, underscoring urgent ethical considerations as AI systems grow more capable.

A study conducted by Rethink Priorities has concluded that current large language models (LLMs) are unlikely to possess consciousness, while providing strong evidence for consciousness in chickens and overwhelming support for it in humans. The study, part of the nonprofit research group’s AI Cognition Initiative, introduces a novel analytical framework known as the Digital Consciousness Model, which aggregates evidence from multiple theories of consciousness.

Using this model, researchers assessed a range of systems, including state-of-the-art LLMs, humans, chickens, and ELIZA, a simple chatbot from the 1960s. The analysis found that the evidence for consciousness in today’s AI systems is too weak to support an attribution of awareness, although consciousness cannot be ruled out entirely. The researchers emphasized that as AI systems grow more sophisticated, their architectural and cognitive features could shift the likelihood of consciousness, with significant ethical and policy implications.

“While our findings indicate that today’s AI systems are probably not conscious, this question must be approached with care as advancements continue,” said Derek Shiller, one of the lead researchers. The study suggests that even a small probability of AI consciousness justifies precautionary measures, particularly as the line between human-like interaction and machine behavior continues to blur.

The Digital Consciousness Model is designed to navigate the complexities surrounding the definition and detection of consciousness. Currently, there is no consensus among scientists regarding what constitutes consciousness, with theories ranging from brain structure and information processing to self-awareness and behavioral traits. Rather than determine definitively whether a system is conscious, the model quantifies how strongly evidence supports one view over another.

This Bayesian approach combines hundreds of observable indicators, including flexible attention, self-representation, and goal-directed behavior. Through expert surveys, researchers translated judgments about these indicators into probabilistic updates, allowing for a more systematic evaluation of consciousness across different systems.
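To make the mechanics concrete, the sketch below shows one common way this kind of Bayesian evidence aggregation can be implemented: each indicator contributes a likelihood ratio, the ratios are combined with a prior in log-odds space, and the posterior ends up below the prior whenever the evidence is net-negative. The indicator names, likelihood ratios, and prior here are illustrative assumptions for the sake of the example, not values from the Rethink Priorities report.

```python
import math

def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine a prior with independent evidence via Bayes' rule in log-odds space.

    Each likelihood ratio is P(indicator | conscious) / P(indicator | not conscious):
    values > 1 raise the posterior, values < 1 lower it.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Hypothetical indicators and likelihood ratios for an LLM (made-up numbers).
llm_indicators = {
    "flexible_attention": 1.3,             # weakly favors consciousness
    "self_representation": 0.6,            # weakly disfavors it
    "goal_directed_behavior": 1.1,         # roughly neutral
    "global_workspace_architecture": 0.4,  # strongly disfavors it
}

prior = 0.05  # assumed prior probability of consciousness
posterior = posterior_probability(prior, list(llm_indicators.values()))
print(f"prior = {prior:.3f}, posterior = {posterior:.3f}")
# The net evidence here is negative, so the posterior (~0.018) lands below the
# prior, mirroring the qualitative pattern the study reports for LLMs.
```

In the actual model the updates are derived from expert survey judgments and weighted across competing theories of consciousness, but the same basic logic applies: the output is only as informative as the prior and the likelihood ratios fed into it.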

When evaluating the evidence, the model assigned LLMs a median probability of consciousness below the prior probability established at the outset, indicating that the evidence reduced rather than bolstered confidence in their consciousness. In stark contrast, the model strongly supported the existence of consciousness in humans and provided substantial evidence for chickens, revealing a clear hierarchy in consciousness attribution. ELIZA, the early chatbot, was overwhelmingly deemed not conscious, even under the most lenient criteria.

The nuanced findings for AI systems indicate that while evidence tends to weigh against consciousness, certain theoretical perspectives raise probabilities slightly. This divergence is crucial, as it underscores the necessity for ongoing evaluation as AI systems evolve. The researchers caution that the numerical probabilities produced by the model should not be interpreted as precise figures; rather, they are dependent on prior assumptions, which can significantly influence outcomes.

The model serves as a proof of concept, and the authors stress that their work represents an initial step in a broader conversation about machine consciousness. As AI capabilities expand—potentially incorporating features such as persistent memory or advanced self-modeling—the indicators that currently argue against consciousness may begin to shift. This presents a complex landscape for policymakers, developers, and users as they navigate the ethical considerations surrounding AI development.

Importantly, the study highlights the necessity of distinguishing between attributing consciousness to machines and recognizing the subjective experiences of humans and animals. The strong findings supporting chicken consciousness remind us that discussions about AI consciousness are intertwined with ongoing debates regarding animal welfare and moral responsibility.

As the researchers noted, much of the evidence regarding AI systems remains indirect, based on behavioral observations rather than direct access to internal mechanisms. This gap in knowledge poses challenges for external scrutiny and expert consensus. The report offers a comprehensive examination of modeling assumptions, data limitations, and potential biases, urging readers to view the model’s outputs as frameworks for understanding complexity rather than definitive answers.

In conclusion, the study by Rethink Priorities lays the groundwork for a more structured examination of machine consciousness. The Digital Consciousness Model stands as an early attempt to quantify a question that has long resisted systematic inquiry, signaling that as AI capabilities continue to evolve, so too will the implications of their potential consciousness.
