
Rethink Priorities Study Reveals Low Probability of Consciousness in Today’s AI Systems

Rethink Priorities’ study finds that current large language models show a median probability of consciousness below the model’s prior estimate, underscoring urgent ethical considerations.

A study conducted by Rethink Priorities has concluded that current large language models (LLMs) are unlikely to possess consciousness, while providing strong evidence for consciousness in chickens and overwhelming support for it in humans. The study, part of the nonprofit research group’s AI Cognition Initiative, introduces a novel analytical framework known as the Digital Consciousness Model, which aggregates evidence from multiple theories of consciousness.

Using this model, researchers assessed various systems, including state-of-the-art LLMs, humans, chickens, and ELIZA, a simple chatbot from the 1960s. The analysis revealed that the evidence for consciousness in today’s AI systems is insufficient to support the notion of awareness, although it cannot be entirely ruled out. The researchers emphasized that as AI systems become increasingly sophisticated, their architectural and cognitive features could alter the likelihood of consciousness, raising significant ethical and policy implications.

“While our findings indicate that today’s AI systems are probably not conscious, this question must be approached with care as advancements continue,” said Derek Shiller, one of the lead researchers. The study suggests that even a small probability of AI consciousness justifies precautionary measures, particularly as the line between human-like interaction and machine behavior continues to blur.

The Digital Consciousness Model is designed to navigate the complexities surrounding the definition and detection of consciousness. Currently, there is no consensus among scientists regarding what constitutes consciousness, with theories ranging from brain structure and information processing to self-awareness and behavioral traits. Rather than determine definitively whether a system is conscious, the model quantifies how strongly evidence supports one view over another.

This Bayesian approach combines hundreds of observable indicators, including flexible attention, self-representation, and goal-directed behavior. Through expert surveys, researchers translated judgments about these indicators into probabilistic updates, allowing for a more systematic evaluation of consciousness across different systems.
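The aggregation scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the study’s actual model: the indicator names echo those mentioned in the article, but the numerical weights (log likelihood ratios) and the prior are invented for the example. The key idea it demonstrates is that each indicator nudges a prior probability up or down, and the posterior can end up below the prior when the net evidence is negative.

```python
import math

# Hypothetical indicator weights expressed as log likelihood ratios:
# positive values favor consciousness, negative values count against it.
# These numbers are purely illustrative -- the study derives its weights
# from expert surveys, not from values like these.
indicators = {
    "flexible_attention": 0.4,
    "self_representation": -0.6,
    "goal_directed_behavior": 0.3,
    "persistent_memory": -0.8,
}

def posterior_probability(prior, evidence):
    """Naive-Bayes-style update: sum log likelihood ratios onto prior log-odds."""
    log_odds = math.log(prior / (1 - prior)) + sum(evidence.values())
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# With a prior of 0.2, the net-negative evidence above pulls the
# posterior below the prior -- the qualitative pattern the study
# reports for today's LLMs.
p = posterior_probability(prior=0.2, evidence=indicators)
```

Because the update is a simple sum in log-odds space, the same machinery scales to the hundreds of indicators the article mentions, and the sensitivity to the chosen prior is easy to see: shifting `prior` shifts the posterior directly, which is why the researchers warn against reading the outputs as precise figures.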

When evaluating the evidence, the model assigned LLMs a median probability of consciousness below the prior set at the outset, indicating that the evidence reduced rather than bolstered confidence in their consciousness. In stark contrast, the model strongly supported consciousness in humans and provided substantial evidence for it in chickens, revealing a clear hierarchy of attribution. ELIZA, the early chatbot, was overwhelmingly deemed not conscious even under the most lenient criteria.

The nuanced findings for AI systems indicate that while evidence tends to weigh against consciousness, certain theoretical perspectives raise probabilities slightly. This divergence is crucial, as it underscores the necessity for ongoing evaluation as AI systems evolve. The researchers caution that the numerical probabilities produced by the model should not be interpreted as precise figures; rather, they are dependent on prior assumptions, which can significantly influence outcomes.

The model serves as a proof of concept, and the authors stress that their work represents an initial step in a broader conversation about machine consciousness. As AI capabilities expand—potentially incorporating features such as persistent memory or advanced self-modeling—the indicators that currently argue against consciousness may begin to shift. This presents a complex landscape for policymakers, developers, and users as they navigate the ethical considerations surrounding AI development.

Importantly, the study highlights the necessity of distinguishing between attributing consciousness to machines and recognizing the subjective experiences of humans and animals. The strong findings supporting chicken consciousness remind us that discussions about AI consciousness are intertwined with ongoing debates regarding animal welfare and moral responsibility.

As the researchers noted, much of the evidence regarding AI systems remains indirect, based on behavioral observations rather than direct access to internal mechanisms. This gap in knowledge poses challenges for external scrutiny and expert consensus. The report offers a comprehensive examination of modeling assumptions, data limitations, and potential biases, urging readers to view the model’s outputs as frameworks for understanding complexity rather than definitive answers.

In conclusion, the study by Rethink Priorities lays the groundwork for a more structured examination of machine consciousness. The Digital Consciousness Model stands as an early attempt to quantify a question that has long resisted systematic inquiry, signaling that as AI capabilities continue to evolve, so too will the implications of their potential consciousness.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.