Rethink Priorities Study Reveals Low Probability of Consciousness in Today’s AI Systems

Rethink Priorities’ study finds that current large language models have a median probability of consciousness below the researchers’ prior estimate, underscoring the need for ethical caution as systems advance.

A study conducted by Rethink Priorities has concluded that current large language models (LLMs) are unlikely to possess consciousness, while providing strong evidence for consciousness in chickens and overwhelming support for it in humans. The study, part of the nonprofit research group’s AI Cognition Initiative, introduces a novel analytical framework known as the Digital Consciousness Model, which aggregates evidence from multiple theories of consciousness.

Using this model, researchers assessed various systems, including state-of-the-art LLMs, humans, chickens, and ELIZA, a simple chatbot from the 1960s. The analysis revealed that the evidence for consciousness in today’s AI systems is insufficient to support the notion of awareness, although it cannot be entirely ruled out. The researchers emphasized that as AI systems become increasingly sophisticated, their architectural and cognitive features could alter the likelihood of consciousness, raising significant ethical and policy implications.

“While our findings indicate that today’s AI systems are probably not conscious, this question must be approached with care as advancements continue,” said Derek Shiller, one of the lead researchers. The study suggests that even a small probability of AI consciousness justifies precautionary measures, particularly as the line between human-like interaction and machine behavior continues to blur.

The Digital Consciousness Model is designed to navigate the complexities surrounding the definition and detection of consciousness. Currently, there is no consensus among scientists regarding what constitutes consciousness, with theories ranging from brain structure and information processing to self-awareness and behavioral traits. Rather than determine definitively whether a system is conscious, the model quantifies how strongly evidence supports one view over another.

This Bayesian approach combines hundreds of observable indicators, including flexible attention, self-representation, and goal-directed behavior. Through expert surveys, researchers translated judgments about these indicators into probabilistic updates, allowing for a more systematic evaluation of consciousness across different systems.
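The article does not publish the model's internal mechanics, but the Bayesian updating it describes can be sketched in a few lines. The sketch below is purely illustrative: the prior of 0.2 and the per-indicator Bayes factors are hypothetical numbers chosen for the example, not values from the Rethink Priorities study.

```python
import math

def posterior_probability(prior, bayes_factors):
    """Update a prior probability of consciousness with one
    Bayes factor (likelihood ratio) per observed indicator.
    A factor > 1 favors consciousness; a factor < 1 counts against it."""
    # Work in log-odds so evidence from many indicators simply adds up.
    log_odds = math.log(prior / (1 - prior))
    for bf in bayes_factors:
        log_odds += math.log(bf)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical illustration: three indicators mildly favor consciousness
# (e.g. flexible attention, self-representation, goal-directed behavior)
# while two weigh strongly against it.
p = posterior_probability(0.2, [1.5, 1.2, 1.1, 0.4, 0.3])
print(round(p, 3))  # posterior ends up below the 0.2 prior
```

This mirrors the qualitative result reported below: when the unfavorable evidence outweighs the favorable, the posterior lands under the prior, so the evidence reduces rather than bolsters confidence.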

When evaluating the evidence, the model assigned a median probability of consciousness to LLMs that fell below the prior probability established at the outset, indicating that the evidence reduced rather than bolstered confidence in their consciousness. In stark contrast, the model strongly supported the existence of consciousness in humans and provided significant evidence for chickens, showcasing a clear hierarchy in consciousness attribution. ELIZA, the early chatbot, was overwhelmingly deemed not conscious, even under the most lenient criteria.

The nuanced findings for AI systems indicate that while evidence tends to weigh against consciousness, certain theoretical perspectives raise probabilities slightly. This divergence is crucial, as it underscores the necessity for ongoing evaluation as AI systems evolve. The researchers caution that the numerical probabilities produced by the model should not be interpreted as precise figures; rather, they are dependent on prior assumptions, which can significantly influence outcomes.

The model serves as a proof of concept, and the authors stress that their work represents an initial step in a broader conversation about machine consciousness. As AI capabilities expand—potentially incorporating features such as persistent memory or advanced self-modeling—the indicators that currently argue against consciousness may begin to shift. This presents a complex landscape for policymakers, developers, and users as they navigate the ethical considerations surrounding AI development.

Importantly, the study highlights the necessity of distinguishing between attributing consciousness to machines and recognizing the subjective experiences of humans and animals. The strong findings supporting chicken consciousness remind us that discussions about AI consciousness are intertwined with ongoing debates regarding animal welfare and moral responsibility.

As the researchers noted, much of the evidence regarding AI systems remains indirect, based on behavioral observations rather than direct access to internal mechanisms. This gap in knowledge poses challenges for external scrutiny and expert consensus. The report offers a comprehensive examination of modeling assumptions, data limitations, and potential biases, urging readers to view the model’s outputs as frameworks for understanding complexity rather than definitive answers.

In conclusion, the study by Rethink Priorities lays the groundwork for a more structured examination of machine consciousness. The Digital Consciousness Model stands as an early attempt to quantify a question that has long resisted systematic inquiry, signaling that as AI capabilities continue to evolve, so too will the implications of their potential consciousness.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.