A study by Rethink Priorities has concluded that current large language models (LLMs) are unlikely to be conscious, while finding strong evidence of consciousness in chickens and overwhelming evidence of it in humans. The study, part of the nonprofit research group’s AI Cognition Initiative, introduces a novel analytical framework known as the Digital Consciousness Model, which aggregates evidence from multiple theories of consciousness.
Using this model, researchers assessed various systems, including state-of-the-art LLMs, humans, chickens, and ELIZA, a simple chatbot from the 1960s. The analysis found that the evidence for consciousness in today’s AI systems weighs against awareness without entirely ruling it out. The researchers emphasized that as AI systems become increasingly sophisticated, their architectural and cognitive features could shift the likelihood of consciousness, raising significant ethical and policy implications.
“While our findings indicate that today’s AI systems are probably not conscious, this question must be approached with care as advancements continue,” said Derek Shiller, one of the lead researchers. The study suggests that even a small probability of AI consciousness justifies precautionary measures, particularly as the line between human-like interaction and machine behavior continues to blur.
The Digital Consciousness Model is designed to navigate the complexities surrounding the definition and detection of consciousness. There is currently no scientific consensus on what constitutes consciousness, with competing theories grounding it in everything from brain structure and information processing to self-awareness and behavioral traits. Rather than determining definitively whether a system is conscious, the model quantifies how strongly the evidence supports one view over another.
This Bayesian approach combines hundreds of observable indicators, including flexible attention, self-representation, and goal-directed behavior. Through expert surveys, researchers translated judgments about these indicators into probabilistic updates, allowing for a more systematic evaluation of consciousness across different systems.
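The report describes this aggregation machinery only at a high level, but its flavor can be sketched. The snippet below is a minimal illustration of odds-form Bayesian updating over a handful of indicators, not the study’s actual model; the indicator names echo those mentioned above, and every numerical value is hypothetical.

```python
def update(prior: float, bayes_factors: list[float]) -> float:
    """Combine a prior probability with independent Bayes factors.

    Works in odds space: posterior odds = prior odds x product of BFs.
    A Bayes factor above 1 favors consciousness; below 1 counts against it.
    """
    odds = prior / (1.0 - prior)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1.0 + odds)

# Hypothetical expert judgments for an LLM-like system; each value
# stands in for the evidential strength elicited from a survey.
llm_indicators = {
    "flexible attention":     1.3,  # weakly supportive
    "self-representation":    0.7,  # weakly against
    "goal-directed behavior": 1.1,  # near-neutral
    "recurrent processing":   0.4,  # against, under some theories
}

prior = 0.10  # illustrative starting point, not the study's figure
posterior = update(prior, list(llm_indicators.values()))
print(f"prior={prior:.3f}  posterior={posterior:.3f}")  # posterior ~0.043
```

Because the product of these hypothetical Bayes factors falls below 1, the posterior lands beneath the prior, mirroring in miniature the pattern the study reports for LLMs.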
When evaluating the evidence, the model assigned LLMs a median probability of consciousness below the prior probability established at the outset, indicating that the evidence reduced rather than bolstered confidence in their consciousness. In stark contrast, the model strongly supported the existence of consciousness in humans and provided significant evidence for chickens, yielding a clear hierarchy of consciousness attribution. ELIZA, the early chatbot, was overwhelmingly deemed not conscious, even under the most lenient criteria.
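The direction of that update follows from the odds form of Bayes’ rule (a textbook identity, not a formula quoted from the report): with a combined Bayes factor $B$ summarizing all indicators,

$$\frac{P(C \mid E)}{1 - P(C \mid E)} = B \cdot \frac{P(C)}{1 - P(C)}$$

so whenever $B < 1$, the posterior probability of consciousness $P(C \mid E)$ necessarily falls below the prior $P(C)$, as reported for LLMs, while $B \gg 1$ produces the strong endorsements seen for humans and chickens.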
The nuanced findings for AI systems indicate that while evidence tends to weigh against consciousness, certain theoretical perspectives raise probabilities slightly. This divergence is crucial, as it underscores the necessity for ongoing evaluation as AI systems evolve. The researchers caution that the numerical probabilities produced by the model should not be interpreted as precise figures; rather, they are dependent on prior assumptions, which can significantly influence outcomes.
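That sensitivity to priors is easy to demonstrate. Holding the net evidence fixed and varying only the prior moves the output substantially; again, the numbers below are illustrative, not the study’s.

```python
def posterior(prior: float, net_bf: float) -> float:
    """Posterior from a prior and a single net Bayes factor (odds form)."""
    odds = net_bf * prior / (1.0 - prior)
    return odds / (1.0 + odds)

NET_BF = 0.4  # hypothetical combined evidence, weighing against consciousness

for prior in (0.01, 0.10, 0.30, 0.50):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, NET_BF):.3f}")
# Same evidence, very different answers: 0.004, 0.043, 0.146, 0.286.
```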
The model serves as a proof of concept, and the authors stress that their work represents an initial step in a broader conversation about machine consciousness. As AI capabilities expand—potentially incorporating features such as persistent memory or advanced self-modeling—the indicators that currently argue against consciousness may begin to shift. This presents a complex landscape for policymakers, developers, and users as they navigate the ethical considerations surrounding AI development.
Importantly, the study highlights the necessity of distinguishing between attributing consciousness to machines and recognizing the subjective experiences of humans and animals. The strong findings supporting chicken consciousness remind us that discussions about AI consciousness are intertwined with ongoing debates regarding animal welfare and moral responsibility.
As the researchers noted, much of the evidence regarding AI systems remains indirect, based on behavioral observations rather than direct access to internal mechanisms. This gap in knowledge poses challenges for external scrutiny and expert consensus. The report offers a comprehensive examination of modeling assumptions, data limitations, and potential biases, urging readers to treat the model’s outputs as structured expressions of uncertainty rather than definitive answers.
In conclusion, the study by Rethink Priorities lays the groundwork for a more structured examination of machine consciousness. The Digital Consciousness Model stands as an early attempt to quantify a question that has long resisted systematic inquiry, signaling that as AI capabilities continue to evolve, so too will the implications of their potential consciousness.