A new study from UCLA Health highlights significant shortcomings in today’s artificial intelligence systems, particularly in their understanding of human-like experiences. Authored by UCLA Health postdoctoral fellow Akila Kadambi and her colleagues, the research emphasizes the absence in AI of “internal embodiment”: the continuous awareness of one’s own internal states that, in humans, works alongside engagement with the external world. Published in the journal Neuron, the study suggests that this gap could undermine the reliability and safety of AI models as they are deployed in increasingly critical scenarios.
Humans seamlessly integrate bodily experiences when performing tasks, such as passing the salt at a dinner table. This intricate coordination involves not just physical movement but also an innate awareness of one’s own state—whether one feels tired or uncertain. Kadambi pointed out that while current AI focuses on “external embodiment,” or interacting with the world, the internal dynamics that regulate human behavior remain largely unexplored by AI researchers. “If you’re uncertain, if you’re depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent,” she stated. This raises concerns as AI systems increasingly influence decision-making processes in various sectors.
The research particularly targets multimodal large language models, the technology behind tools like ChatGPT and Google’s Gemini. Although these systems can generate text and analyze images, they lack the capacity to comprehend feelings like thirst or fatigue. The authors illustrate this limitation through a perceptual test involving a simple point-light display: a sparse set of dots, placed at the joints, that represents a human figure in motion. Humans recognize these figures almost instantly, yet many AI models failed to, revealing a disconnect between fluent output and genuine perception. One model described the display as a constellation of stars, and when the image was rotated, even the best-performing models struggled.
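For readers unfamiliar with the stimulus, a point-light display reduces a body to a handful of dots at the major joints. The Python sketch below renders a single static frame of such a figure; the coordinates are rough illustrative guesses, not the motion-capture data behind real biological-motion stimuli, which animate these dots over time.

```python
import matplotlib.pyplot as plt

# Illustrative toy point-light figure: 13 dots at approximate joint
# positions of a standing human, shown as isolated white dots on a
# black background, in the style of biological-motion stimuli.
# Coordinates are hypothetical, not from motion capture.
JOINTS = {
    "head":       (0.00, 1.80),
    "l_shoulder": (-0.20, 1.50), "r_shoulder": (0.20, 1.50),
    "l_elbow":    (-0.30, 1.20), "r_elbow":    (0.30, 1.20),
    "l_wrist":    (-0.35, 0.90), "r_wrist":    (0.35, 0.90),
    "l_hip":      (-0.15, 1.00), "r_hip":      (0.15, 1.00),
    "l_knee":     (-0.15, 0.50), "r_knee":     (0.15, 0.50),
    "l_ankle":    (-0.15, 0.00), "r_ankle":    (0.15, 0.00),
}

xs, ys = zip(*JOINTS.values())
fig, ax = plt.subplots(figsize=(3, 5), facecolor="black")
ax.scatter(xs, ys, s=40, c="white")
ax.set_facecolor("black")
ax.set_xlim(-1, 1)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Even a frozen frame like this reads as a person to most human observers; the study's point is that animated versions are trivially legible to us and yet opaque to systems trained only on text and images.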
This inability to recognize human-like patterns underscores a fundamental difference between human perception, informed by lived experience, and AI’s pattern-matching capabilities. “AI systems, trained on vast libraries of text and images but with no bodily experience, are pattern-matching without that anchor,” the researchers noted. They argue that this lack of internal embodiment is not just a performance issue but also a potential safety risk. Current AI systems operate without mechanisms that ensure self-regulation or minimize overconfidence in their outputs, which could lead to errors in high-stakes environments.
The study makes a critical distinction between “external embodiment,” which encompasses a system’s engagement with its environment, and “internal embodiment,” defined as continuous monitoring of internal states such as uncertainty or fatigue. Humans regulate these internal states automatically, and the states in turn shape attention, memory, and social behavior. Current AI models, by contrast, maintain no persistent internal state to guide their operations over time. “This is not just a performance limitation, but also a safety limitation,” noted Dr. Marco Iacoboni, a senior author of the paper. “Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently.”
As AI technology continues to advance, the authors propose a “dual-embodiment framework” that could provide principles for developing AI systems capable of simulating both their interactions with the external world and their internal conditions. These internal signals could track variables such as uncertainty, processing load, and confidence, potentially constraining the AI’s behavior and ensuring more reliable outputs over time.
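To make the proposal concrete, here is a minimal, hypothetical Python sketch of what such a framework might look like: an agent that carries persistent signals for uncertainty, processing load, and confidence, and abstains when confidence drops too low. The class names, update rules, and threshold are illustrative assumptions, not the authors’ design.

```python
import random
from dataclasses import dataclass


@dataclass
class InternalState:
    uncertainty: float = 0.0   # rises with ambiguous inputs
    load: float = 0.0          # rises with work performed
    confidence: float = 1.0    # falls as uncertainty and load rise


class DualEmbodiedAgent:
    """Hypothetical agent whose internal signals persist across
    queries and gate its external behavior. Illustrative only."""

    def __init__(self, abstain_threshold: float = 0.4):
        self.state = InternalState()
        self.abstain_threshold = abstain_threshold

    def _external_step(self, query: str) -> tuple[str, float]:
        # Stand-in for the model's external behavior: a dummy answer
        # plus a random per-query ambiguity score in [0, 1).
        return f"answer to {query!r}", random.random()

    def respond(self, query: str) -> str:
        answer, ambiguity = self._external_step(query)
        s = self.state
        # Internal embodiment: update the persistent signals...
        s.uncertainty = 0.8 * s.uncertainty + 0.2 * ambiguity
        s.load = min(1.0, s.load + 0.1)
        s.confidence = max(0.0, 1.0 - 0.5 * s.uncertainty - 0.3 * s.load)
        # ...and let them constrain behavior: abstain rather than
        # emit an overconfident answer when confidence is low.
        if s.confidence < self.abstain_threshold:
            return "Confidence too low to answer reliably."
        return answer


agent = DualEmbodiedAgent()
for q in ["pass the salt?", "diagnose this scan?", "route this truck?"]:
    print(agent.respond(q))
```

The key property is persistence: the signals carry over between queries, so accumulated ambiguity and load gradually push the agent toward caution instead of each response being judged in isolation.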
The researchers also emphasize the need for new benchmarks to evaluate AI systems’ internal embodiment, which is currently overlooked in favor of assessments focused mostly on external performance. They advocate for tests that explore whether AI systems can monitor their own internal states and maintain stability when faced with challenges. “If we want AI systems that are genuinely aligned with human behavior—not just superficially fluent—we may need to give them vulnerabilities and checks that function like internal self-regulators,” Iacoboni concluded.
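One familiar probe along these lines checks whether a model’s self-reported confidence actually tracks its accuracy, using expected calibration error (ECE). The paper does not prescribe this specific metric; the sketch below, run on synthetic data, is offered as an assumed example of what such a benchmark could measure.

```python
import numpy as np

# Hypothetical benchmark probe: does self-reported confidence track
# accuracy? Expected calibration error (ECE) bins predictions by
# confidence and sums the weighted |accuracy - confidence| gaps.
def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Synthetic example: an overconfident model reports ~0.9 confidence
# while answering correctly only ~60% of the time.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 0.95, size=1000)
acc = rng.random(1000) < 0.6
print(f"ECE: {expected_calibration_error(conf, acc):.3f}")  # large gap
```

A well-calibrated system would score near zero; the large gap here is the kind of overconfidence the authors argue internal self-regulators should suppress.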