UCLA Study Reveals AI’s ‘Body Gap’ Could Compromise Safety and Trustworthiness

UCLA study reveals AI’s lack of internal embodiment could compromise safety and reliability, risking overconfident errors in critical applications.

A new study from UCLA Health highlights significant shortcomings in today’s artificial intelligence systems, particularly in their understanding of human-like experiences. Authored by UCLA Health postdoctoral fellow Akila Kadambi and her colleagues, the research emphasizes the absence of “internal embodiment” in AI, which combines an understanding of external interactions with self-awareness of internal states. Published in the journal Neuron, the study suggests that this gap could hinder the reliability and safety of AI models as they are increasingly deployed in critical scenarios.

Humans seamlessly integrate bodily experiences when performing tasks, such as passing the salt at a dinner table. This intricate coordination involves not just physical movement but also an innate awareness of one’s own state—whether one feels tired or uncertain. Kadambi pointed out that while current AI focuses on “external embodiment,” or interacting with the world, the internal dynamics that regulate human behavior remain largely unexplored by AI researchers. “If you’re uncertain, if you’re depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent,” she stated. This raises concerns as AI systems increasingly influence decision-making processes in various sectors.

The research particularly targets multimodal large language models, the technology behind tools like ChatGPT and Google’s Gemini. Although these systems can generate text and analyze images, they lack the capacity to comprehend feelings like thirst or fatigue. The authors illustrate this limitation through a perceptual test involving a simple point-light display, a series of dots that represents a human figure in motion. While humans easily recognize these figures, many AI models failed to do so. One model described the display as a constellation of stars, and when the image was rotated, even the best-performing models struggled.
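To make the stimulus concrete: a point-light display reduces a moving body to a handful of dots placed at the major joints. The sketch below shows one way such a stimulus can be generated and rotated, the manipulation the study used. The joint coordinates and motion model are invented for illustration and are not taken from the paper.

```python
import math

# Illustrative 2D joint positions (x, y) in arbitrary units. Real point-light
# stimuli use motion-captured joint trajectories; these values are made up.
JOINTS = {
    "head": (0.0, 1.8), "shoulder_l": (-0.2, 1.5), "shoulder_r": (0.2, 1.5),
    "elbow_l": (-0.35, 1.2), "elbow_r": (0.35, 1.2),
    "hip_l": (-0.15, 1.0), "hip_r": (0.15, 1.0),
    "knee_l": (-0.15, 0.5), "knee_r": (0.15, 0.5),
    "ankle_l": (-0.15, 0.0), "ankle_r": (0.15, 0.0),
}

def frame(t, stride=0.2, freq=1.0):
    """Dot positions at time t: left and right legs swing in antiphase."""
    swing = stride * math.sin(2 * math.pi * freq * t)
    pts = dict(JOINTS)
    for name, sign in (("knee_l", 1), ("ankle_l", 1),
                       ("knee_r", -1), ("ankle_r", -1)):
        x, y = pts[name]
        pts[name] = (x + sign * swing, y)
    return pts

def rotate(pts, degrees):
    """Rotate every dot about the origin -- the transform that degraded model accuracy."""
    a = math.radians(degrees)
    return {k: (x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a))
            for k, (x, y) in pts.items()}
```

Humans recognize the walking figure from the dot motion alone, and still do so when the display is rotated; the study reports that models often could not.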

This inability to recognize human-like patterns underscores a fundamental difference between human perception, informed by lived experience, and AI’s pattern-matching capabilities. “AI systems, trained on vast libraries of text and images but with no bodily experience, are pattern-matching without that anchor,” the researchers noted. They argue that this lack of internal embodiment is not just a performance issue but also a potential safety risk. Current AI systems operate without mechanisms that ensure self-regulation or minimize overconfidence in their outputs, which could lead to errors in high-stakes environments.

The study makes a critical distinction between “external embodiment,” which encompasses a system’s engagement with its environment, and “internal embodiment,” defined as continuous monitoring of internal states such as uncertainty or fatigue. Humans automatically regulate these internal states, using them to affect attention, memory, and social behavior. In contrast, current AI models lack any persistent internal state to guide their operations over time. “This is not just a performance limitation, but also a safety limitation,” noted Dr. Marco Iacoboni, a senior author of the paper. “Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently.”

As AI technology continues to advance, the authors propose a “dual-embodiment framework” that could provide principles for developing AI systems capable of simulating both their interactions with the external world and their internal conditions. These internal signals could track variables such as uncertainty, processing load, and confidence, potentially constraining the AI’s behavior and ensuring more reliable outputs over time.
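The idea of internal signals constraining behavior can be sketched as a thin wrapper that maintains persistent state across queries and abstains when its uncertainty estimate grows too high. This is purely an illustration of the concept; the class name, signal dynamics, and threshold below are invented, not the authors' proposed implementation.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    """Illustrative persistent internal signals; names and dynamics are invented."""
    uncertainty: float = 0.0   # running estimate of how unsure the system is
    load: float = 0.0          # accumulated processing cost
    confidence: float = 1.0

    def update(self, token_entropy: float, cost: float) -> None:
        # An exponential moving average gives the state persistence over time,
        # the property the paper says current models lack.
        self.uncertainty = 0.9 * self.uncertainty + 0.1 * token_entropy
        self.load += cost
        self.confidence = max(0.0, 1.0 - self.uncertainty)

def answer(query: str, entropy: float, state: InternalState,
           abstain_above: float = 0.5) -> str:
    """Gate output on internal state: abstain rather than guess overconfidently."""
    state.update(token_entropy=entropy, cost=len(query))
    if state.uncertainty > abstain_above:
        return "I'm not confident enough to answer."
    return f"Answer to: {query}"
```

Because the state persists between calls, a run of high-entropy queries gradually pushes the system toward abstention instead of letting each answer start from a blank slate.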

The researchers also emphasize the need for new benchmarks to evaluate AI systems’ internal embodiment, which is currently overlooked in favor of assessments focused mostly on external performance. They advocate for tests that explore whether AI systems can monitor their own internal states and maintain stability when faced with challenges. “If we want AI systems that are genuinely aligned with human behavior—not just superficially fluent—we may need to give them vulnerabilities and checks that function like internal self-regulators,” Iacoboni concluded.
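At its simplest, a benchmark for self-monitoring might compare a system's stated confidence against its actual accuracy: a well-calibrated system that claims 90% confidence should be right about 90% of the time. The function below is a minimal sketch of that check, not a benchmark from the study.

```python
def calibration_gap(predictions):
    """Mean absolute gap between stated confidence and actual accuracy.

    predictions: list of (confidence, was_correct) pairs. A large gap is one
    measurable form of the overconfidence the authors warn about.
    """
    if not predictions:
        return 0.0
    mean_conf = sum(c for c, _ in predictions) / len(predictions)
    accuracy = sum(1 for _, ok in predictions if ok) / len(predictions)
    return abs(mean_conf - accuracy)
```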

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.