By Mohamed Al Elew, Khari Johnson, and Levi Sumagaysay, CalMatters
Artificial intelligence has made significant strides in generating lifelike video footage from simple text prompts, yet it continues to grapple with replicating complex human movements, particularly in dance. A recent collaboration between CalMatters and The Markup explored this issue by surveying dancers and choreographers on the potential disruptive impact of AI in their field. The consensus was clear: human dancers are irreplaceable.
To put those findings to the test, the teams ran four commercially available generative AI video models: OpenAI's Sora 2, Google's Veo 3.1, Kuaishou's Kling 2.5, and MiniMax's Hailuo 2.3, examining how well each could create videos of various dance styles. In total, the teams generated 36 videos spanning nine cultural, modern, and popular dance forms. While the models produced footage that looked convincingly realistic, none accurately represented the dances specified in the prompts.
In their evaluation, roughly a third of the videos showed inconsistencies in appearance and movement, such as a dancer's attire changing abruptly or limbs moving out of sync. Even so, the results were a marked improvement over initial assessments conducted in late 2024.
The methodology involved drafting video prompts that spanned a range of settings, from stages to classrooms, and dance styles, including the Macarena and folklorico. Each model was run on the same prompts, and the resulting videos were assessed against six criteria, including whether the main subject danced as specified, maintained a consistent physical appearance, and moved realistically. All but one of the generated videos featured a dancing figure, yet none accurately performed the dances described.
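For readers curious how such a rubric might be tallied in practice, here is a minimal sketch in Python. It is illustrative only: the criterion names beyond the three mentioned above, the data structures, and the tally function are assumptions for the sketch, not the teams' actual methodology or tooling.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: four models x nine dance prompts = 36 videos,
# each scored against six pass/fail criteria. Names are illustrative only;
# criteria beyond the first three are assumed, not taken from the study.
MODELS = ["Sora 2", "Veo 3.1", "Kling 2.5", "Hailuo 2.3"]
CRITERIA = [
    "performed_specified_dance",
    "consistent_appearance",
    "realistic_motion",
    "subject_actually_dances",   # assumed criterion
    "matches_prompt_setting",    # assumed criterion
    "no_limb_or_clothing_anomalies",  # assumed criterion
]

@dataclass
class VideoReview:
    model: str
    dance_style: str
    scores: dict = field(default_factory=dict)  # criterion -> True/False

    def passed(self, criterion: str) -> bool:
        return self.scores.get(criterion, False)

def tally(reviews: list[VideoReview]) -> dict:
    """Count, per model, how many videos passed each criterion."""
    counts = {m: {c: 0 for c in CRITERIA} for m in MODELS}
    for r in reviews:
        for c in CRITERIA:
            if r.passed(c):
                counts[r.model][c] += 1
    return counts

# Example: one hypothetical review of a single generated video.
example = VideoReview(
    model="Veo 3.1",
    dance_style="Macarena",
    scores={"subject_actually_dances": True, "performed_specified_dance": False},
)
print(tally([example])["Veo 3.1"]["performed_specified_dance"])  # -> 0
```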
For example, a video generated from a prompt for the Cahuilla Band of Indians bird dance drew criticism from tribal member Emily Clarke, who stated, “None of these depictions are anywhere close to bird dancing, in my opinion.” Yet some generated videos, such as those produced by Veo 3.1, were praised for their lifelike appearance, even if they did not accurately depict the intended dance.
Issues of motion and appearance consistency were prevalent across the generated content, with 11 out of the 36 videos exhibiting significant anomalies. Reviewers noted that changes in clothing, hair, or limb structure created disjointed visuals, including heads rotating independently of their bodies.
The study acknowledged several limitations, including the exclusion of image-to-video generation and a focus solely on single-dancer prompts to avoid ambiguity when evaluating complex human movement. The researchers also chose not to tailor prompts to each individual model, which might have improved the results.
Moreover, the tests did not include generative models designed specifically for human motion, which are commonly used in animation and gaming. While such specialized models may outperform consumer-facing options, they require expert knowledge and substantial computational resources to operate.
The findings of this study underscore the current limitations of AI in accurately replicating human artistic expression through dance. Despite advancements in technology, the artistry, nuance, and emotional depth embodied by human dancers remain unparalleled. As the industry continues to evolve, the role of human dancers appears secure, reinforcing the belief that artistry cannot be wholly captured by algorithms.
This article was originally published by CalMatters and is republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.