Researchers from Iowa State University have highlighted the potential pitfalls of using human-like language to describe artificial intelligence (AI) systems, warning that such terminology can obscure the true capabilities of these technologies. In their study, “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,” published in Technical Communication Quarterly, the team examined how writers employ mental verbs—terms typically used to describe human cognition—when referring to AI.
The research team, which includes Jo Mackiewicz, a professor of English, Jeanine Aune, a teaching professor of English, Matthew J. Baker, an associate professor of linguistics at Brigham Young University, and Jordan Smith, an assistant professor at the University of Northern Colorado, aims to clarify the implications of anthropomorphism in the context of AI. Mackiewicz noted, “We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines — it helps us relate to them.” However, she cautioned that this can blur the line between human and machine capabilities.
Language such as “think,” “know,” “understand,” and “want” can unintentionally suggest that AI possesses intentions or awareness, which it does not. Instead, these systems generate responses based on data patterns, lacking any form of belief or understanding. Aune further explained that phrases like “AI decided” or “ChatGPT knows” may lead to inflated expectations regarding AI’s independence and intelligence, potentially misleading the public about what these technologies can achieve.
The researchers conducted a detailed analysis of the News on the Web (NOW) corpus, which encompasses over 20 billion words from English-language news articles across 20 countries. They focused on how frequently mental verbs appeared paired with AI-related terms. Contrary to expectations, the study found that such anthropomorphic language is less prevalent in news writing than in everyday conversation. For instance, the word “needs” appeared 661 times in conjunction with “AI,” while mental verbs were far rarer with “ChatGPT,” where “knows” appeared only 32 times.
This disparity may reflect editorial standards, such as those from the Associated Press, which caution against attributing human traits to AI. The researchers observed that even when mental verbs were employed, they often did not imply human-like qualities. For example, “AI needs large amounts of data” communicates a requirement without suggesting consciousness or desire. In other contexts, the use of “needs” reinforces human responsibility, such as in phrases like “AI needs to be trained.”
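The core of a corpus analysis like this is collocation counting: tallying how often a mental verb appears next to an AI-related subject. A minimal sketch of the idea follows; the mini-corpus, term lists, and simple adjacency rule here are illustrative assumptions, not the study's actual NOW-corpus queries.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the NOW corpus;
# the real study searched over 20 billion words of news text.
corpus = (
    "AI needs large amounts of data. "
    "ChatGPT knows the answer, some say. "
    "AI needs to be trained before deployment. "
    "The model thinks in patterns, critics argue."
)

# Illustrative word lists, not the study's actual search terms.
MENTAL_VERBS = {"needs", "knows", "thinks", "understands", "wants", "decides"}
AI_TERMS = {"ai", "chatgpt"}

def collocation_counts(text):
    """Count (AI-term, mental-verb) pairs where the verb
    immediately follows the AI term."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for subj, verb in zip(tokens, tokens[1:]):
        if subj in AI_TERMS and verb in MENTAL_VERBS:
            counts[(subj, verb)] += 1
    return counts

print(collocation_counts(corpus))
# Counter({('ai', 'needs'): 2, ('chatgpt', 'knows'): 1})
```

A real study would use the corpus platform's own query syntax and count collocates within a window rather than strict adjacency, but the frequency comparison works the same way.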
Anthropomorphism, the tendency to attribute human traits to non-human entities, exists on a spectrum, according to the study. While some language is straightforward, suggesting basic operational requirements, other phrases can imply deeper, more human-like expectations. Aune noted that statements like “AI needs to understand the real world” suggest a level of reasoning that transcends mere description.
The findings underscore the complexity of language surrounding AI and its impact on public perception. Mackiewicz emphasized, “The language we choose shapes how readers understand AI systems, their capabilities, and the humans responsible for them.” This suggests that writers and communicators must be deliberate in their word choices to avoid fostering misconceptions about AI.
As AI technologies continue to evolve, the way they are discussed will remain critical. The research team believes these insights can prompt professionals to reflect on their own language use regarding AI. Mackiewicz and Aune expressed hope that future studies could delve into the effects of different word choices on public understanding and the influence of even rare anthropomorphic phrases on perceptions of AI.