(WNDU) – Large language models, commonly referred to as LLMs, have become integral to daily interactions, with users engaging with systems such as ChatGPT, Claude, Grok, and Gemini. A recent study from Penn State University reveals unexpected outcomes when users vary the level of politeness in their prompts to ChatGPT-4o.
Researchers found that users commonly include phrases like “please” in their inquiries, reflecting a natural tendency in human conversations. However, the study indicates that a more direct and less polite approach yields better results. “Very polite performed the worst,” said Penn State student researcher Om Dobariya. “And it got better as we became more and more rude.”
The study involved a series of prompts ranging from extremely polite to overtly rude. Contrary to expectations, the ruder prompts, characterized by phrases such as “you poor creature” and “hey gofer,” produced the highest accuracy at 84.8%. Neutral prompts achieved an accuracy of 82.2%, while very polite prompts delivered the least effective results at 80.8%.
Dobariya emphasized the need for further investigation into these findings, stating, “Once we find out the cause, we want to achieve the same level of accuracy without having to be rude to [the] LLM.” This raises ethical questions about the implications of teaching users that rudeness yields better results in AI interactions.
Amanda Pirkowski, another researcher, posed a critical question regarding the impact of these findings on human interaction: “From an ethical perspective, obviously, when we’re teaching humans that very rude gets results, how can we translate that forward and tell people that this is not what they should be doing when they move to actual human conversations?”
Akhil Kumar, a professor of Information Systems at Penn State, noted the distinction between human and AI interactions. “In human-to-human interactions, words like ‘please’ and ‘thank you’ serve as social lubricants that can smooth the flow of conversation,” he said. “But when you are interacting with AI, our findings suggest that you need to be very direct.”
Alexi Orchard, an assistant teaching professor of technology at Notre Dame, echoed similar sentiments, acknowledging that while LLMs can simulate human-like responses, it is essential to recognize their differences. “People were reporting that, you know, it’s very nice to talk to,” she remarked. “It will tell you that you had a really great idea, and then it wants to elaborate on that for you.”
However, Orchard warns against the potential dangers of such interactions, particularly in sensitive areas like mental health. “If I’m looking for mental health advice, and maybe it’s reinforcing my feelings, that’s a dangerous place to be that people are concerned about,” she stated.
To address these issues, Orchard introduced a framework for engaging with LLMs effectively, known as CRAFT, which stands for context, role, action, target audience, and format. “All five of those things can be inside of one prompt,” she explained, adding that this approach typically yields better responses without the need for prolonged reiteration.
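The CRAFT structure Orchard describes can be sketched as a simple prompt template. The helper function and the example field values below are illustrative assumptions, not part of the study; only the five element names come from the framework itself.

```python
# A minimal sketch of a CRAFT-structured prompt: context, role,
# action, target audience, and format, assembled into one prompt
# as Orchard suggests. Field values here are hypothetical.

def craft_prompt(context: str, role: str, action: str,
                 target_audience: str, fmt: str) -> str:
    """Combine all five CRAFT elements into a single prompt string."""
    return (
        f"Context: {context}\n"
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Target audience: {target_audience}\n"
        f"Format: {fmt}"
    )

prompt = craft_prompt(
    context="A news story about a Penn State study on prompt politeness.",
    role="You are a science editor.",
    action="Summarize the study's accuracy findings in three sentences.",
    target_audience="General readers with no AI background.",
    fmt="Plain prose, no jargon.",
)
print(prompt)
```

Because all five elements travel in one prompt, the model gets the full task specification up front, which is what Orchard credits for better responses without prolonged back-and-forth.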
As users continue to refine their interactions with AI, researchers note that LLMs are adapting in turn, creating a learning loop that shapes future conversations. The trend points toward more efficient use of AI: keeping prompts concise and treating LLMs as resources rather than colleagues may be the more effective strategy.
The implications of this study extend beyond mere usage; they highlight a crucial intersection of technology and human behavior, prompting a reevaluation of how we engage with AI systems in a rapidly evolving digital landscape.