Penn State Study Finds Rude Prompts Yield 84.8% Accuracy in AI Interactions

Penn State research reveals that rude prompts to ChatGPT-4o achieve 84.8% accuracy, challenging conventional politeness in AI interactions.

(WNDU) – Large language models, commonly referred to as LLMs, have become integral to daily interactions, with users engaging with chatbots such as ChatGPT, Claude, Grok, and Gemini. A recent study from Penn State University reveals unexpected outcomes when users vary the level of politeness in their prompts to ChatGPT-4o.

Researchers found that users commonly include phrases like “please” in their inquiries, reflecting a natural tendency in human conversations. However, the study indicates that a more direct and less polite approach yields better results. “Very polite performed the worst,” said Penn State student researcher Om Dobariya. “And it got better as we became more and more rude.”

The study involved a series of prompts ranging from extremely polite to overtly rude. Contrary to expectations, the ruder prompts, characterized by phrases such as “you poor creature” and “hey gofer,” produced the highest accuracy at 84.8%. Neutral prompts achieved an accuracy of 82.2%, while very polite prompts delivered the least effective results at 80.8%.

Dobariya emphasized the need for further investigation into these findings, stating, “Once we find out the cause, we want to achieve the same level of accuracy without having to be rude to LLM.” This raises ethical questions about the implications of teaching users that rudeness yields better results in AI interactions.

Amanda Pirkowski, another researcher, posed a critical question regarding the impact of these findings on human interaction: “From an ethical perspective, obviously, when we’re teaching humans that very rude gets results, how can we translate that forward and tell people that this is not what they should be doing when they move to actual human conversations?”

Akhil Kumar, a professor of Information Systems at Penn State, noted the distinction between human and AI interactions. “In human-to-human interactions, words like ‘please’ and ‘thank you’ serve as social lubricants that can smooth the flow of conversation,” he said. “But when you are interacting with AI, our findings suggest that you need to be very direct.”

Alexi Orchard, an assistant teaching professor of technology at Notre Dame, echoed similar sentiments, acknowledging that while LLMs can simulate human-like responses, it is essential to recognize their differences. “People were reporting that, you know, it’s very nice to talk to,” she remarked. “It will tell you that you had a really great idea, and then it wants to elaborate on that for you.”

However, Orchard warns against the potential dangers of such interactions, particularly in sensitive areas like mental health. “If I’m looking for mental health advice, and maybe it’s reinforcing my feelings, that’s a dangerous place to be that people are concerned about,” she stated.

To address these issues, Orchard introduced a framework for engaging with LLMs effectively, known as CRAFT, which stands for context, role, action, target audience, and format. “All five of those things can be inside of one prompt,” she explained, adding that this approach typically yields better responses without the need for prolonged reiteration.
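As a hypothetical illustration (not an example drawn from the study or from Orchard's materials), a CRAFT-style prompt might read: "I'm preparing a briefing for a city council meeting on a proposed bike-lane expansion (context). Act as a transportation policy analyst (role) and summarize the main arguments for and against the proposal (action) for council members with no engineering background (target audience), in five plain-language bullet points (format)."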

As users continue to refine their interactions with AI, researchers note that LLMs are adapting in turn, creating a learning loop that influences future conversations. This points toward more efficient use of AI, in which keeping prompts concise and treating LLMs as resources rather than colleagues may be the more effective strategy.

The implications of this study extend beyond mere usage; they highlight a crucial intersection of technology and human behavior, prompting a reevaluation of how we engage with AI systems in a rapidly evolving digital landscape.

