
Penn State Study Shows Rude Prompts Yield 84.8% Accuracy in AI Interactions

Penn State research reveals that rude prompts to ChatGPT 4.0 achieve 84.8% accuracy, challenging conventional politeness in AI interactions.

(WNDU) – Large language models, commonly referred to as LLMs, have become integral to daily interactions, with users engaging with various systems such as ChatGPT, Claude, Grok, and Gemini. A recent study from Penn State University reveals unexpected outcomes when users interact with ChatGPT 4.0 using different levels of politeness in their prompts.

Researchers found that users commonly include phrases like “please” in their inquiries, reflecting a natural tendency in human conversations. However, the study indicates that a more direct and less polite approach yields better results. “Very polite performed the worst,” said Penn State student researcher Om Dobariya. “And it got better as we became more and more rude.”

The study tested a series of prompts ranging from extremely polite to overtly rude. Contrary to expectations, the ruder prompts, characterized by phrases such as “you poor creature” and “hey gofer,” produced the highest accuracy at 84.8%. Neutral prompts achieved 82.2% accuracy, while very polite prompts delivered the least effective results at 80.8%.

Dobariya emphasized the need for further investigation into these findings, stating, “Once we find out the cause, we want to achieve the same level of accuracy without having to be rude to LLM.” This raises ethical questions about the implications of teaching users that rudeness yields better results in AI interactions.

Amanda Pirkowski, another researcher, posed a critical question regarding the impact of these findings on human interaction: “From an ethical perspective, obviously, when we’re teaching humans that very rude gets results, how can we translate that forward and tell people that this is not what they should be doing when they move to actual human conversations?”

Akhil Kumar, a professor of Information Systems at Penn State, noted the distinction between human and AI interactions. “In human-to-human interactions, words like ‘please’ and ‘thank you’ serve as social lubricants that can smooth the flow of conversation,” he said. “But when you are interacting with AI, our findings suggest that you need to be very direct.”

Alexi Orchard, an assistant teaching professor of technology at Notre Dame, echoed similar sentiments, acknowledging that while LLMs can simulate human-like responses, it is essential to recognize their differences. “People were reporting that, you know, it’s very nice to talk to,” she remarked. “It will tell you that you had a really great idea, and then it wants to elaborate on that for you.”

However, Orchard warns against the potential dangers of such interactions, particularly in sensitive areas like mental health. “If I’m looking for mental health advice, and maybe it’s reinforcing my feelings, that’s a dangerous place to be that people are concerned about,” she stated.

To address these issues, Orchard introduced a framework for engaging with LLMs effectively, known as CRAFT, which stands for context, role, action, target audience, and format. “All five of those things can be inside of one prompt,” she explained, adding that this approach typically yields better responses without the need for prolonged reiteration.
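The article names the five CRAFT elements but does not describe a concrete implementation. As an illustration only, the framework can be sketched as a small helper that assembles all five elements into a single prompt; the function name and field wording below are hypothetical, not part of Orchard's framework.

```python
def craft_prompt(context, role, action, target_audience, fmt):
    """Assemble one prompt containing all five CRAFT elements:
    context, role, action, target audience, and format.
    (Illustrative helper; the article names the elements, not an API.)"""
    return (
        f"Context: {context}\n"
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Target audience: {target_audience}\n"
        f"Format: {fmt}"
    )

# Example: a single prompt covering all five elements at once,
# rather than reiterating them across several follow-up messages.
prompt = craft_prompt(
    context="Quarterly sales data for a small retail chain",
    role="Act as a data analyst",
    action="Summarize the three biggest trends",
    target_audience="Non-technical store managers",
    fmt="A bulleted list of no more than five points",
)
print(prompt)
```

Packing all five elements into one prompt, as Orchard suggests, front-loads the information the model needs and tends to reduce back-and-forth clarification.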

As users continue to refine their interactions with AI, researchers note that LLMs adapt to user behavior in turn, creating a feedback loop that shapes future conversations. The practical takeaway is to keep prompts concise and treat LLMs as resources rather than colleagues.

The implications of this study extend beyond mere usage; they highlight a crucial intersection of technology and human behavior, prompting a reevaluation of how we engage with AI systems in a rapidly evolving digital landscape.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.