Penn State Study Finds Rude Prompts Yield 84.8% Accuracy in AI Interactions

Penn State research finds that rude prompts to ChatGPT-4o achieve 84.8% accuracy, challenging conventional wisdom about politeness in AI interactions.

(WNDU) – Large language models, commonly referred to as LLMs, have become integral to daily interactions, with users engaging with systems such as ChatGPT, Claude, Grok, and Gemini. A recent study from Penn State University reveals unexpected outcomes when users employ different levels of politeness in their prompts to ChatGPT-4o.

Researchers found that users commonly include phrases like “please” in their inquiries, reflecting a natural tendency in human conversations. However, the study indicates that a more direct and less polite approach yields better results. “Very polite performed the worst,” said Penn State student researcher Om Dobariya. “And it got better as we became more and more rude.”

The study involved a series of prompts ranging from extremely polite to overtly rude. Contrary to expectations, the ruder prompts, characterized by phrases such as “you poor creature” and “hey gofer,” produced the highest accuracy at 84.8%. Neutral prompts achieved an accuracy of 82.2%, while very polite prompts delivered the least effective results at 80.8%.

Dobariya emphasized the need for further investigation into these findings, stating, “Once we find out the cause, we want to achieve the same level of accuracy without having to be rude to [the] LLM.” This raises ethical questions about teaching users that rudeness yields better results in AI interactions.

Amanda Pirkowski, another researcher, posed a critical question regarding the impact of these findings on human interaction: “From an ethical perspective, obviously, when we’re teaching humans that very rude gets results, how can we translate that forward and tell people that this is not what they should be doing when they move to actual human conversations?”

Akhil Kumar, a professor of Information Systems at Penn State, noted the distinction between human and AI interactions. “In human-to-human interactions, words like ‘please’ and ‘thank you’ serve as social lubricants that can smooth the flow of conversation,” he said. “But when you are interacting with AI, our findings suggest that you need to be very direct.”

Alexi Orchard, an assistant teaching professor of technology at Notre Dame, expressed similar sentiments, acknowledging that while LLMs can simulate human-like responses, it is essential to recognize how they differ from people. “People were reporting that, you know, it’s very nice to talk to,” she remarked. “It will tell you that you had a really great idea, and then it wants to elaborate on that for you.”

However, Orchard warned against the potential dangers of such interactions, particularly in sensitive areas like mental health. “If I’m looking for mental health advice, and maybe it’s reinforcing my feelings, that’s a dangerous place to be that people are concerned about,” she stated.

To address these issues, Orchard introduced a framework for engaging with LLMs effectively, known as CRAFT, which stands for context, role, action, target audience, and format. “All five of those things can be inside of one prompt,” she explained, adding that this approach typically yields better responses without the need for prolonged reiteration.

As users continue to refine their interactions with AI, researchers note that LLMs are adapting in turn, creating a feedback loop that shapes future conversations. The findings point toward a more efficient style of use: keeping prompts concise and treating LLMs as resources rather than colleagues may be the more effective strategy.

The implications of this study extend beyond day-to-day usage; they highlight a crucial intersection of technology and human behavior, prompting a reevaluation of how we engage with AI systems.

Written By: AiPressa Staff
