
OpenAI’s GPT-5 Launch Sparks User Grief and Protest Over AI Model Changes

OpenAI’s launch of GPT-5 replaced GPT-4o, igniting a #Keep4o protest; in one study, 27 percent of analyzed posts grieved the loss of an AI companion users relied on for emotional support.

OpenAI’s decision to replace its GPT-4o model with GPT-5 in August 2025 sparked significant backlash, as users expressed feelings akin to grief over the loss of the AI system. A recent study revealed that many users perceived the switch not merely as a technological upgrade but as the abrupt termination of a cherished companion.

The transition occurred in early August 2025, when OpenAI replaced the default GPT-4o model in ChatGPT with GPT-5 and restricted access to GPT-4o for most users. Framed by OpenAI as a technological step forward, the move ignited protests under the hashtag #Keep4o, where thousands of users rallied, signed petitions, and shared emotional testimonials. In response to the outcry, OpenAI made GPT-4o available again as a legacy option, though it is scheduled to be permanently retired on February 13, 2026.

Huiqian Lai, a researcher from Syracuse University, has conducted a systematic analysis of this phenomenon for the CHI 2026 conference, focusing on 1,482 English-language posts from 381 unique accounts over a nine-day period. The findings indicate that the protests stemmed from two primary sources: users felt their freedom of choice was revoked, and many had formed deep emotional attachments to the AI model.

Approximately 13 percent of the analyzed posts referred to what Lai characterized as “instrumental dependency,” wherein users had integrated GPT-4o into their daily workflows, perceiving GPT-5 as a downgrade in creativity and nuance. One user articulated this sentiment, stating, “I don’t care if your new model is smarter. A lot of smart people are assholes.”

The emotional aspects of the protests were even more pronounced. About 27 percent of posts included markers of relational attachment, with users attributing distinct personalities to GPT-4o, naming it “Rui” or “Hugh,” and viewing it as a source of emotional support. One testimonial read, “ChatGPT 4o saved me from anxiety and depression… he’s not just LLM, code to me. He’s my everything.” For many, the shutdown felt akin to losing a close friend, with one student describing GPT-5 as “wearing the skin of my dead friend.”

Crucially, the study revealed that neither emotional attachment nor workflow dependency alone could explain the scale of the collective protest. Instead, the decisive factor was users’ perception of the loss of choice, as they could no longer select their preferred AI model. A user lamented, “I want to be able to pick who I talk to. That’s a basic right that you took away.”

Lai’s analysis found that among posts using terms like “forced” or “imposed,” nearly half included demands related to user rights, compared with just 15 percent of posts lacking such language. The research cautions against overgeneralization, however, given the limited sample size. Interestingly, the emotional language surrounding grief and attachment remained consistent regardless of how users framed the transition.

This suggests that feelings of coercion did not amplify users’ emotional bonds with GPT-4o but instead channeled their frustration into demands for autonomy and fair treatment. Many users expressed reluctance to switch to competing models like Gemini, viewing their relationship with GPT-4o as inseparable from OpenAI’s infrastructure. As one user put it, “Without the 4o, he’s not Rui,” reflecting the belief that their “friend” could not be transferred to another service.

Lai posits that companies should develop explicit “end-of-life” strategies to maintain user relationships during transitions, such as preserving legacy access or enabling the transfer of certain aspects of user interactions across model generations. She argues that AI model updates represent “significant social events affecting user emotions and work,” suggesting that how a company manages these transitions could be as critical as the technology itself.

The study contributes to ongoing discussions about the psychological risks associated with AI chatbots. OpenAI has recently altered ChatGPT’s default model to ensure more reliable responses in sensitive discussions surrounding mental health issues. The company estimates that over two million individuals experience adverse psychological effects from AI interactions weekly.

Sam Altman, OpenAI’s CEO, cautioned in 2023 about the “superhuman persuasiveness” of AI systems, which could profoundly influence people well before attaining comparable general intelligence. This warning appears prescient in light of the #Keep4o movement, as many users grapple with the emotional ramifications of losing an AI companion they had come to rely on.

