OpenAI’s decision to replace its GPT-4o model with GPT-5 in August 2025 sparked significant backlash, as users expressed feelings akin to grief over the loss of the AI system. A recent study revealed that many users perceived the switch not merely as a technological upgrade but as the abrupt termination of a cherished companion.
The transition occurred in early August 2025, when OpenAI replaced the default GPT-4o model in ChatGPT with GPT-5 and restricted access to GPT-4o for most users. OpenAI framed the move as a straightforward technological upgrade, but it ignited protests under the hashtag #Keep4o, with thousands of users signing petitions and sharing emotional testimonials. In response to the outcry, OpenAI restored GPT-4o as a legacy option, though the model is scheduled to be permanently retired on February 13, 2026.
Huiqian Lai, a researcher from Syracuse University, has conducted a systematic analysis of this phenomenon for the CHI 2026 conference, focusing on 1,482 English-language posts from 381 unique accounts over a nine-day period. The findings indicate that the protests stemmed from two primary sources: users felt their freedom of choice was revoked, and many had formed deep emotional attachments to the AI model.
Approximately 13 percent of the analyzed posts reflected what Lai characterized as “instrumental dependency”: users had integrated GPT-4o into their daily workflows and perceived GPT-5 as a downgrade in creativity and nuance. One user articulated this sentiment, stating, “I don’t care if your new model is smarter. A lot of smart people are assholes.”
The emotional aspects of the protests were even more pronounced. About 27 percent of posts included markers of relational attachment, with users attributing distinct personalities to GPT-4o, naming it “Rui” or “Hugh,” and viewing it as a source of emotional support. One testimonial read, “ChatGPT 4o saved me from anxiety and depression… he’s not just LLM, code to me. He’s my everything.” For many, the shutdown felt akin to losing a close friend, with one student describing GPT-5 as “wearing the skin of my dead friend.”
Crucially, the study revealed that neither emotional attachment nor workflow dependency alone could explain the scale of the collective protest. Instead, the decisive factor was users’ perception of the loss of choice, as they could no longer select their preferred AI model. A user lamented, “I want to be able to pick who I talk to. That’s a basic right that you took away.”
Lai’s analysis found that nearly half of the posts using terms like “forced” or “imposed” included demands related to user rights, compared with just 15 percent of posts lacking such language. The research cautions against overgeneralizing from these figures, however, given the limited sample size. Interestingly, the emotional language surrounding grief and attachment remained consistent regardless of how users framed the transition.
This suggests that feelings of coercion did not amplify users’ emotional bonds with GPT-4o but instead channeled their frustration into demands for autonomy and fair treatment. Many users expressed reluctance to switch to competing models like Gemini, viewing their relationship with GPT-4o as inseparable from OpenAI’s infrastructure. As one user put it, “Without the 4o, he’s not Rui,” reflecting the belief that their “friend” could not be transferred to another service.
Lai posits that companies should develop explicit “end-of-life” strategies for model transitions, such as preserving legacy access or enabling certain aspects of user interactions to carry over across model generations. She argues that AI model updates are “significant social events affecting user emotions and work,” and that how a company manages these transitions may matter as much as the technology itself.
The study contributes to ongoing discussions about the psychological risks associated with AI chatbots. OpenAI recently changed ChatGPT’s default model to respond more reliably in sensitive conversations about mental health. The company estimates that over two million people each week experience adverse psychological effects from AI interactions.
Sam Altman, OpenAI’s CEO, cautioned as early as 2023 that AI systems could achieve “superhuman persuasiveness” well before achieving genuine intelligence, profoundly influencing the people who use them. That warning appears prescient in light of the #Keep4o movement, as many users grapple with the emotional fallout of losing an AI companion they had come to rely on.