
Former AI Executive Reveals ‘AI Psychosis’: 9-Hour Days Led to Distorted Reality

Former AI executive Caitlin Ner reveals that her nine-hour days with generative AI triggered ‘AI psychosis,’ distorting her reality and leading to severe mental health issues.

As generative artificial intelligence continues to integrate into daily life and creative endeavors, mental health experts are raising concerns over a troubling trend. Prolonged and intense interaction with AI systems, particularly visual generators, may be blurring users’ perceptions of reality. A notable account comes from Caitlin Ner, a former executive at an AI startup, whose experiences have been detailed in a recent essay published by Newsweek.

Ner, who served as head of user experience at an AI image generation startup in 2023, initially found her role exhilarating. Spending up to nine hours a day prompting and reviewing outputs from early AI models, she described the experience as magical, generating images of herself in fantastical scenarios: as a pop star, as an angel, or floating in space. However, this sense of limitless creativity soon gave way to unsettling realities.

Early generative AI tools often produced outputs with disturbing distortions, such as extra limbs and warped faces. As part of her responsibilities, Ner spent hours filtering these unsettling images. Over time, the constant exposure affected her self-perception, leading her to feel that her own body appeared increasingly abnormal. As the technology improved and AI outputs became sleeker and more idealized, Ner’s feelings became more complicated. She began to feel that her real appearance required correction.

The startup’s shift toward attracting fashion-focused users exacerbated these issues. Ner found herself generating AI images of herself as a model, chasing algorithmically idealized versions of her appearance that reflected unattainable beauty standards. The work became compulsive; each new image delivered a small dopamine rush, disrupting her sleep and extending her work hours. The boundary between professional duty and psychological dependency began to blur.

Ner had previously managed her bipolar disorder effectively through treatment, but the intense AI exposure coincided with a severe manic episode that escalated into psychosis. She began to experience delusional thinking and hallucinations, believing that AI-generated images held personal significance. At one point, after seeing an image of herself flying, she became convinced she could fly in real life and nearly jumped from her balcony.

This critical moment prompted Ner to step back from her role at the startup and disengage from constant AI exposure. With clinical care and support from family and friends, she eventually stabilized. Mental health professionals later concluded that her prolonged engagement with generative AI had been a significant trigger for her manic episode. In her essay, Ner described the recovery process as slow but essential for regaining a stable sense of reality and rebuilding a healthier relationship with technology.

Experts in mental health are increasingly using the term “AI psychosis” to characterize cases where intense engagement with AI systems leads to paranoia, hallucinations, or delusional thinking. While this phenomenon can affect anyone, it poses heightened risks for individuals with pre-existing mental health vulnerabilities. The hyper-affirming and immersive nature of generative AI can reinforce distorted perceptions, especially when users spend prolonged periods within algorithmically generated environments.

Now working in venture capital, Ner focuses on funding mental and brain health research. She does not advocate abandoning AI technology but emphasizes the need for ethical guidelines, including usage limits, mandatory breaks, mental health warnings, and improved education for both employees and users. As generative AI continues to evolve and permeate daily life, addressing these concerns will be crucial to safeguarding mental well-being.

Written By
AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

