The era of seemingly unlimited free access to advanced generative AI tools has come to a sudden halt as OpenAI and Google, two leaders in the artificial intelligence landscape, have imposed strict daily usage caps. OpenAI’s video generator, Sora, and Google’s image model, Nano Banana Pro, are now subject to these limitations, primarily affecting non-paying users. These restrictions highlight the underlying costs associated with running cutting-edge AI systems, which demand significant computing power and energy, straining even the most advanced data centers.
OpenAI has implemented the most drastic limitations, capping free users of its video generation model, Sora 2, at just six video generations per day. This is a significant reduction from the estimated 30 or more generations that users could produce in the weeks following the model’s public launch. Bill Peebles, head of Sora at OpenAI, confirmed the change on the social media platform X, stating, “Our GPUs are melting, and we want to let as many people access Sora as possible!” His hyperbolic remark points to a real infrastructure challenge: generating even a few seconds of high-fidelity video requires far more GPU time and electricity than text or static image generation. The new limits are intended to ration these resources, particularly during the holiday period, when demand has surged.
Paid subscribers to ChatGPT Plus and Pro are exempt from these new caps, a clear indicator of OpenAI’s dual strategy: use the free tier as a demonstration platform while safeguarding the premium experience for paying customers. Paid users can also purchase extra video generation tokens if needed.
Google has adopted a more subdued yet equally significant approach, tightening access to its Nano Banana Pro model within the Gemini AI ecosystem. Free users of the image generation and editing tool are now limited to two generated images per day, down from three and far below the older Nano Banana model, which allowed up to 100 free images per day. Google attributes the change to “high demand” for image generation and editing, warning users that “limits may change frequently and will reset daily.” Alongside these image restrictions, free access to the conversational capabilities of Gemini 3 Pro has also been scaled back, with guaranteed free prompts replaced by vague language stating that “daily limits may change frequently.”
The simultaneous rollbacks by these two industry giants reveal the economic challenges of providing expensive AI services at no cost. Dr. Lena Chen, an AI economist at the Future of Tech Institute, commented, “The computational cost is the most immediate reason for these limits. Generating a single complex video with Sora effectively subsidizes a huge chunk of GPU time for a non-paying user. As demand grows, that cost becomes unsustainable for the company if it wants to maintain service quality for everyone.”
These new limitations signal a commercial pivot, transforming generative AI from an experimental “free buffet” into a more monetized service. By restricting the free tier, both OpenAI and Google are incentivizing heavy users—such as artists, content creators, and professionals—to consider paid subscription models that offer higher and more stable usage ceilings. For these users, the value proposition is straightforward: they can pay for priority access and guaranteed computing resources.
For casual users, the newly imposed limits necessitate more careful rationing of daily allowances. While generating two images or six videos may suffice for light experimentation, those reliant on high-volume iteration will find their creative processes severely constrained unless they opt to upgrade their subscriptions. The operational costs associated with powerful generative models have finally begun to outweigh the advantages of providing unrestricted free access, ushering in a more sustainable yet less generous phase in the consumer AI landscape.
See also
Kling AI Launches Kling O1, First Unified Multimodal Video Model for Seamless Content Creation
Penn State Students Combine Art, Writing, and AI for Creative Self-Discovery Project
Pollo AI Launches Video Generator, Transforming Images and Text into Dynamic Clips
Pixazo Launches Kling O1 API, a Unified Multi-Modal Engine for Advanced Visual Creation
Apple Launches STARFlow-V: Open-Source Text-to-Video Model Surpasses Diffusion Techniques