
Google’s Gemini 3 Launch Transforms AI Landscape, Challenging Nvidia’s Dominance

Google’s Gemini 3, leveraging proprietary tensor processing units, threatens Nvidia’s dominance as $250 billion evaporates from its market cap amid competitive shifts in AI infrastructure.

By Dipak Kurmi

The artificial intelligence industry experienced significant upheaval in late November when Nvidia, a dominant player in AI hardware, saw approximately $250 billion wiped from its market capitalization in a single trading session. On November 25, shares of the chipmaking giant fell by 3%, stirring anxiety in technology markets and echoing memories of a dramatic collapse earlier in the year. Although this downturn is minor compared to the 17% plunge that erased nearly $600 billion from Nvidia’s value during the DeepSeek frenzy in January, the implications for the future of AI computing architecture may be more substantial.

The recent market turbulence was triggered not by geopolitical tensions or flaws in Nvidia’s business model, but by the positive reception of Gemini 3, Google’s latest generation of large language models. Unlike the earlier DeepSeek incident, which raised concerns that more efficient AI models could reduce demand for Nvidia’s costly graphics processing units, Gemini 3 poses a more existential threat to Nvidia’s market position. The new model operates entirely on Google’s proprietary tensor processing units, a milestone in which a major tech company has achieved frontier AI capabilities without relying on Nvidia’s hardware ecosystem.

Gemini 3 is part of Google’s ambitious push into next-generation artificial intelligence and ships in several variants, including Gemini 3 Pro, Gemini 3 Pro Image, and the Gemini 3 Deep Think reasoning mode. The flagship, Gemini 3 Pro, is a multimodal reasoning model that can process and comprehend text, images, audio, and spatial cues across multiple languages. Its one-million-token input context window lets users supply far more material in a single exchange than earlier models allowed, supporting longer and more nuanced interactions.
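For a sense of scale, a back-of-envelope calculation shows what a one-million-token window can hold. The figures below rely on the rough industry heuristic of about four characters, or three-quarters of a word, per English token; actual tokenizer counts vary by model and language:

```python
# Back-of-envelope: what fits in a one-million-token context window?
# Assumes ~4 characters (~0.75 words) per English token, a common rough
# heuristic; real tokenizer counts vary by model and language.

CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4
WORDS_PER_TOKEN = 0.75

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN       # ~4 million characters
approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # ~750,000 words

# A typical novel runs on the order of 90,000 words, so the window
# comfortably holds several book-length documents at once.
novels = approx_words / 90_000

print(f"~{approx_chars:,} characters, ~{approx_words:,} words, "
      f"~{novels:.0f} novel-length texts")
```

By this estimate the window accommodates roughly eight novels' worth of text in a single prompt, which is what makes long-document analysis practical without chopping inputs into pieces.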

Google’s technical documentation indicates that Gemini 3 Pro has been designed to infer the intent and context of user prompts, potentially eliminating the elaborate prompt engineering that has characterized past AI interactions. The model uses a sparse mixture-of-experts architecture, improving both computational and cost efficiency by activating only a small subset of the network’s parameters for any given input. Google says the model has been rigorously tested to minimize sycophancy, a common failing of earlier AI systems whose responses tended to be overly agreeable. Nevertheless, the challenge of hallucinations, in which a model generates inaccurate information, persists and remains an area requiring ongoing attention.
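The mixture-of-experts idea can be illustrated with a toy sketch. This is not Google's implementation; the layer sizes, the linear router, and the top-2 selection below are all illustrative assumptions, but they show the core trick: a router scores every expert, yet only a handful actually execute per token, leaving most parameters idle on any given input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse mixture-of-experts layer (illustrative sizes only).
NUM_EXPERTS, TOP_K, D_MODEL = 8, 2, 16

router_w = rng.normal(size=(D_MODEL, NUM_EXPERTS))           # router weights
expert_w = rng.normal(size=(NUM_EXPERTS, D_MODEL, D_MODEL))  # one matrix per expert

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    logits = x @ router_w              # score every expert (cheap)
    top = np.argsort(logits)[-TOP_K:]  # keep the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts only
    # Only TOP_K of the NUM_EXPERTS expert matrices are multiplied here,
    # which is where sparse MoE models save compute per token.
    return sum(w * (x @ expert_w[e]) for w, e in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Here only 2 of 8 experts run per token, so roughly three-quarters of the expert parameters contribute no compute for that input; production systems apply the same routing per token at vastly larger scale.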

The technical capabilities of Gemini 3 have been validated through outstanding performance in benchmark assessments, ranking at the top of the LM Arena leaderboard and receiving high marks on Humanity’s Last Exam and GPQA Diamond. These accolades are accompanied by strong qualitative endorsements from industry leaders, including Salesforce CEO Marc Benioff, who stated he would not revert to using ChatGPT after experiencing Gemini 3’s capabilities. Analysts from firms such as DA Davidson and Bank of America Securities described the model as the current state of the art, highlighting its significance for Google’s AI ambitions.

The competitive ramifications of Gemini 3 extend beyond performance metrics, influencing the economics of AI infrastructure. Major hyperscalers, including Google, Microsoft, Amazon, Meta, and Oracle, have become heavily reliant on Nvidia’s graphics processing units for training and developing AI models, while also leasing this computational capacity to startups like OpenAI and Anthropic. This dependence creates a lucrative yet potentially vulnerable position for Nvidia, as these tech giants have been discreetly investing in proprietary AI chips to lessen reliance on external suppliers and ultimately reduce operational costs. Despite the steep upfront development expenses, these custom chips present long-term financial benefits and strategic autonomy.

Google’s tensor processing units are a pivotal development in this landscape. First introduced in 2015, TPUs are credited with playing a key role in the creation of the transformer architecture foundational to modern large language models. These application-specific integrated circuits are optimized for specific computations, contrasting with the general-purpose flexibility of graphics processing units. While GPUs are effective for training models, the industry is increasingly focused on optimizing inference, where trained models generate outputs based on new data.

Earlier in November, Google announced the launch of its seventh generation of TPU chips, codenamed Ironwood, with plans to deploy one million units to support Anthropic’s Claude models amidst rising customer demand. According to Gemini 3 Pro’s specifications, TPUs are engineered for the extensive computations needed to train large language models, significantly accelerating training when compared to traditional central processing units. These specialized chips also incorporate high-bandwidth memory, allowing them to manage larger models and batch sizes during training, thus enhancing overall model quality. Currently, Google does not sell its TPUs directly but instead provides access through Google Cloud services, actively pitching these chips to firms like Meta.
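The link between memory capacity and batch size can be made concrete with a rough, purely illustrative calculation. The model size, memory figure, and per-example activation cost below are assumptions for the sketch, not Ironwood's actual specifications:

```python
# Rough sketch of why on-chip high-bandwidth memory (HBM) caps batch size:
# model weights are stored once, but activation memory grows with the batch.
# All numbers are illustrative assumptions, not actual TPU/Ironwood specs.
# (Real training also needs memory for gradients and optimizer state.)

BYTES_PER_PARAM = 2           # bf16 precision
params = 10e9                 # hypothetical 10B-parameter model
act_bytes_per_example = 50e6  # assumed activation memory per example

hbm_bytes = 192e9             # hypothetical 192 GB of HBM

weight_bytes = params * BYTES_PER_PARAM        # 20 GB held once
spare = hbm_bytes - weight_bytes               # memory left for activations
max_batch = int(spare // act_bytes_per_example)

print(f"weights: {weight_bytes/1e9:.0f} GB, "
      f"room for a batch of ~{max_batch} examples")
```

Under these assumptions the weights occupy 20 GB and the remaining memory supports a batch of a few thousand examples; double the HBM and the feasible batch (or model) size grows accordingly, which is the advantage the high-bandwidth memory claim is pointing at.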

The possibility of Meta adopting Google’s chips for AI model development has not only influenced recent stock market reactions but has also elicited a defensive stance from Nvidia. In a post on social media platform X, Nvidia expressed its delight at Google’s success while asserting its own technological superiority, claiming it remains a generation ahead as the only platform capable of supporting every AI model across diverse computing environments. The company emphasized that its GPUs offer superior performance and versatility compared to application-specific integrated circuits. While diplomatically phrased, this response reveals underlying concern over the competitive threat from custom chip development.

Historically, Nvidia has employed aggressive strategies to cultivate customer loyalty, including financing chip purchases through circular deals that have sparked speculation about a potential AI market bubble. Notably, when it was reported that OpenAI was considering using Google’s chips, Nvidia pledged to invest up to $100 billion in the company, securing an agreement for OpenAI to utilize Nvidia’s next-generation hardware. Likewise, Nvidia committed $10 billion to Anthropic, ensuring the startup would leverage both Nvidia’s new hardware and Google’s TPU. Anthropic acknowledged this multi-platform approach in an October blog post, highlighting its role in advancing Claude’s capabilities while maintaining robust industry partnerships.

The Gemini 3 episode underscores a critical tension reshaping the AI industry’s technological landscape, as major players pursue independence from Nvidia’s hardware ecosystem while the chipmaker strives to uphold its market dominance through strategic investments and innovation.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.