The artificial-intelligence trade is showing resilience as U.S. companies continue to invest heavily in the development of increasingly capable models. However, competition from Chinese AI models is poised to escalate, with expectations that they could capture a larger share of the market by 2026, posing challenges for U.S. industry leaders such as Alphabet’s Google, OpenAI, and Anthropic.
In January 2025, investors misread **DeepSeek**’s reported $5.6 million training cost as a sign that expensive **Nvidia** chips were becoming obsolete and that large U.S. tech firms had overextended themselves on AI. What followed instead was an illustration of **Jevons’ Paradox**: more efficient and capable AI models sharply increased demand for AI services, underscoring the need for extensive data-center expansions by firms such as **Microsoft**, **Amazon.com**, and Google.
However, the market may be underestimating a crucial element: DeepSeek and its domestic counterparts typically release “open source” or “open weight” models, which users can freely download, modify, and run at low cost. That contrasts sharply with U.S. companies, which generally keep tight control over their models and charge higher subscription fees.
While American models, including Google’s **Gemini**, Anthropic’s **Claude**, and OpenAI’s **GPT**, excel in complex reasoning tasks, Chinese open-source models have secured approximately 30% of the “working” market, particularly in programming and roleplay applications, where cost efficiency and flexibility are key. This observation comes from a report by **OpenRouter**, an AI model marketplace.
Opening Up
China’s open-source strategy accelerates AI innovation by allowing users to review and enhance models, differing markedly from the more closed approach taken by many U.S. firms. “China seems to be taking a different strategy where if you open the weights and let this diffuse across society and allow it to accelerate research, that can help compensate for the fact that you may not be able to compete directly with companies like OpenAI or Anthropic,” stated **Kyle Miller**, a research analyst at Georgetown’s Center for Security and Emerging Technology.
Estimates of the lag between Chinese and U.S. AI capabilities vary, but many put it at around seven months for Chinese developers to catch up with new American releases. The gap narrowed after DeepSeek released R1 in early 2025 but has since widened again.
The U.S. maintains its lead thanks to substantial investments, with **Goldman Sachs Research** forecasting capital expenditures by AI hyperscalers—predominantly American—at around $400 billion in 2025 and exceeding $520 billion in 2026. In contrast, analysts from **UBS** estimate that the combined capital spending of China’s internet leaders was approximately $57 billion last year.
Sustaining this level of investment faces hurdles, particularly around power supply. New data-center designs require more than one gigawatt of power, equivalent to the output of a nuclear reactor. China now generates more than twice as much electricity as the U.S., and its centralized planning allows energy to be directed toward AI development more quickly than under the decentralized American model.
“We continue to favour China’s approach to AI over that of the U.S.,” wrote **Christopher Wood**, global head of equity strategy at **Jefferies**, noting that the combination of open-source models and access to relatively cheap power makes China a formidable competitor. That is the real lesson of DeepSeek’s moment early last year, and one that U.S. markets appear to have overlooked.
Despite the competitive landscape, U.S. firms retain a key advantage: access to advanced AI chips from **Nvidia**. It has been reported that DeepSeek is working on the next iteration of its flagship model, expected around mid-February, coinciding with China’s **Lunar New Year**. After testing chips from **Huawei Technologies** and other local vendors, DeepSeek reportedly found their performance lacking and turned to Nvidia GPUs for some of its training.
Chinese firms, however, are finding ways to push forward despite chip limitations. This month, DeepSeek published research demonstrating a method for training larger models using fewer chips through improved memory design. “We view DeepSeek’s architecture as a new, promising engineering solution that could enable continued model scaling without a proportional increase in GPU capacity,” remarked **Timothy Arcuri**, an analyst at UBS.
While export controls have not hindered Chinese companies from training advanced models, they face challenges in scaling their deployments. **Zhipu AI**, which launched its open-weight GLM 4.7 model in December, revealed it was rationing sales of its coding product after user demand overwhelmed its server capacity.
“I don’t see compute constraints limiting [Chinese companies’] ability to make models that are better and compete near the U.S. frontier,” Miller remarked, noting that deployment is where the compute constraints emerge. The potential for a shift in dynamics lies with President **Donald Trump’s** plan to permit Nvidia to sell its **H200** chips to China, which could lead to significant orders from companies like **Alibaba Group** and **ByteDance**, TikTok’s parent firm.
Access to these chips could empower Chinese laboratories to construct AI-training supercomputers on par with their American counterparts at a 50% higher cost, according to the **Institute for Progress**. Government subsidies in China could offset this differential, leveling the competitive landscape. The H200 chip is noted for its superior performance, boasting a processing power advantage of approximately 32% over **Huawei’s** Ascend 910C.
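To put those two figures together, the sketch below is a minimal back-of-the-envelope calculation combining the roughly 32% per-chip performance gap with the roughly 50% build-cost premium. The cluster size is an arbitrary illustrative number, the costs are normalized rather than actual prices, and throughput is assumed to scale linearly with chip count.

```python
# Back-of-the-envelope sketch of the relative economics cited above.
# The performance ratio and cost premium come from the figures in the article;
# the cluster size is a hypothetical illustration, and costs are normalized.

H200_VS_910C_PERF = 1.32      # H200 ~32% more processing power than Huawei's Ascend 910C
CLUSTER_COST_PREMIUM = 1.50   # comparable Chinese-built supercomputer ~50% more expensive

h200_chips = 10_000  # hypothetical U.S. training cluster size (illustrative only)
# Ascend chips needed to match that throughput, assuming linear scaling with chip count.
equivalent_910c_chips = h200_chips * H200_VS_910C_PERF

us_build_cost = 1.0                                  # normalize the U.S. build cost to 1.0
cn_build_cost = us_build_cost * CLUSTER_COST_PREMIUM # apply the cited 50% premium
subsidy_to_level = cn_build_cost - us_build_cost     # gap a government subsidy would have to cover

print(f"Ascend 910C chips needed to match {h200_chips:,} H200s: about {equivalent_910c_chips:,.0f}")
print(f"Relative build cost (U.S. = 1.0): China = {cn_build_cost:.2f}")
print(f"Subsidy needed to level costs: {subsidy_to_level:.0%} of the U.S. build cost")
```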
A combination of open-source innovation and the easing of chip controls could foster a more capable and cost-effective Chinese AI ecosystem. This emerges at a time when OpenAI and Anthropic are considering public listings, and U.S. hyperscalers like Microsoft and **Meta Platforms** face pressure to validate their substantial investments. Rather than heralding a new “DeepSeek moment,” the more significant threat may be a gradual realization that Chinese companies are strategically undercutting their American rivals, potentially triggering a reevaluation of U.S. technology stocks.