Anthropic has entered into a significant agreement with Google and Broadcom to secure multiple gigawatts of next-generation AI compute capacity, which is anticipated to become operational in 2027. This partnership underscores Anthropic’s strategy to scale its infrastructure in response to the growing demand for its Claude models.
The deal comes at a time of remarkable revenue growth for Anthropic. The company reports that its annualised revenue run rate has surpassed $30 billion in 2026, up from approximately $9 billion at the close of 2025. Anthropic also says it now serves more than 1,000 enterprise customers that each spend over $1 million annually, a count that has doubled in less than two months.
This growth trajectory positions Anthropic ahead of its competitor OpenAI, which, according to a recent Reuters report, had crossed $25 billion in annualised revenue as of early 2026. The surge in reported revenues highlights the rapid evolution and intensifying competition within the enterprise AI market.
However, the soaring ARR figures across various AI firms have sparked scrutiny. Recent discussions concerning startups like Emergent have raised concerns about the methodologies used to calculate “run rate” revenue, particularly when metrics are based on short-term usage or token consumption, rather than long-term contracts. This complicates direct comparisons across companies.
As Anthropic expands its infrastructure, it is also adjusting its pricing in response to rising operational costs. The company has started charging separately for tools like OpenClaw, citing the high compute demands of agent-based tasks. The shift signals a move away from flat subscription fees as usage patterns become more resource-intensive.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” commented Krishna Rao, CFO of Anthropic.
The majority of the new compute capacity will be established within the United States, furthering Anthropic’s earlier commitment to invest $50 billion in AI infrastructure domestically. Currently, Anthropic employs a diverse mix of hardware platforms, including chips from Amazon Web Services, Google TPUs, and NVIDIA GPUs. This multi-platform strategy is designed to optimise performance and mitigate reliance on a single supplier.
Even as it deepens its collaboration with Google, Anthropic retains Amazon as its primary cloud and training partner, with the two companies continuing joint work on Project Rainier. Notably, Anthropic’s Claude models remain accessible across all three major cloud platforms: Amazon Web Services, Google Cloud, and Microsoft Azure.
As the demand for AI solutions escalates, Anthropic’s strategic partnerships and infrastructure investments may play a pivotal role in shaping the future landscape of the AI industry, positioning the company for sustained growth amidst an increasingly competitive environment.