Meta Platforms (META) has reportedly entered into a multi-billion-dollar agreement with Google (GOOGL) to rent artificial intelligence chips, specifically Google’s Tensor Processing Units (TPUs). According to a Thursday report in The Information, the deal will enable Meta to train and operate its next-generation large language models (LLMs), reflecting the ongoing investment by tech firms in chip and data center infrastructure to support the increasing demands of AI workloads.
As the AI landscape evolves, Google is positioning its TPUs as a competitive alternative to Nvidia’s (NVDA) market-leading GPUs, which currently dominate AI application processing. The reported deal is a significant step for Meta as it seeks to enhance its AI capabilities amidst a broader industry trend of substantial financial commitments toward advanced computing resources. The arrangement signifies Google’s strategy to bolster its cloud services through TPU sales, which have become a crucial revenue stream.
In November, Google unveiled its latest TPU generation, known as “Ironwood.” The series allows a single AI server pod to scale to 9,216 Ironwood TPUs, linked through high-speed interconnects offering bandwidth of up to 9.6 terabits per second. A full pod can also draw on 1.77 petabytes of shared high-bandwidth memory, a substantial leap in capacity. Google claims an Ironwood pod delivers more than 118 times the FP8 ExaFLOPS of its closest competitor, and roughly four times the training and inference performance of its previous TPU generation, Trillium.
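The pod-level memory figure is easy to sanity-check against per-chip specs. Assuming 192 GB of high-bandwidth memory per Ironwood chip (Google's published per-chip figure; treated here as an assumption), a quick back-of-envelope calculation recovers the reported pod total:

```python
# Back-of-envelope check of the Ironwood pod memory figure.
# Assumes 192 GB of HBM per chip (Google's published spec; an assumption here).
chips_per_pod = 9216
hbm_per_chip_gb = 192

total_gb = chips_per_pod * hbm_per_chip_gb
total_pb = total_gb / 1_000_000  # decimal petabytes

print(f"{total_pb:.2f} PB")  # ≈ 1.77 PB, matching the reported pod total
```

The product works out to 1,769,472 GB, which rounds to the 1.77 PB cited in the report.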
Meta’s agreement follows Google’s existing collaboration with Anthropic, an AI firm that has used TPUs to scale its Claude models significantly. That partnership is reportedly valued in the “tens of billions of dollars” and grants Anthropic access to up to one million TPUs, underscoring the sector’s growing reliance on Google’s chip technology.
While the current agreement between Meta and Google is for cloud access to TPUs, The Information also indicated that Meta is in discussions with Google regarding a potential purchase of TPUs for its data centers, which could materialize as early as 2027. However, the outcome of these negotiations remains uncertain, indicating the complexity and evolving nature of Meta’s AI strategy.
In parallel to its negotiations with Google, Meta has also made headlines this week with a landmark $60 billion deal with Advanced Micro Devices (AMD). The agreement calls for up to 6 gigawatts of AMD’s Instinct-series GPUs over the next five years, aimed at bolstering Meta’s AI training and inference capacity across its global operations. The initial deployment will center on AMD’s Instinct MI450 GPUs, alongside EPYC CPUs and the Helios rack-scale AI server architecture, co-engineered with Meta specifically for AI workloads.
The agreement with AMD also includes warrants that allow Meta to acquire up to 10% of AMD stock, vesting over the course of their partnership. This connection ties Meta’s computational needs to AMD’s financial performance, further solidifying their strategic alliance.
In addition to these collaborations, Meta is reportedly investing in the development of its own AI chips. The company has been working with Taiwan Semiconductor Manufacturing Co. (TSMC) on an updated version of its Meta Training and Inference Accelerator (MTIA) chips, which are anticipated to launch later this year. This endeavor underscores Meta’s commitment to strengthening its position in the AI market through both partnerships and internal development.
As the competitive landscape of AI technology continues to intensify, collaborations like those between Meta, Google, and AMD are poised to reshape the future of artificial intelligence, driving innovation and efficiency within the industry. The agreements underline the critical importance of advanced computing resources in the race to develop next-generation AI applications, setting the stage for further advancements and potential disruptions in the tech ecosystem.