Google engineers have introduced a groundbreaking method for compressing artificial intelligence (AI) data, allowing models to run effectively with up to six times less working memory. The new system, named TurboQuant, lets AI algorithms retain the same information and perform equally robust computations while significantly reducing hardware memory requirements, according to the company.
AI algorithms traditionally demand substantial working memory, often referred to as the key-value (KV) cache, for optimal performance. This cache temporarily stores intermediate computational results and other pertinent information during active processing. For instance, when users ask a system such as ChatGPT about the weather, the system stores key terms and contextual data in the KV cache to generate accurate responses. A larger KV cache allows more information to be processed simultaneously, enhancing the AI's performance.
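For readers who want a concrete picture, here is a minimal sketch in Python of how a KV cache accumulates entries during generation. The names and dimensions are illustrative, not Google's implementation; the point is that each generated token appends one key and one value, so the cache grows with sequence length.

```python
import numpy as np

HEAD_DIM = 64  # illustrative per-head dimension

kv_cache = {"keys": [], "values": []}

def attend(query, key, value):
    """Append this step's key/value, then attend over everything cached so far."""
    kv_cache["keys"].append(key)
    kv_cache["values"].append(value)
    K = np.stack(kv_cache["keys"])      # (seq_len, HEAD_DIM)
    V = np.stack(kv_cache["values"])    # (seq_len, HEAD_DIM)
    scores = K @ query / np.sqrt(HEAD_DIM)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                  # context vector for this step

# Every step adds one key and one value, so the cache's memory
# footprint grows linearly with the number of tokens processed.
for step in range(5):
    q, k, v = np.random.randn(3, HEAD_DIM)
    out = attend(q, k, v)
print("cached tokens:", len(kv_cache["keys"]))
```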
Each sentence may use only a few dozen tokens (the basic units of AI prompts and responses), but more advanced tasks can require storing hundreds of thousands of tokens, which translates into memory requirements in the tens of gigabytes. And because ChatGPT handles billions of requests daily, its memory demands grow linearly with user activity.
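A back-of-envelope calculation shows where those gigabytes come from. The sketch below uses the published architecture figures for a Llama-3.1-8B-class model (32 layers, 8 KV heads, head dimension 128); treat the result as a rough estimate rather than a measurement of any deployed system.

```python
# Rough KV-cache sizing for a Llama-3.1-8B-class model at fp16 precision.
layers, kv_heads, head_dim = 32, 8, 128
bytes_per_value = 2            # fp16
context_tokens = 500_000       # "hundreds of thousands of tokens"

# Both keys and values are cached, hence the leading factor of 2.
cache_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value * context_tokens
print(f"{cache_bytes / 2**30:.1f} GiB")   # ~61 GiB: "tens of gigabytes"
```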
TurboQuant’s compression algorithm reduces the memory AI models need during these computations through a process called quantization, which represents values with fewer bits. While Google has long employed quantization in its neural networks, it typically applied the strategy statically, meaning the compression happens once and does not adapt during model operation. TurboQuant innovates by reducing the KV cache’s memory dynamically, in real time, a harder problem because the compressed data must remain accurate and up to date as the model generates outputs.
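The article does not publish TurboQuant's exact algorithm, but the static-versus-dynamic distinction is easy to see with generic 8-bit quantization. In the sketch below, a static scheme would fix the scale factor once, offline; a dynamic scheme recomputes it for each new block of cache entries as they are written, tracking the data the model is actually producing.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric 8-bit quantization: int8 codes plus one floating-point scale."""
    scale = max(float(np.abs(x).max()), 1e-8) / 127.0
    codes = np.round(x / scale).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return codes.astype(np.float32) * scale

# Dynamic use: compute a fresh scale for each block of new KV entries.
block = np.random.randn(16, 64).astype(np.float32)
codes, scale = quantize_int8(block)
max_error = np.abs(block - dequantize(codes, scale)).max()
print(f"worst-case reconstruction error: {max_error:.4f}")
```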
In recent tests involving Meta’s Llama 3.1-8B, Google’s Gemma, and Mistral AI models, TurboQuant demonstrated significant potential for alleviating key-value bottlenecks without compromising AI performance. Google representatives noted the findings could have “potentially profound implications for all compression-reliant use cases, particularly in domains like search and AI.”
TurboQuant could theoretically shrink the KV cache by at least a factor of six using two techniques: PolarQuant and Quantized Johnson-Lindenstrauss (QJL). To understand these methods, it helps to know that data in an AI model's working memory is stored as vectors, lists of numbers that encode a magnitude and a direction. PolarQuant reformulates this data from Cartesian coordinates into polar coordinates, which aligns vector angles more consistently and improves compression. The QJL method then fine-tunes the vectors slightly to correct computational errors introduced by the quantization process.
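As a loose, two-dimensional illustration of the polar-coordinate idea: a vector (x, y) can be re-expressed as a magnitude and an angle, and the angle snapped to a coarse grid that takes only a few bits to store. The second half of the sketch shows the Johnson-Lindenstrauss ingredient of QJL, a random projection that approximately preserves inner products. This is a toy demonstration of the two underlying ideas, not the published algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
ANGLE_BITS = 4
BINS = 2 ** ANGLE_BITS

def polar_quantize(v):
    # Re-express a 2-D vector as (magnitude, angle), then snap the angle
    # to one of 16 evenly spaced bins; only the bin index is stored.
    r = float(np.hypot(v[0], v[1]))
    theta = float(np.arctan2(v[1], v[0]))
    code = int(round((theta + np.pi) / (2 * np.pi) * (BINS - 1)))
    return r, code

def polar_dequantize(r, code):
    theta = code / (BINS - 1) * 2 * np.pi - np.pi
    return np.array([r * np.cos(theta), r * np.sin(theta)])

v = rng.standard_normal(2)
r, code = polar_quantize(v)
print("angle-quantization error:", np.linalg.norm(v - polar_dequantize(r, code)))

# Johnson-Lindenstrauss random projection (the "JL" in QJL): projecting two
# vectors through a shared random matrix roughly preserves their inner
# product, which bounds the error that downstream quantization introduces.
d, k = 64, 16
P = rng.standard_normal((k, d)) / np.sqrt(k)
a, b = rng.standard_normal(d), rng.standard_normal(d)
print("exact:", a @ b, "projected:", (P @ a) @ (P @ b))
```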
Matthew Prince, CEO of Cloudflare, referred to the breakthrough as “Google’s DeepSeek moment,” drawing a parallel to the unexpected release of a Chinese AI model that achieved remarkable results at lower cost. The unveiling of TurboQuant on March 24 triggered a significant drop in the stock prices of memory companies such as SanDisk, Western Digital, and Seagate. Despite its potential to enhance AI efficiency, the technology remains in the laboratory phase and has not yet seen widespread implementation.
Notably, TurboQuant compresses working memory only during inference, the process of generating responses to prompts. The training phase of these models often requires up to four times more memory than inference, meaning the overall impact on memory usage may be relatively modest. As Merrill Lynch analyst Vivek Arya communicated to concerned investors, the “6x improvement in memory efficiency [will] likely [lead] to 6x increase in accuracy (model size) and/or context length (KV cache allocation), rather than a 6x decrease in memory.”
Google officially introduced TurboQuant at the ICLR 2026 conference, held April 23 to 27 in Rio de Janeiro, and will present the PolarQuant and QJL techniques at AISTATS 2026 in Tangier, Morocco, in early May, signaling a promising future for AI data compression and efficiency.