Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant leap for the evaluation of artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a novel metric—Information Capacity—that shifts the focus from traditional size-based assessments of large language models (LLMs). This new approach asserts that the true “talent” of a model is determined not by its size, but by its efficiency in compressing and processing knowledge in relation to its computational cost.
Information capacity is defined as the ratio of a model’s intelligence to its inference complexity, essentially measuring the knowledge density embedded within the model. To illustrate, if a model is likened to a sponge and information to water, the information capacity indicates how effectively the sponge absorbs water. The findings from TeleAI demonstrate that models of varying sizes within the same series maintain a consistent information capacity, enabling a more equitable comparison of efficiency across different model series and accurate performance predictions within a given series.
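The release does not spell out the exact formulation, but the definition above implies a ratio of compression performance to inference cost. The minimal Python sketch below illustrates one way such a ratio could be computed; the bits-per-byte compression proxy, the FLOPs denominator, and all names here are assumptions for illustration, not TeleAI’s published method.

```python
import math

def information_capacity(corpus_bytes: int,
                         total_nll_nats: float,
                         inference_flops: float) -> float:
    """Illustrative intelligence-per-compute ratio (assumed proxies).

    - "Intelligence" is proxied by compression gain: the bits saved when
      the model's next-token predictions encode a text corpus, versus a
      raw 8-bits-per-byte baseline.
    - "Inference complexity" is proxied by total inference FLOPs.
    """
    # Convert the model's total negative log-likelihood (nats) to bits.
    model_bits = total_nll_nats / math.log(2)
    # Compression gain over storing the corpus as raw bytes.
    bits_saved = 8 * corpus_bytes - model_bits
    # Knowledge density: compression gain per unit of compute.
    return bits_saved / inference_flops

# Hypothetical numbers: a 1 MB corpus that the model encodes at
# ~2 bits/byte, spending 1e15 FLOPs on the pass.
corpus_bytes = 1_000_000
total_nll_nats = 2.0 * corpus_bytes * math.log(2)  # 2 bits/byte, in nats
print(information_capacity(corpus_bytes, total_nll_nats, 1e15))
```

Under this reading, a model that compresses the same corpus more tightly at the same compute cost, or equally tightly at lower cost, scores a higher information capacity.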
Guided by Professor Xuelong Li, the Chief Technology Officer and Chief Scientist at China Telecom, the TeleAI research team has utilized information capacity as a benchmark to evaluate an LLM’s capabilities. This innovative metric quantitatively assesses an LLM’s efficiency based on its compression performance relative to computational complexity. It not only highlights the intelligence density produced by a model per unit of computing resources but also aids in the optimal allocation of computational and communication resources within the AI Flow framework.
With the rising computational demands and energy consumption associated with inference workloads for large models, the need for accurate evaluation of inference efficiency has garnered increasing attention from LLM researchers. By implementing the information capacity metric, TeleAI has established a method for assessing the efficiency of large models across various architectures and sizes. Moreover, this metric can effectively guide the pre-training and deployment of models, further enhancing their utility.
This breakthrough offers a quantitative benchmark that could lead to more sustainable development practices for large models. It also facilitates the dynamic allocation of different-sized models to efficiently address tasks of varying complexities, a feature that aligns with the Device-Edge-Cloud infrastructure inherent to the AI Flow framework. As edge intelligence continues to evolve, this hierarchical network structure is set to challenge the conventional cloud-centric computing paradigm in the foreseeable future.
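As a rough illustration of how such a metric could drive Device-Edge-Cloud routing, the sketch below picks the cheapest tier whose model is predicted to handle a task’s complexity. The tier names, capability scores, and thresholds are hypothetical and not part of any published AI Flow specification.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str          # where the model runs (hypothetical label)
    capability: float  # assumed quality score for this model size
    cost: float        # assumed relative inference cost

# Hypothetical device-edge-cloud hierarchy, cheapest model first.
TIERS = [
    Tier("device-1B", capability=0.4, cost=1.0),
    Tier("edge-8B",   capability=0.7, cost=5.0),
    Tier("cloud-70B", capability=0.9, cost=30.0),
]

def route(task_complexity: float) -> Tier:
    """Return the cheapest tier whose model is expected to suffice."""
    for tier in TIERS:  # tiers are ordered by increasing cost
        if tier.capability >= task_complexity:
            return tier
    return TIERS[-1]  # fall back to the largest model

print(route(0.3).name)  # -> device-1B
print(route(0.8).name)  # -> cloud-70B
```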
In a move to foster collaborative advancements in the field, TeleAI has made all relevant code and data from this research available on GitHub and Hugging Face. This open-source initiative empowers the AI community to collectively push the boundaries of large model efficiency evaluation.
For further details, the codebase is hosted on GitHub, the dataset on Hugging Face, and a leaderboard for model evaluation on Hugging Face Spaces.