
TeleAI Launches Revolutionary Metric, “Information Capacity,” to Transform AI Model Evaluation

China Telecom’s TeleAI introduces “Information Capacity,” a groundbreaking metric to evaluate AI model efficiency, revolutionizing assessments by focusing on knowledge density over size.

Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant step for the evaluation of artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a novel metric, "Information Capacity," that shifts the evaluation of large language models (LLMs) away from traditional size-based assessments. The new approach asserts that a model's true "talent" is determined not by its size, but by how efficiently it compresses and processes knowledge relative to its computational cost.

Information capacity is defined as the ratio of a model’s intelligence to its inference complexity, essentially measuring the knowledge density embedded within the model. To illustrate, if a model is likened to a sponge and information equates to water, the information capacity indicates how effectively the sponge absorbs water. The findings from TeleAI demonstrate that models of varying sizes maintain a consistent information capacity, enabling a more equitable comparison of efficiency across different model series and providing accurate performance predictions within a given model series.
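The ratio described above can be sketched in code as a rough illustration of the idea, not TeleAI's exact formulation: here compression performance (bits per byte, lower is better) stands in for "intelligence," and inference FLOPs stand in for complexity. Both proxies, the log scaling, and all numbers below are assumptions made purely for illustration.

```python
import math

def information_capacity(compression_bpb: float, inference_flops: float) -> float:
    """Illustrative ratio of intelligence to inference complexity.

    Better compression (lower bits per byte) raises the score; higher
    inference cost lowers it. TeleAI's published formulation may differ;
    this is only a sketch of the concept.
    """
    intelligence = 1.0 / compression_bpb          # proxy for model "intelligence"
    complexity = math.log10(inference_flops)      # proxy for inference complexity
    return intelligence / complexity

# Two hypothetical models: under such a metric, a small and a large model
# with similar knowledge density should score comparably.
small = information_capacity(compression_bpb=0.95, inference_flops=1e9)
large = information_capacity(compression_bpb=0.80, inference_flops=1e12)
```

On these made-up inputs the two scores land close together, which is the property the article highlights: a consistent information capacity across model sizes within a series.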

Guided by Professor Xuelong Li, the Chief Technology Officer and Chief Scientist at China Telecom, the TeleAI research team has utilized information capacity as a benchmark to evaluate an LLM’s capabilities. This innovative metric quantitatively assesses an LLM’s efficiency based on its compression performance relative to computational complexity. It not only highlights the intelligence density produced by a model per unit of computing resources but also aids in the optimal allocation of computational and communication resources within the AI Flow framework.

With the rising computational demands and energy consumption associated with inference workloads for large models, the need for accurate evaluation of inference efficiency has garnered increasing attention from LLM researchers. By implementing the information capacity metric, TeleAI has established a method for assessing the efficiency of large models across various architectures and sizes. Moreover, this metric can effectively guide the pre-training and deployment of models, further enhancing their utility.

This breakthrough offers a quantitative benchmark that could lead to more sustainable development practices for large models. It also facilitates the dynamic allocation of different-sized models to efficiently address tasks of varying complexities, a feature that aligns with the Device-Edge-Cloud infrastructure inherent to the AI Flow framework. As edge intelligence continues to evolve, this hierarchical network structure is set to challenge the conventional cloud-centric computing paradigm in the foreseeable future.
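The dynamic allocation idea can be sketched as a simple dispatcher that routes a task to a device, edge, or cloud model tier based on an estimated complexity score. The thresholds and tier names below are hypothetical, chosen only to illustrate the Device-Edge-Cloud pattern; they are not part of the AI Flow specification.

```python
def route_task(complexity: float) -> str:
    """Hypothetical dispatcher: map a task-complexity estimate in [0, 1]
    to a model tier. Thresholds are illustrative assumptions."""
    if complexity < 0.3:
        return "device-model"   # small on-device LLM for simple tasks
    elif complexity < 0.7:
        return "edge-model"     # mid-sized model at the network edge
    return "cloud-model"        # largest model, reserved for hard tasks
```

In such a scheme, an information-capacity-style score for each tier could inform where the thresholds sit, so that compute is spent where it yields the most intelligence per unit cost.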

In a move to foster collaborative advancements in the field, TeleAI has made all relevant code and data from this research available on GitHub and Hugging Face. This open-source initiative empowers the AI community to collectively push the boundaries of large model efficiency evaluation.

For more insights, the codebase can be accessed at GitHub, the dataset is available at Hugging Face, and a leaderboard for model evaluation can be found at Hugging Face Spaces.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.