
TeleAI Launches Revolutionary Metric, “Information Capacity,” to Transform AI Model Evaluation

China Telecom’s TeleAI introduces “Information Capacity,” a groundbreaking metric to evaluate AI model efficiency, revolutionizing assessments by focusing on knowledge density over size.

Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant step for the evaluation of artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a novel metric, Information Capacity, that shifts the focus away from traditional size-based assessments of large language models (LLMs). The new approach holds that a model's true capability is determined not by its size, but by how efficiently it compresses and processes knowledge relative to its computational cost.

Information capacity is defined as the ratio of a model’s intelligence to its inference complexity, essentially measuring the knowledge density embedded within the model. To illustrate, if a model is likened to a sponge and information equates to water, the information capacity indicates how effectively the sponge absorbs water. The findings from TeleAI demonstrate that models of varying sizes maintain a consistent information capacity, enabling a more equitable comparison of efficiency across different model series and providing accurate performance predictions within a given model series.
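To make the ratio concrete, the sketch below scores a toy "information capacity" as compression gain per unit of inference compute. This is an illustrative reading of the definition in this announcement, not TeleAI's actual formula: the function names, the uniform-baseline compression measure, and the use of raw FLOPs as the complexity term are all assumptions.

```python
import math

def bits_saved_by_model(token_logprobs, vocab_size):
    """Compression gain over a uniform baseline, in bits.

    A model that assigns high probability to the observed tokens needs
    fewer bits to encode them than the log2(vocab_size) bits per token
    a uniform code would use. Log-probabilities are in nats, as most
    LLM APIs report them, so we convert to bits.
    """
    baseline_bits = len(token_logprobs) * math.log2(vocab_size)
    model_bits = -sum(lp for lp in token_logprobs) / math.log(2)
    return baseline_bits - model_bits

def information_capacity(bits_saved, inference_flops):
    """Toy information-capacity score: knowledge compressed per unit
    of inference compute (the 'water absorbed per sponge volume')."""
    return bits_saved / inference_flops
```

Under this toy scoring, a small model that compresses text nearly as well as a large one, at a fraction of the FLOPs, would earn a higher capacity score, which is the kind of size-independent comparison the metric is meant to enable.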

Guided by Professor Xuelong Li, the Chief Technology Officer and Chief Scientist at China Telecom, the TeleAI research team has utilized information capacity as a benchmark to evaluate an LLM’s capabilities. This innovative metric quantitatively assesses an LLM’s efficiency based on its compression performance relative to computational complexity. It not only highlights the intelligence density produced by a model per unit of computing resources but also aids in the optimal allocation of computational and communication resources within the AI Flow framework.

With the rising computational demands and energy consumption associated with inference workloads for large models, the need for accurate evaluation of inference efficiency has garnered increasing attention from LLM researchers. By implementing the information capacity metric, TeleAI has established a method for assessing the efficiency of large models across various architectures and sizes. Moreover, this metric can effectively guide the pre-training and deployment of models, further enhancing their utility.

This breakthrough offers a quantitative benchmark that could lead to more sustainable development practices for large models. It also facilitates the dynamic allocation of different-sized models to efficiently address tasks of varying complexities, a feature that aligns with the Device-Edge-Cloud infrastructure inherent to the AI Flow framework. As edge intelligence continues to evolve, this hierarchical network structure is set to challenge the conventional cloud-centric computing paradigm in the foreseeable future.
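The dynamic-allocation idea described above can be sketched as a simple router that sends each task to the cheapest model tier whose capability covers it. The tier names, capability scores, and FLOP figures below are hypothetical placeholders, not TeleAI's AI Flow implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    capability: float      # hypothetical benchmark score in [0, 1]
    flops_per_token: float # hypothetical inference cost

# Illustrative Device-Edge-Cloud hierarchy, ordered cheapest first.
TIERS = [
    ModelTier("device-1B", 0.55, 2e9),
    ModelTier("edge-8B", 0.72, 1.6e10),
    ModelTier("cloud-70B", 0.88, 1.4e11),
]

def route(task_difficulty):
    """Return the cheapest tier able to handle the task; fall back to
    the largest model when no tier's capability is sufficient."""
    for tier in TIERS:
        if tier.capability >= task_difficulty:
            return tier
    return TIERS[-1]
```

Routing easy requests to on-device models and hard ones to the cloud is what lets a hierarchical network spend far fewer FLOPs overall than a cloud-only deployment serving every request with the largest model.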

In a move to foster collaborative advancements in the field, TeleAI has made all relevant code and data from this research available on GitHub and Hugging Face. This open-source initiative empowers the AI community to collectively push the boundaries of large model efficiency evaluation.

For more insights, the codebase can be accessed at GitHub, the dataset is available at Hugging Face, and a leaderboard for model evaluation can be found at Hugging Face Spaces.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.