
TeleAI Launches Revolutionary Metric, “Information Capacity,” to Transform AI Model Evaluation

China Telecom’s TeleAI introduces “Information Capacity,” a groundbreaking metric to evaluate AI model efficiency, revolutionizing assessments by focusing on knowledge density over size.

Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant leap for the evaluation of artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a novel metric—Information Capacity—that shifts the focus from traditional size-based assessments of large language models (LLMs). This new approach asserts that the true “talent” of a model is determined not by its size, but by its efficiency in compressing and processing knowledge in relation to its computational cost.

Information capacity is defined as the ratio of a model’s intelligence to its inference complexity, essentially measuring the knowledge density embedded within the model. To illustrate, if a model is likened to a sponge and information equates to water, the information capacity indicates how effectively the sponge absorbs water. The findings from TeleAI demonstrate that models of varying sizes maintain a consistent information capacity, enabling a more equitable comparison of efficiency across different model series and providing accurate performance predictions within a given model series.
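The release does not publish TeleAI's exact formula, but the definition above, intelligence (measured via compression performance) divided by inference complexity, can be illustrated with a rough sketch. All function names and figures below are illustrative assumptions, not TeleAI's implementation:

```python
def compression_score(raw_bits: float, compressed_bits: float) -> float:
    # How much the model shrinks a reference corpus: a proxy for "intelligence".
    return raw_bits / compressed_bits

def information_capacity(raw_bits: float, compressed_bits: float,
                         inference_flops: float) -> float:
    # Knowledge density: compression performance per unit of inference compute.
    return compression_score(raw_bits, compressed_bits) / inference_flops

# Hypothetical example: a larger model compresses the same 8e9-bit corpus
# twice as well, but also costs twice the compute per pass, so the two
# models end up with the same information capacity.
small = information_capacity(raw_bits=8e9, compressed_bits=1e9, inference_flops=1e12)
large = information_capacity(raw_bits=8e9, compressed_bits=0.5e9, inference_flops=2e12)
```

Under these made-up numbers, `small` and `large` come out equal, mirroring the release's claim that models of different sizes in a well-trained series can exhibit a consistent information capacity.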

Guided by Professor Xuelong Li, the Chief Technology Officer and Chief Scientist at China Telecom, the TeleAI research team has utilized information capacity as a benchmark to evaluate an LLM’s capabilities. This innovative metric quantitatively assesses an LLM’s efficiency based on its compression performance relative to computational complexity. It not only highlights the intelligence density produced by a model per unit of computing resources but also aids in the optimal allocation of computational and communication resources within the AI Flow framework.

With the rising computational demands and energy consumption associated with inference workloads for large models, the need for accurate evaluation of inference efficiency has garnered increasing attention from LLM researchers. By implementing the information capacity metric, TeleAI has established a method for assessing the efficiency of large models across various architectures and sizes. Moreover, this metric can effectively guide the pre-training and deployment of models, further enhancing their utility.

This breakthrough offers a quantitative benchmark that could lead to more sustainable development practices for large models. It also facilitates the dynamic allocation of different-sized models to efficiently address tasks of varying complexities, a feature that aligns with the Device-Edge-Cloud infrastructure inherent to the AI Flow framework. As edge intelligence continues to evolve, this hierarchical network structure is set to challenge the conventional cloud-centric computing paradigm in the foreseeable future.

In a move to foster collaborative advancements in the field, TeleAI has made all relevant code and data from this research available on GitHub and Hugging Face. This open-source initiative empowers the AI community to collectively push the boundaries of large model efficiency evaluation.

For further details, the codebase is available on GitHub, the dataset on Hugging Face, and a model-evaluation leaderboard on Hugging Face Spaces.

Written By
AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.