TeleAI Launches Pioneering ‘Information Capacity’ Metric to Transform AI Model Efficiency Evaluation

China Telecom’s TeleAI unveils the groundbreaking ‘Information Capacity’ metric, revolutionizing LLM efficiency evaluation by linking intelligence density to computational cost.

Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant advancement for evaluating artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a pioneering metric known as Information Capacity. This new assessment tool promises to reshape how large language models (LLMs) are analyzed, moving beyond traditional size-based metrics to focus on a model’s efficiency in knowledge compression and processing relative to its computational cost.

Information capacity is defined as the ratio of a model’s intelligence to its inference complexity, akin to a sponge’s efficiency at absorbing water: the more water it absorbs and the faster it absorbs it, the more “intelligent” the model is deemed to be. Experimental results indicate that models of different sizes within the same series exhibit consistent information capacity. This consistency allows fair efficiency comparisons across model series and enables more accurate performance predictions within a single series.

Under the leadership of Professor Xuelong Li, CTO and Chief Scientist of China Telecom, the TeleAI research team uses information capacity to gauge an LLM’s capability. The approach builds on the strong correlation between compression and intelligence, quantifying an LLM’s efficiency as its compression performance relative to its computational complexity. The metric thus reveals the intelligence density a model delivers per unit of computational cost and aids the optimal allocation of computing and communication resources under the AI Flow framework.
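The article describes the metric as a ratio of compression performance to computational complexity but does not give an exact formula. The Python sketch below is an illustrative assumption only, not TeleAI’s implementation: it approximates compression with a cross-entropy-based bits-per-byte figure, estimates inference compute with the common rule of thumb of roughly 2 FLOPs per parameter per generated token, and takes their ratio. All function names and numbers are hypothetical.

```python
# Illustrative sketch only: the exact formula used by TeleAI is not given in this article.
# "Information capacity" is approximated here as compression performance divided by an
# estimate of inference compute, following the ratio described above.

import math

def bits_per_byte(cross_entropy_nats_per_token: float, bytes_per_token: float) -> float:
    """Convert a model's per-token cross-entropy (in nats) into bits per byte of text."""
    return cross_entropy_nats_per_token / (math.log(2) * bytes_per_token)

def inference_flops_per_token(n_params: float) -> float:
    """Rule-of-thumb estimate: roughly 2 FLOPs per parameter per generated token."""
    return 2.0 * n_params

def information_capacity(cross_entropy: float, bytes_per_token: float,
                         n_params: float, flops_scale: float = 1e12) -> float:
    """Hypothetical ratio: compression ability per unit of (scaled) inference compute."""
    compression = 1.0 / bits_per_byte(cross_entropy, bytes_per_token)  # higher = better compression
    cost = inference_flops_per_token(n_params) / flops_scale           # tera-FLOPs per token
    return compression / cost

# Example with made-up numbers: a 7B-parameter model, 0.75 nats/token loss, ~4 bytes/token.
print(information_capacity(cross_entropy=0.75, bytes_per_token=4.0, n_params=7e9))
```

Under this reading, two models can be compared fairly even when their sizes differ: whichever achieves better compression per unit of inference compute has the higher information capacity.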

As inference workloads for large models grow, consuming ever more computational resources and energy, accurate evaluation of inference efficiency has become a pressing concern for LLM researchers. With information capacity, TeleAI has established a means to evaluate the efficiency of large models across different architectures and sizes. The metric can also inform model pre-training and deployment strategies.

The research not only delivers a quantitative benchmark for more environmentally sustainable development of large models but also enables dynamic routing among models of varying sizes, so that tasks of differing complexity are handled efficiently. This adaptability is particularly relevant to the Device-Edge-Cloud infrastructure within the AI Flow framework, which is expected to transform the current cloud-centric computing paradigm as edge intelligence rapidly evolves.
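As a purely illustrative sketch, not the AI Flow framework itself, the snippet below shows what tiered routing of this kind might look like: a hypothetical complexity score decides whether a query is served by a small on-device model, a mid-sized edge model, or a large cloud model. All names, thresholds, and the complexity heuristic are assumptions.

```python
# Minimal illustration only, not TeleAI's implementation. A hypothetical per-query
# complexity score routes requests to device, edge, or cloud models of increasing size.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tier:
    name: str
    max_complexity: float       # route here while the score stays at or below this bound
    run: Callable[[str], str]   # stand-in for calling the model at this tier

def route(query: str, complexity: Callable[[str], float], tiers: list[Tier]) -> str:
    score = complexity(query)
    for tier in tiers:
        if score <= tier.max_complexity:
            return tier.run(query)
    return tiers[-1].run(query)  # fall back to the largest (cloud) model

# Toy complexity heuristic and stub models, purely for illustration.
tiers = [
    Tier("device-small", 0.3, lambda q: f"[device] {q}"),
    Tier("edge-medium", 0.7, lambda q: f"[edge] {q}"),
    Tier("cloud-large", 1.0, lambda q: f"[cloud] {q}"),
]
print(route("What is 2 + 2?", lambda q: min(len(q) / 200, 1.0), tiers))
```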

To promote transparency and community collaboration, all relevant code and data from this research have been made available on GitHub and Hugging Face, including the codebase, the evaluation dataset, and a public leaderboard. This open-source release allows the AI community to collectively advance the standardization of large-model efficiency evaluation.

As the landscape of artificial intelligence continues to evolve at a rapid pace, the introduction of information capacity could redefine performance benchmarks for LLMs and influence future research directions. The implications of this advancement could extend well beyond academic circles, impacting various sectors reliant on AI technologies, particularly as efficiency and resource management become increasingly critical in the field.

