
TeleAI Launches Pioneering ‘Information Capacity’ Metric to Transform AI Model Efficiency Evaluation

China Telecom’s TeleAI unveils the groundbreaking ‘Information Capacity’ metric, revolutionizing LLM efficiency evaluation by linking intelligence density to computational cost.

Beijing, Dec. 19, 2025 (GLOBE NEWSWIRE) — In a significant advancement for evaluating artificial intelligence, the Institute of Artificial Intelligence of China Telecom (TeleAI) has unveiled a pioneering metric known as Information Capacity. This new assessment tool promises to reshape how large language models (LLMs) are analyzed, moving beyond traditional size-based metrics to focus on a model’s efficiency in knowledge compression and processing relative to its computational cost.

Information capacity is defined as the ratio of a model's intelligence to its inference complexity, much as a sponge's quality can be judged by how much water it absorbs and how quickly it absorbs it: the more a model compresses, and the cheaper the computation, the more "intelligent" it is deemed. Experimental results indicate that models of differing sizes within a single series exhibit consistent information capacity. This consistency allows fair efficiency comparisons across model series and enables more accurate performance predictions within a single series.

Under the leadership of Professor Xuelong Li, CTO and Chief Scientist of China Telecom, the TeleAI research team uses information capacity to gauge an LLM's capability. The approach builds on the strong correlation between compression and intelligence, quantifying an LLM's efficiency as its compression performance relative to its computational complexity. The metric thus captures the intelligence density a model generates per unit of computational cost and aids the optimal allocation of computing and communication resources under the AI Flow framework.
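The article does not give TeleAI's exact formula, but the idea of "compression performance relative to computational complexity" can be sketched as a toy ratio: the bits a model saves when encoding text (versus a naive encoding) divided by the compute spent on inference. The function names, the 4-bits-per-token baseline, and the FLOP count below are illustrative assumptions, not the paper's definition.

```python
import math

def compression_bits(token_probs):
    """Bits needed to encode a text under the model's next-token
    probabilities (its negative log2-likelihood). Better models
    assign higher probabilities, so this number is smaller."""
    return -sum(math.log2(p) for p in token_probs)

def information_capacity(token_probs, raw_bits, inference_flops):
    """Illustrative 'information capacity': compression gain per
    unit of compute. Higher means more intelligence density per
    unit of computational cost (assumed formula, not TeleAI's)."""
    bits_saved = raw_bits - compression_bits(token_probs)
    return bits_saved / inference_flops

# Toy example: a model assigns these probabilities to 4 tokens that
# a naive code would store at 4 bits each (16 bits total).
probs = [0.5, 0.25, 0.5, 0.125]          # model needs 1+2+1+3 = 7 bits
capacity = information_capacity(probs, raw_bits=16, inference_flops=1e9)
```

Under this reading, two models of different sizes in the same series would score similarly: the larger one compresses better but also spends proportionally more FLOPs per token.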

As the inference workloads for large models increase, consuming more computational resources and energy, the need for accurate evaluations of inference efficiency has gained momentum among LLM researchers. With the introduction of information capacity, TeleAI has established a means to evaluate the efficiency of large models across different architectures and sizes. Additionally, this metric can effectively inform model pre-training and deployment strategies.

This research not only delivers a quantitative benchmark for the more environmentally sustainable development of large models but also facilitates the dynamic routing of models of varying sizes for efficiently handling tasks with different complexities. This adaptability is particularly relevant to the Device-Edge-Cloud infrastructure within the AI Flow framework, which is anticipated to transform the current cloud-centric computing paradigm as edge intelligence rapidly evolves.

To promote transparency and community collaboration, all relevant code and data from this research have been released on GitHub and Hugging Face, where the codebase, dataset, and leaderboard are publicly available. This open-source initiative empowers the AI community to collectively advance the standardization of large-model efficiency evaluation.

As the landscape of artificial intelligence continues to evolve at a rapid pace, the introduction of information capacity could redefine performance benchmarks for LLMs and influence future research directions. The implications of this advancement could extend well beyond academic circles, impacting various sectors reliant on AI technologies, particularly as efficiency and resource management become increasingly critical in the field.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

