Multiverse Computing Launches HyperNova 60B 2602, 50% Compressed OpenAI Model on Hugging Face

Multiverse Computing launches the HyperNova 60B 2602, a 50% compressed OpenAI model, enhancing AI capabilities while cutting resource demands by nearly half.

Multiverse Computing has unveiled the HyperNova 60B 2602, a 50% compressed version of OpenAI’s gpt-oss-120B, as part of its strategy to provide developers with hyper-efficient, high-performance models at no cost. The announcement was made on February 24, 2026, in Donostia, Spain, and the new model is now available for free on Hugging Face. This release follows the initial HyperNova 60B debut in January, and it boasts enhancements in tool calling and agentic coding capabilities, underscoring Multiverse’s commitment to democratizing access to advanced AI technologies.

As the demands on infrastructure grow, developers face increasing limitations in deploying large language models (LLMs) effectively. Multiverse aims to mitigate these challenges by creating efficient models that maintain advanced reasoning abilities while significantly reducing model size and resource requirements. The HyperNova series exemplifies this philosophy, delivering powerful AI tools without the typical trade-offs between performance and accessibility.

At the core of Multiverse’s approach is its proprietary technology, CompactifAI, which employs quantum-inspired mathematics to optimize neural networks. This algorithm has the potential to reduce model sizes by up to 95% while keeping precision loss within a narrow 2-3% margin. This is a marked improvement over conventional compression methods, which often see accuracy losses of 20-30%. As a result, developers can utilize sophisticated models that demand significantly less computational power, memory, and energy.

Enrique Lizaso Olmos, CEO of Multiverse Computing, emphasized the iterative nature of model compression, stating, “The launch of HyperNova 60B 2602 demonstrates compression as an iterative process of improvement, not a one-time optimization.” He highlighted that each new generation of compressed models expands the boundaries of what is achievable with efficient AI. By continually refining their offerings and making them openly accessible, Multiverse empowers developers to explore and deploy AI solutions without incurring substantial infrastructure costs.

The latest model, HyperNova 60B 2602, arrives in response to positive feedback from users of its predecessor and shows substantial improvements across critical benchmarks, particularly in tool calling and agentic workflows. Key performance gains include a fivefold increase in agentic tool use (Tau2-Bench), a twofold improvement in agentic coding and terminal use (Terminal Bench Hard), and a 1.5x improvement in function calling (BFCL v4).

The new model maintains tool-calling capabilities nearly equivalent to those of the larger OpenAI gpt-oss-120B, while cutting its size from 61GB to 32GB. This advancement validates compression technology for production-level AI applications and makes the model easier to deploy across various sectors.
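The reported file sizes make the "50% compressed" headline easy to sanity-check. A minimal sketch, using only the figures quoted in the article (and assuming the sizes are directly comparable on-disk footprints):

```python
# Sizes as reported in the article
original_gb = 61    # OpenAI gpt-oss-120B
compressed_gb = 32  # HyperNova 60B 2602

reduction = 1 - compressed_gb / original_gb
print(f"Size reduction: {reduction:.1%}")  # -> Size reduction: 47.5%
```

A 61GB-to-32GB cut is a 47.5% reduction, consistent with the roughly 50% compression the company advertises.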

With the release of HyperNova 60B 2602, Multiverse continues to broaden access to production-ready AI models suited for real-world applications in enterprise, research, and public sectors. The company plans to expand its portfolio by introducing more open-source models and updates throughout the year, catering to a variety of use cases from enterprise-level systems to applications on edge devices.

Multiverse Computing is strategically positioned to offer sovereign solutions across the AI landscape, and its open-source approach is designed to assist a burgeoning global community of developers and IT professionals evaluating AI technologies for commercial and internal use. This open access allows organizations to assess performance, security, and operational suitability prior to large-scale deployment, facilitating a smoother integration process and enhancing organizational control.

For those interested, all of Multiverse’s models, including HyperNova 60B 2602, can be accessed on Hugging Face at https://huggingface.co/MultiverseComputingCAI. Accompanying each release are technical documentation, benchmarks, and integration guides available on the same platform. To learn more about the company’s innovations in compressed AI models, visit multiversecomputing.com.

About Multiverse Computing

Multiverse Computing is a pioneer in quantum-inspired AI model compression, leveraging expertise in quantum software to create its innovative CompactifAI technology. The company, headquartered in Donostia, Spain, has a global presence with offices in the United States, Canada, and across Europe, serving over 100 clients, including major corporations such as Iberdrola, Bosch, and the Bank of Canada.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.