
Critical Hydra Flaw Exposes Hugging Face Models to Remote Code Execution Risks

Critical security flaws in Nvidia, Salesforce, and Apple’s AI libraries expose Hugging Face models to remote code execution risks, threatening open-source integrity.

Critical security vulnerabilities have been identified in several widely used open-source Python AI and machine learning libraries, which power models on the popular platform Hugging Face. These flaws expose the models to remote code execution (RCE) through poisoned metadata, raising concerns about the integrity of shared AI tools within the open-source ecosystem.

The affected libraries include NeMo from Nvidia, Uni2TS from Salesforce, and FlexTok, a collaboration between Apple and EPFL’s Visual Intelligence and Learning Lab. All three rely on Hydra, an open-source configuration management tool maintained by Meta. The crux of the issue is Hydra’s hydra.utils.instantiate() function, which will invoke any callable named in configuration metadata, not just class constructors.
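As a rough illustration of that behavior, the sketch below uses Hydra’s standard _target_ convention to show instantiate() resolving whatever dotted path the configuration names. It assumes a recent Hydra release; the config values are hypothetical and are not drawn from the affected libraries.

# Minimal sketch of Hydra's instantiate() behavior, assuming the
# standard `_target_` convention; config values are illustrative only.
from hydra.utils import instantiate

# In normal use, _target_ names a class and the remaining keys become
# constructor arguments.
cfg = {"_target_": "datetime.date", "year": 2025, "month": 1, "day": 1}
obj = instantiate(cfg)
print(obj)  # 2025-01-01

# But _target_ can be any importable dotted path, so nothing restricts
# it to benign constructors.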

Malicious actors could exploit this by publishing altered models that carry harmful metadata. When such a model is loaded, the poisoned metadata can invoke functions such as eval() or os.system(), enabling arbitrary code execution. The vulnerabilities were discovered by Unit 42, Palo Alto Networks’ threat research team, which responsibly disclosed them to the maintainers of the affected libraries. Fixes, advisories, and Common Vulnerabilities and Exposures (CVE) identifiers have since been issued, and no exploitation has been confirmed in the wild.
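Continuing the hypothetical sketch above, poisoned metadata would only need to point _target_ at a dangerous callable. The payload below is a harmless placeholder, not an observed exploit.

# Hypothetical poisoned metadata: _target_ names os.system and _args_
# supplies its positional argument (Hydra's documented convention for
# positional parameters). Loading code that passes attacker-controlled
# config into instantiate() would execute the command.
from hydra.utils import instantiate

poisoned_cfg = {
    "_target_": "os.system",
    "_args_": ["echo 'arbitrary command executed'"],  # harmless placeholder payload
}
# instantiate(poisoned_cfg)  # uncommenting this would run the shell command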

According to Curtis Carmony, a malware research engineer at Unit 42, “Attackers would just need to create a modification of an existing popular model, with either a real or claimed benefit, and then add malicious metadata.” He cautioned that while formats like safetensors may seem secure, “there is a very large attack surface in the code that consumes them.”

This discovery highlights a broader risk within the open-source AI supply chain. Models on Hugging Face rely on over 100 Python libraries, nearly half of which incorporate Hydra, creating systemic vulnerabilities across the ecosystem. Although Meta has updated Hydra’s documentation to caution against RCE risks, a recommended block-list mechanism to mitigate these vulnerabilities has not yet been developed, further complicating efforts to secure shared open-source AI infrastructure.

The implications of these vulnerabilities extend beyond individual libraries. As open-source AI becomes increasingly integral to various applications, the need for robust security measures is paramount. The interconnected nature of these libraries means that a flaw in one can cascade, affecting numerous models and applications reliant on them.

Industry experts emphasize the importance of vigilance and proactive measures among developers and users of these libraries. Ensuring that models are sourced from trusted repositories and maintaining an awareness of ongoing security advisories can help mitigate potential risks associated with compromised metadata.

Looking forward, as the demand for AI solutions continues to grow, so too will the scrutiny of the libraries that support them. The open-source community must grapple with the challenges of maintaining security while fostering innovation. As more organizations adopt AI technologies, addressing vulnerabilities such as those revealed in this incident will be crucial for safeguarding the integrity of the AI supply chain.
