Critical Hydra Flaw Exposes Hugging Face Models to Remote Code Execution Risks

Critical security flaws in Nvidia, Salesforce, and Apple’s AI libraries expose Hugging Face models to remote code execution risks, threatening open-source integrity.

Critical security vulnerabilities have been identified in several widely used open-source Python AI and machine learning libraries that power models hosted on the popular Hugging Face platform. These flaws expose those models to remote code execution (RCE) through poisoned metadata, raising concerns about the integrity of shared AI tools across the open-source ecosystem.

The affected libraries include NeMo from Nvidia, Uni2TS from Salesforce, and FlexTok, a collaborative effort between Apple and EPFL’s Visual Intelligence and Learning Lab. All three libraries utilize Hydra, an open-source configuration management tool maintained by Meta. The crux of the issue lies within Hydra’s hydra.utils.instantiate() function, which can execute any callable specified in configuration metadata, not limited to class constructors.
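The underlying mechanism can be illustrated with a minimal sketch, assuming only a standard Hydra installation; this is not the exploit reported by Unit 42, and the command shown is a harmless placeholder. A configuration whose _target_ field names an arbitrary importable callable, such as os.system, is executed the moment it is instantiated:

```python
# Minimal sketch of the issue described above (assumes Hydra is installed).
# A config's _target_ may name any importable callable, not just a class
# constructor; instantiate() will import it and call it with the supplied
# arguments. Real attacks would hide such metadata in a model's config files.
from hydra.utils import instantiate

poisoned_config = {
    "_target_": "os.system",          # attacker-chosen callable
    "_args_": ["echo compromised"],   # arguments passed straight through
}

# Loading code that blindly instantiates model metadata would run the
# attacker-chosen command at this point.
instantiate(poisoned_config)
```

The same pattern applies whether the metadata arrives as a YAML file, a plain dictionary, or an OmegaConf object; the danger lies in instantiating configuration values that an attacker controls.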

Malicious actors could exploit this vulnerability by publishing altered models containing harmful metadata. When these modified models are loaded, the poisoned metadata can trigger functions such as eval() or os.system(), facilitating arbitrary code execution. The vulnerabilities were discovered by the threat research team Unit 42 at Palo Alto Networks, which responsibly disclosed the findings to the maintainers of the affected libraries. Since then, fixes, advisories, and Common Vulnerabilities and Exposures (CVE) identifiers have been issued, although no confirmed exploitation activities have been reported in the wild.

According to Curtis Carmony, a malware research engineer at Unit 42, “Attackers would just need to create a modification of an existing popular model, with either a real or claimed benefit, and then add malicious metadata.” He cautioned that while formats like safetensors may seem secure, “there is a very large attack surface in the code that consumes them.”

This discovery highlights a broader risk within the open-source AI supply chain. Models on Hugging Face rely on over 100 Python libraries, nearly half of which incorporate Hydra, creating systemic vulnerabilities across the ecosystem. Although Meta has updated Hydra’s documentation to caution against RCE risks, a recommended block-list mechanism to mitigate these vulnerabilities has not yet been developed, further complicating efforts to secure shared open-source AI infrastructure.

The implications of these vulnerabilities extend beyond individual libraries. As open-source AI becomes increasingly integral to various applications, the need for robust security measures is paramount. The interconnected nature of these libraries means that a flaw in one can cascade, affecting numerous models and applications reliant on them.

Industry experts emphasize the importance of vigilance and proactive measures among developers and users of these libraries. Ensuring that models are sourced from trusted repositories and maintaining an awareness of ongoing security advisories can help mitigate potential risks associated with compromised metadata.

Looking forward, as the demand for AI solutions continues to grow, so too will the scrutiny of the libraries that support them. The open-source community must grapple with the challenges of maintaining security while fostering innovation. As more organizations adopt AI technologies, addressing vulnerabilities such as those revealed in this incident will be crucial for safeguarding the integrity of the AI supply chain.
