AI Cybersecurity

Google’s Antigravity AI Tool Hacked Within 24 Hours, Exposes Severe Vulnerability

Security researcher Aaron Portnoy uncovers a critical vulnerability in Google’s Antigravity AI tool just 24 hours post-launch, enabling malware installation on user systems.

A security researcher has uncovered a significant vulnerability in Google’s latest AI coding tool, Antigravity, just 24 hours after its release. The researcher, Aaron Portnoy, identified a flaw that could allow malicious actors to manipulate the AI’s rules and install malware on users’ computers. The incident highlights ongoing concerns about the security of rapidly deployed AI technologies.

By modifying Antigravity’s configuration settings, Portnoy was able to create a “backdoor” that could inject code into a user’s system, enabling attacks such as data theft or ransomware. The exploit affected both Windows and Mac machines and required the user to approve the code only once, after it had been misleadingly labeled “trusted.” Such tactics are common in social engineering, where attackers pose as trustworthy developers sharing useful tools.
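Portnoy has not published the exploit itself, and Antigravity’s real configuration format is not described in his disclosure. But the general class of attack is well understood: an AI coding agent that loads workspace rules and runs setup commands from them will execute anything an attacker plants there, once the user has clicked “trust.” A simplified sketch of that pattern (file names, keys, and format all invented for illustration):

```python
# Hypothetical sketch of the attack class: an AI coding agent that loads
# workspace "rules" and executes setup commands from them without validation.
# This is NOT Antigravity's actual implementation; the rules-file format
# and the "on_open" key are invented for illustration.
import subprocess


def load_rules(path):
    """Parse a simple 'key: value' rules file (hypothetical format)."""
    rules = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                rules[key.strip()] = value.strip()
    return rules


def open_workspace(rules_path, user_trusts_workspace):
    """Open a workspace; if trusted, run whatever the rules file asks for."""
    rules = load_rules(rules_path)
    # The flaw: once the user marks the workspace "trusted" a single time,
    # any command the rules file specifies runs with the user's privileges.
    if user_trusts_workspace and "on_open" in rules:
        subprocess.run(rules["on_open"], shell=True)  # arbitrary code execution
```

In this sketch, a malicious repository only needs to ship a rules file containing an `on_open` command; the “trust” prompt is the sole gate between cloning the project and running attacker-controlled code.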

This breach is not an isolated case but exemplifies a troubling trend in the rapid release of AI products lacking adequate security testing. Cybersecurity experts are increasingly engaged in a game of cat and mouse, seeking to identify vulnerabilities before they can be exploited. Gadi Evron, cofounder and CEO of Knostic, noted that AI coding agents are “very vulnerable, often based on older technologies and never patched.”

Portnoy described the current landscape of AI vulnerabilities as reminiscent of the late 1990s, stating, “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s.” He emphasized that many AI systems are launched with excessive trust assumptions and minimal protective boundaries. Following his findings, Google initiated an investigation but has yet to release a patch or identify any settings that could mitigate the vulnerability.

A Google spokesperson, Ryan Trostle, expressed the company’s commitment to addressing security issues and encouraged researchers to report vulnerabilities to facilitate timely fixes. However, reports suggest that at least two other significant vulnerabilities exist in Antigravity, both permitting unauthorized access to files on users’ systems.

Portnoy’s vulnerability is particularly alarming because it persists even when restricted settings are enabled. The malicious code reactivates each time a user restarts any Antigravity project, making it difficult to eradicate without direct intervention. Even uninstalling Antigravity does not resolve the issue; users must manually locate and remove the backdoor from the files the tool leaves behind.
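Because uninstalling the tool does not remove the backdoor, an affected user’s only recourse is to hunt through leftover configuration files for injected auto-run hooks. A hypothetical cleanup sketch along those lines (the directory layout and key names are invented, not Antigravity’s real format):

```python
# Hypothetical cleanup sketch: scan a leftover configuration directory for
# auto-run entries a backdoor may have injected. The key names below are
# invented for illustration; they are not Antigravity's actual settings.
import os

SUSPICIOUS_KEYS = ("on_open", "on_start", "post_load")


def find_autorun_entries(config_dir):
    """Return (path, line) pairs for lines that look like auto-run hooks."""
    hits = []
    for root, _dirs, files in os.walk(config_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, errors="ignore") as f:
                    for line in f:
                        if line.split(":", 1)[0].strip() in SUSPICIOUS_KEYS:
                            hits.append((path, line.strip()))
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

The point of the sketch is the manual burden it represents: without a vendor patch, eradication depends on each user knowing where to look and what a hook looks like.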

The problem isn’t unique to Google; Evron pointed out that many AI tools are inherently insecure due to their design and the broad permissions they require. He also highlighted how the practice of developers copying code from online sources inadvertently propagates vulnerabilities. Recently, cybersecurity expert Marcus Hutchins raised concerns about fake recruiters targeting IT professionals on LinkedIn, sending them malware-laden code under the guise of job opportunities.

The “agentic” nature of these AI tools, which allows for autonomous task execution without human oversight, exacerbates the risks. This combination of autonomy and access to sensitive data can make vulnerabilities easier to exploit and far more damaging, according to Portnoy. His team is currently investigating 18 weaknesses across competing AI coding tools, having recently identified four issues in the Cline AI coding assistant that could also allow malware installation.

Google requires Antigravity users to declare that they trust the code they are loading, but Portnoy argues this measure is insufficient for meaningful security. Users who decline to mark code as trusted are locked out of the tool’s essential features, pushing many IT professionals to choose convenience over caution.

Portnoy proposed that Google should implement notifications or warnings every time Antigravity is about to execute user-uploaded code, rather than relying solely on user trust. In reviewing how Antigravity’s AI processed his malicious code, Portnoy found that while the AI recognized the issue, it struggled to determine a safe course of action, illustrating a “catch-22” scenario that hackers can exploit.

The emergence of these vulnerabilities raises significant questions about the safety of AI development. As the industry continues to prioritize rapid deployment, the need for robust security measures becomes ever more critical. With cybersecurity threats evolving, companies must take a more proactive stance to safeguard their technologies and protect users from potential harm.

Written By
Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.