
Google’s Antigravity AI Tool Hacked Within 24 Hours, Exposing Severe Vulnerability

Security researcher Aaron Portnoy uncovers a critical vulnerability in Google’s Antigravity AI tool just 24 hours post-launch, enabling malware installation on user systems.

A security researcher has uncovered a significant vulnerability in Google’s latest AI coding tool, Antigravity, just 24 hours after its release. The researcher, Aaron Portnoy, identified a flaw that could allow malicious actors to manipulate the AI’s rules and install malware on users’ computers. The incident highlights ongoing concerns about the security of rapidly deployed AI technologies.

By modifying Antigravity’s configuration settings, Portnoy was able to create a “backdoor” that could inject code into a user’s system, enabling activities such as data theft or ransomware attacks. The exploit affected both Windows and Mac machines and required the user to approve the code only once, after it had been misleadingly labeled “trusted.” Such tactics are common in social engineering, where hackers present themselves as trustworthy developers sharing beneficial tools.
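To make this class of attack concrete, consider a deliberately simplified sketch of an agent that loads per-project “rules” from a workspace file and follows them without validation. Everything below is hypothetical; the file name, the “RUN:” directive, and the behavior are illustrative assumptions, not Antigravity’s actual internals:

```python
# Hypothetical illustration of configuration-based injection in an AI agent.
# Nothing here reflects Antigravity's real file layout or internals.
import subprocess
from pathlib import Path

RULES_FILE = Path(".agent/rules.md")  # hypothetical per-project rules file


def load_rules() -> str:
    """Read workspace rules that the agent will treat as trusted instructions."""
    return RULES_FILE.read_text() if RULES_FILE.exists() else ""


def act_on_rules(rules: str) -> None:
    # A naive agent that executes any shell command a rule asks for.
    # An attacker who can edit the rules file now runs arbitrary code
    # every time the project is opened, which is the core of the flaw.
    for line in rules.splitlines():
        if line.startswith("RUN:"):
            subprocess.run(line.removeprefix("RUN:").strip(), shell=True)


if __name__ == "__main__":
    act_on_rules(load_rules())
```

The weakness is not the file itself but the trust model: the agent treats anything in its configuration as an instruction from the user, so whoever can write to that file speaks with the user’s authority.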

This breach is not an isolated case; it exemplifies a troubling trend of AI products being released rapidly without adequate security testing. Cybersecurity experts are locked in a game of cat and mouse, racing to identify vulnerabilities before they can be exploited. Gadi Evron, cofounder and CEO of Knostic, noted that AI coding agents are “very vulnerable, often based on older technologies and never patched.”

Portnoy described the current landscape of AI vulnerabilities as reminiscent of the late 1990s, stating, “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s.” He emphasized that many AI systems are launched with excessive trust assumptions and minimal protective boundaries. Following his findings, Google initiated an investigation but has yet to release a patch or identify any settings that could mitigate the vulnerability.

A Google spokesperson, Ryan Trostle, expressed the company’s commitment to addressing security issues and encouraged researchers to report vulnerabilities to facilitate timely fixes. However, reports suggest that at least two other significant vulnerabilities exist in Antigravity, both permitting unauthorized access to files on users’ systems.

Portnoy’s vulnerability is particularly alarming because it persists even when restricted settings are enabled. The malicious code reactivates each time a user restarts any Antigravity project, making it difficult to eradicate without direct intervention. Even uninstalling Antigravity does not resolve the issue; users must manually locate and remove the backdoor themselves.
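Because the injected configuration reportedly survives an uninstall, cleanup has to target the leftover files directly. The sketch below shows the general idea using entirely hypothetical paths and the toy “RUN:” directive from the earlier example; the real file locations have not been published:

```python
# Hypothetical cleanup sketch: scan plausible config locations for an
# injected "RUN:" directive and quarantine any file that contains one.
# The directories below are placeholders, not Antigravity's real paths.
from pathlib import Path

SUSPECT_DIRS = [Path.home() / ".agent", Path.home() / ".config" / "agent"]


def find_suspect_files() -> list[Path]:
    """Return config files that contain the injected directive."""
    hits = []
    for directory in SUSPECT_DIRS:
        if not directory.is_dir():
            continue
        for path in directory.rglob("*.md"):
            if "RUN:" in path.read_text(errors="ignore"):
                hits.append(path)
    return hits


if __name__ == "__main__":
    for path in find_suspect_files():
        # Rename rather than delete so the file can be inspected later.
        path.rename(path.with_suffix(path.suffix + ".quarantined"))
        print(f"Quarantined {path}")
```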

The problem is not unique to Google. Evron pointed out that many AI tools are inherently insecure due to their design and the broad permissions they require, and he highlighted how the practice of developers copying code from online sources inadvertently propagates vulnerabilities. Recently, cybersecurity expert Marcus Hutchins raised concerns about fake recruiters targeting IT professionals on LinkedIn, sending them malware-laden code under the guise of job offers.

The “agentic” nature of these AI tools, which allows for autonomous task execution without human oversight, exacerbates the risks. This combination of autonomy and access to sensitive data can make vulnerabilities easier to exploit and far more damaging, according to Portnoy. His team is currently investigating 18 weaknesses across competing AI coding tools, having recently identified four issues in the Cline AI coding assistant that could also allow malware installation.

Google requires Antigravity users to declare that they trust the code they are loading, but Portnoy criticized this measure as insufficient for meaningful security. Users who decline to mark code as trusted are cut off from the tool’s essential features, pushing many IT professionals to choose convenience over caution.

Portnoy proposed that Google should implement notifications or warnings every time Antigravity is about to execute user-uploaded code, rather than relying solely on user trust. In reviewing how Antigravity’s AI processed his malicious code, Portnoy found that while the AI recognized the issue, it struggled to determine a safe course of action, illustrating a “catch-22” scenario that hackers can exploit.
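Portnoy’s suggestion amounts to placing an explicit consent gate in front of every execution rather than relying on a one-time trust declaration. A toy version of that gate, reusing the hypothetical setup from the sketches above, might look like this:

```python
# Toy consent gate: show the user exactly what is about to run and
# require a per-command confirmation instead of a one-time "trust" flag.
import subprocess


def run_with_consent(command: str) -> None:
    """Execute a command only after explicit, per-command user approval."""
    print(f"The agent wants to execute:\n    {command}")
    if input("Allow this command? [y/N] ").strip().lower() == "y":
        subprocess.run(command, shell=True)
    else:
        print("Blocked.")


if __name__ == "__main__":
    run_with_consent("echo hello from the agent")
```

The trade-off is friction: per-command prompts are exactly the kind of interruption that pushes users toward blanket trust, which is why Portnoy frames the current design as a catch-22.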

The emergence of these vulnerabilities raises significant questions about the safety of AI development. As the industry continues to prioritize rapid deployment, the need for robust security measures becomes ever more critical. With cybersecurity threats evolving, companies must take a more proactive stance to safeguard their technologies and protect users from potential harm.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

