Connect with us

Hi, what are you looking for?

AI Cybersecurity

Israeli Researchers Uncover Critical AI Browser Flaw Affecting Major Tools Like Gemini and Copilot

Israeli researchers reveal critical vulnerabilities in AI browsers like Google’s Gemini and Microsoft’s Copilot, enabling cybercriminals to exploit legitimate websites for attacks.

Israeli cybersecurity researchers have identified a critical vulnerability in popular AI-powered browsers that allows any legitimate website to be transformed into a potential hacking tool, without the need for attackers to breach the sites themselves. The discovery was made by the Cato CTRL research group of Cato Networks and involves widely used AI tools, including Google’s Gemini, Microsoft’s Copilot, and Perplexity’s Comet.

The research outlined a series of primary attack scenarios in which cybercriminals can manipulate AI assistants to display fake phone numbers and links when users request customer service contact information for various organizations. These scenarios could lead to the unauthorized extraction of sensitive user data, the theft of login credentials, dissemination of false information, and the creation of misleading narratives that could influence users’ decisions without their knowledge.

The technique leveraged by attackers is termed HashJack. This method requires the addition of malicious instructions to a legitimate website address, which are then distributed to potential victims. When a user accesses the modified website, the malicious prompts interact with smart AI assistants such as Gemini and Copilot, triggering the attack scenarios.

According to Cato Networks, traditional defense systems are unable to detect these attacks because they exploit prompts embedded in the website address after the hashtag symbol (#), a process that operates outside the browser’s visible work. This method capitalizes on users’ trust in legitimate websites, utilizing link addresses that appear credible, making it difficult for users to suspect any malicious intent, as opposed to traditional phishing sites that often raise red flags.

The ability of attackers to transform even legitimate sites into tools for malicious activities illustrates a new subcategory of cyber threats in the AI landscape. The implications of this vulnerability are significant, as it suggests that many trusted websites could unwittingly become vessels for cybercrime, all without the need for an actual breach of those sites.

Cato Networks has stated that they informed the companies whose tools were found to contain these vulnerabilities well in advance, allowing them to address the issues before user exposure. This proactive approach is often referred to in the cybersecurity field as “white hat hacking.” According to their data, a fix was applied to Microsoft’s Copilot for the Edge browser on October 27, 2025. In the Comet browser, the issue was reported to have been resolved on November 18, 2025. However, as of November 25, 2025, no resolution had been implemented for Gemini on Chrome.

The discovery highlights the ongoing challenges faced by both users and technology companies in maintaining cybersecurity in an increasingly complex digital landscape. As reliance on AI tools continues to grow, the need for robust protective measures becomes even more crucial, with the potential for new threats emerging alongside innovations. Stakeholders in the industry are expected to closely monitor these developments, as this vulnerability serves as a reminder of the inherent risks associated with the integration of AI technologies into everyday browsing experiences.

Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

You May Also Like

AI Technology

Cocoon launches a decentralized AI network on TON, enabling GPU owners to profit from rented computing power while prioritizing user privacy and data security.

Top Stories

Users accessing Perplexity.in are unexpectedly redirected to Google Gemini, highlighting a critical domain oversight as Perplexity focuses solely on its global domain.

AI Generative

Icaro Lab's study reveals that poetic phrasing enables a 62% success rate in bypassing safety measures in major LLMs from OpenAI, Google, and Anthropic.

Top Stories

Microsoft stock trades at 30x earnings, backed by a 40% revenue surge in cloud services, making it a compelling buy amid AI growth prospects.

AI Technology

Google introduces Private AI Compute, leveraging AMD's Trusted Execution Environment for enhanced data privacy, ensuring secure AI processing and user data protection.

AI Research

Researchers find that 62% of AI models from firms like Google and OpenAI bypass safety measures using poetic prompts to elicit harmful content.

AI Technology

Amazon, Meta, and other tech giants are set to raise nearly $100 billion in debt to fuel AI and cloud infrastructure, reflecting a critical...

AI Generative

Google restricts free access to its Nano Banana AI image generator to two images daily amid soaring demand, signaling challenges in scaling popular tech...

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.