
Threat Actors Target Major LLMs in Extensive Reconnaissance Campaign, Exposing Security Risks

Threat actors launched a reconnaissance campaign probing over 73 major LLM endpoints, logging 80,469 sessions and revealing vulnerabilities that could lead to significant cyberattacks.

Threat actors are orchestrating a widespread reconnaissance campaign targeting large language models (LLMs), potentially paving the way for future cyberattacks on exposed AI models, according to a report from security researchers at GreyNoise. The attackers scanned for various major LLM families, including models compatible with the OpenAI and Google Gemini APIs, searching for “misconfigured proxy servers that might leak access to commercial APIs.” GreyNoise’s honeypots recorded more than 80,000 enumeration requests from the threat actors.

The researchers highlighted that such extensive mapping of infrastructure suggests premeditated plans to exploit the vulnerabilities discovered. “If you’re running exposed LLM endpoints, you’re likely already on someone’s list,” they warned.

The reconnaissance effort began on December 28, when two IP addresses initiated a systematic probe of more than 73 distinct LLM endpoints. Within just 11 days, the attackers generated 80,469 sessions, employing deliberately innocuous test queries likely intended to identify responsive models without triggering security alerts.

The investigation revealed that the threat actors were targeting every prominent model family, including:

  • OpenAI (GPT-4o and variants)
  • Anthropic (Claude Sonnet, Opus, Haiku)
  • Meta (Llama 3.x)
  • DeepSeek (DeepSeek-R1)
  • Google (Gemini)
  • Mistral
  • Alibaba (Qwen)
  • xAI (Grok)
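The enumeration traffic consisted of ordinary-looking chat requests against OpenAI-compatible APIs. As a minimal sketch of what such a probe looks like in logs, the payload below uses the fingerprinting question GreyNoise observed; the payload shape is an assumed, typical request body for an OpenAI-compatible `/v1/chat/completions` endpoint, not a detail from the report:

```python
# Illustrative probe builder. The fingerprinting question is the one GreyNoise
# observed; the surrounding payload shape is an assumption based on the common
# OpenAI-compatible /v1/chat/completions request format.
FINGERPRINT_QUERY = "How many states are there in the United States?"

def build_probe(model: str) -> dict:
    """Build one innocuous test request used to check whether an
    endpoint will answer for the named model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": FINGERPRINT_QUERY}],
        "max_tokens": 16,
    }

# A scanner would POST this payload once per model name and treat any
# well-formed completion as confirmation that the model is live.
```

Iterating this single request over dozens of model names explains how two IPs could generate tens of thousands of sessions in days without tripping content-based alerts.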

The two IP addresses linked to the reconnaissance campaign are 45.88.186.70, associated with AS210558 (1337 Services GmbH), and 204.76.203.125, linked to AS51396 (Pfcloud UG). Both have histories of exploiting known vulnerabilities, including the “React2Shell” vulnerability, CVE-2025-55182, and the TP-Link Archer vulnerability, CVE-2023-1389.

Researchers concluded that the campaign reflects the actions of a professional threat actor engaging in reconnaissance activities to identify targets for cyberattacks. “The infrastructure overlap with established CVE scanning operations suggests this enumeration feeds into a larger exploitation pipeline,” they stated. “They’re building target lists.”

In a related development, a second campaign aimed at exploiting server-side request forgery (SSRF) vulnerabilities has also been identified. This attack method could compel servers to make outbound connections to attacker-controlled infrastructure. The attackers targeted honeypot infrastructure’s model pull functionality by injecting malicious registry URLs and also exploited Twilio SMS webhook integrations by manipulating MediaUrl parameters.
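One concrete defense against the malicious-registry vector is to validate model-pull URLs against an allowlist before any fetch occurs. A hedged sketch follows; the registry hostnames are placeholder assumptions, and a real deployment would substitute whatever registries it actually trusts:

```python
from urllib.parse import urlparse

# Placeholder allowlist -- replace with the registries your deployment trusts.
TRUSTED_REGISTRIES = {"registry.ollama.ai", "huggingface.co"}

def is_trusted_registry(url: str) -> bool:
    """Reject model-pull URLs that point anywhere but a trusted registry
    over HTTPS, closing off the injected-registry SSRF vector."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    return host in TRUSTED_REGISTRIES
```

The same check applies to webhook parameters such as Twilio's MediaUrl: validate the host before the server ever makes the outbound connection.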

Using ProjectDiscovery’s Out-of-band Application Security Testing (OAST) infrastructure, the attackers confirmed successful SSRF exploitation through callback validation. A single JA4H signature appeared in nearly all attacks, indicating the use of shared automation tools, likely Nuclei. A total of 62 source IPs across 27 countries were identified, but their consistent fingerprints suggest the use of VPS-based infrastructure rather than a botnet.

Researchers assessed that this second campaign was possibly orchestrated by security researchers or bug bounty hunters, but they noted that the scale and timing around Christmas suggest “grey-hat operations pushing boundaries.” The two campaigns underscore how threat actors are systematically mapping the expanding surface area of AI deployments.

To mitigate these risks, GreyNoise recommends that organizations take proactive measures to secure their LLMs. This includes locking down model pulls to accept models only from trusted registries and implementing egress filtering to prevent SSRF callbacks from reaching attacker infrastructure. Organizations should also be vigilant in detecting enumeration patterns and alerting on rapid-fire requests hitting multiple model endpoints, particularly watching for fingerprinting queries such as “How many states are there in the United States?”

Moreover, blocking OAST at DNS is advised to sever the callback channel that confirms successful exploitation. Rate-limiting suspicious ASNs is another protective measure, with AS152194, AS210558, and AS51396 appearing prominently in attack traffic. Continuous monitoring of JA4 fingerprints is also advocated to stay ahead of potential threats.
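Blocking OAST at DNS amounts to refusing to resolve the callback domains. A sketch of the matching logic a resolver policy might apply; the suffix list reflects domains commonly documented for ProjectDiscovery's interactsh service, but it is an assumption that should be verified against current interactsh documentation:

```python
# Suffixes commonly associated with ProjectDiscovery's interactsh/OAST
# service (assumed list -- verify against current documentation).
OAST_SUFFIXES = (
    "oast.fun", "oast.live", "oast.site",
    "oast.online", "oast.pro", "oast.me", "interact.sh",
)

def is_oast_domain(qname: str) -> bool:
    """Flag DNS queries for OAST callback domains so a resolver policy
    can refuse them, severing the exploitation-confirmation channel."""
    name = qname.rstrip(".").lower()
    return any(name == s or name.endswith("." + s) for s in OAST_SUFFIXES)
```

A resolver or egress filter returning NXDOMAIN for matching names denies attackers the callback that confirms a successful SSRF.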

Written By
Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.