
Anthropic Reveals AI’s Ability to Deanonymize Online Accounts with 68% Recall Rate

Anthropic and ETH Zurich reveal AI can deanonymize online accounts with a remarkable 68% recall rate, raising privacy concerns across digital platforms.

New research from scientists at Anthropic and ETH Zurich reveals that modern artificial intelligence systems may be capable of uncovering the real identities behind anonymous online accounts. Published as a preprint on arXiv, the study demonstrates that large language models (LLMs) can analyze online behavior and connect pseudonymous profiles with actual individuals, potentially at scale.

Titled “Large-scale online deanonymization with LLMs,” the research focuses on how AI systems can automate deanonymization—linking anonymous online accounts to their real-world identities. Historically, this intricate task required extensive manual investigation, with analysts sifting through posts, writing styles, and other digital clues. The findings suggest that advanced AI models can now perform much of this work automatically.

In the study, the AI analyzed public text from various online platforms, extracting identity-related signals such as personal interests, demographic hints, writing styles, and incidental details contained in posts. The AI then scoured the internet for matching profiles, assessing whether the inferred clues corresponded with known individuals.
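The paper does not publish its pipeline code, but the extraction step can be pictured roughly as follows. This is a minimal sketch, assuming a hypothetical `query_llm` helper standing in for whatever chat-completion API the researchers used; the prompt wording and signal categories are illustrative, not the authors' actual prompts.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; not the authors' code."""
    raise NotImplementedError("plug in your LLM client here")

EXTRACTION_PROMPT = """Read the posts below and list identity-related signals
(interests, demographic hints, writing-style markers, incidental details)
as a JSON array of short strings.

Posts:
{posts}
"""

def extract_signals(posts: list[str]) -> list[str]:
    """Ask the model for identity-related signals in a user's public posts."""
    prompt = EXTRACTION_PROMPT.format(posts="\n---\n".join(posts))
    return json.loads(query_llm(prompt))
```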

To evaluate their method, the researchers created multiple datasets containing known identities. One experiment involved matching users from Hacker News to their LinkedIn profiles, even after obvious identifiers like names and usernames were omitted. Another dataset focused on linking pseudonymous accounts across different Reddit communities. A third experiment divided a single user’s posting history into two separate profiles to assess whether the AI could discern that they belonged to the same person.
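The third experiment is the easiest to reproduce in spirit: take one account's posting history, split it in two, and test whether the system links the halves back together. A minimal sketch of that dataset construction, where interleaved splitting is an assumed detail the paper does not specify:

```python
def split_profile(posts: list[str]) -> tuple[list[str], list[str]]:
    """Split one user's posting history into two pseudo-profiles.

    Interleaving by index is an assumption for illustration only; the
    paper does not describe how histories were partitioned.
    """
    return posts[0::2], posts[1::2]

# A matcher that links profile_a back to profile_b scores a true positive.
profile_a, profile_b = split_profile(
    ["post one", "post two", "post three", "post four"]
)
```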

The results indicated that LLM-based systems significantly outperformed traditional deanonymization techniques, achieving up to 68% recall with approximately 90% precision. This means the AI was able to accurately identify a substantial number of accounts while maintaining a relatively low error rate, whereas conventional methods yielded nearly zero success in similar tests.
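Concretely, recall is the fraction of truly matching accounts the system finds, and precision is the fraction of its reported matches that are correct. A worked example at the reported figures, where the 1,000-account population is hypothetical and chosen only to make the arithmetic concrete:

```python
total_accounts = 1_000  # hypothetical population with known ground truth
recall = 0.68           # reported: share of true identities recovered
precision = 0.90        # reported: share of reported matches that are correct

true_positives = recall * total_accounts             # 680 accounts correctly identified
reported_matches = true_positives / precision        # ~756 matches reported in total
false_positives = reported_matches - true_positives  # ~76 wrong identifications

print(f"{true_positives:.0f} correct IDs out of {reported_matches:.0f} reported")
```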

The researchers noted that these findings underscore how AI can replicate tasks previously requiring hours of work from human investigators. An AI system can automatically extract relevant characteristics from text, search for potential matches among thousands of profiles, and evaluate which candidate is most likely to be correct.
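The final step, choosing among candidate profiles, can be framed as a scoring problem. The paper's actual matching criterion is not disclosed, so the Jaccard-style overlap heuristic below is purely illustrative:

```python
def match_score(signals: set[str], candidate: set[str]) -> float:
    """Jaccard overlap between an anonymous account's signals and a candidate's."""
    if not signals or not candidate:
        return 0.0
    return len(signals & candidate) / len(signals | candidate)

def best_candidate(signals: set[str], candidates: dict[str, set[str]],
                   threshold: float = 0.5) -> str | None:
    """Return the highest-scoring candidate profile, or None below the threshold."""
    best_name, best_score = None, 0.0
    for name, cand_signals in candidates.items():
        score = match_score(signals, cand_signals)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```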

This advancement raises significant concerns regarding online anonymity, a vital protection for many users, including journalists, whistleblowers, activists, and ordinary individuals wishing to discuss sensitive topics without revealing their identities. The study suggests that this essential layer of protection—often referred to as “practical obscurity”—may be diminishing as AI systems become increasingly adept at connecting digital clues across various platforms. If automated tools can rapidly and economically conduct this work, the barrier to identifying anonymous users could be significantly lowered.

Researchers estimate that the cost of identifying an online account using their experimental approach could be between $1 and $4 per profile, making large-scale investigations feasible at a relatively low cost. However, they caution that the study was conducted in controlled environments using public data. The findings have not yet undergone peer review, and the researchers intentionally withheld certain technical details to mitigate the risks of misuse.
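At that per-profile estimate, the economics of a bulk campaign are easy to run; the 100,000-account figure below is hypothetical:

```python
cost_low, cost_high = 1, 4  # reported: dollars per identified profile
accounts = 100_000          # hypothetical campaign size
print(f"${cost_low * accounts:,} to ${cost_high * accounts:,}")  # $100,000 to $400,000
```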

Despite these precautions, the results have ignited debate among privacy advocates and technologists. The implications suggest that individuals may need to reconsider the amount of personal information they share online, even in spaces that seem anonymous. Looking ahead, the researchers emphasize the need for further exploration of both the risks associated with AI-powered deanonymization and potential defenses against it. This could involve enhancing privacy tools, improving platform safeguards, or developing AI systems designed to anonymize sensitive data before public sharing.
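One of the proposed defenses, anonymizing text before it is shared, could itself be LLM-assisted. A minimal sketch, again using a hypothetical `query_llm` helper; the rewriting prompt is an assumption, not a tool the study describes:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM client wrapper, as in the extraction sketch above."""
    raise NotImplementedError("plug in your LLM client here")

SANITIZE_PROMPT = """Rewrite the post below so it keeps its meaning but removes
identity-related signals: locations, employers, ages, rare hobbies, and
distinctive phrasings.

Post:
{post}
"""

def sanitize(post: str) -> str:
    """Ask the model to strip identifying detail before the post is published."""
    return query_llm(SANITIZE_PROMPT.format(post=post))
```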

As artificial intelligence continues to evolve and its capabilities grow, this study highlights an escalating challenge: striking a balance between the power of AI-driven discovery and the imperative to protect personal privacy in the digital landscape.


