
AI Generative

AI Research Reveals 66% of Pseudonymous Users Can Be Unmasked with New Techniques

ETH Zurich and Anthropic reveal AI can unmask 66% of pseudonymous users online, challenging assumptions about digital privacy and anonymity.

Researchers from ETH Zurich and AI company Anthropic have published alarming findings about online pseudonymity, showing that sophisticated AI models can unmask the users behind pseudonymous accounts with little effort. The study, detailed in a not-yet-peer-reviewed paper, demonstrates that large language models (LLMs) can perform deanonymization at scale, a task that would typically require extensive, time-consuming effort by human investigators.

In their experiments, the team successfully identified two-thirds of users on popular forums such as Hacker News and Reddit solely based on their pseudonymous online interactions. “Our results show that the practical obscurity protecting pseudonymous users online no longer holds, and that threat models for online privacy need to be reconsidered,” the researchers stated.

Coauthor Simon Lermen, an AI engineer at ETH Zurich, explained that their method linked posts on Hacker News to LinkedIn profiles via references in user profiles. The team first anonymized datasets drawn from public social media sites, then tasked the LLM with matching the anonymized posts back to their original authors. “What we found is that these AI agents can do something that was previously very difficult: starting from free text, they can work their way to the full identity of a person,” Lermen told Ars Technica.
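The study's LLM-based agents are far more capable than classical stylometry, but the core idea of matching anonymized text back to an author's known writing can be illustrated with a toy bag-of-words sketch. This is purely an illustration, not the researchers' actual method; the author names and text corpora below are invented:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term frequencies: lowercase word tokens only."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(anon_post, candidates):
    """Return the candidate whose known writing is most similar to the post."""
    anon_vec = vectorize(anon_post)
    return max(candidates,
               key=lambda name: cosine(anon_vec, vectorize(candidates[name])))

# Hypothetical corpora of known public writing, one per candidate author.
candidates = {
    "alice": "I love functional programming and type systems in Haskell.",
    "bob": "My sourdough starter needs feeding; baking bread is my hobby.",
}
anon_post = "Monads and type systems make Haskell programming a joy."
print(best_match(anon_post, candidates))  # prints "alice"
```

A real attack replaces this crude word-overlap score with an LLM that reasons over topics, writing style, and biographical hints scattered across many posts, which is what makes the reported scale and accuracy possible.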

The implications of these findings for online privacy are profound. The researchers note that many internet users have operated under an implicit assumption that pseudonymity provides adequate protection, a notion now challenged by the capabilities of modern AI. Even when given only general data, the AI could identify individuals around seven percent of the time, a figure Lermen described as significant. “It’s noteworthy that AI can do this at all,” he remarked.

In specialized contexts, such as film discussions on Reddit, the efficacy of the AI in deanonymizing users was found to increase substantially. However, the researchers acknowledged several limitations in their study, including the small sample sizes necessitated by the need for verified identity links and the challenge in distinguishing the AI’s contributions from those of web search systems.

The researchers cautioned that their findings pose serious risks for online anonymity and privacy. They warned that LLMs could be utilized by governments to link pseudonymous accounts to real identities, potentially enabling surveillance of dissidents, journalists, or activists. They also noted that corporations could connect seemingly anonymous posts to customer profiles for hyper-targeted advertising, while malicious actors could exploit these AI capabilities to conduct sophisticated social engineering scams.

As the landscape of online privacy continues to evolve, the researchers call for a reevaluation of the assumptions underpinning the current understanding of internet safety. “Users, platforms, and policymakers must recognize that the privacy assumptions underlying much of today’s internet no longer hold,” they conclude. The advent of AI technology brings both innovative solutions and significant challenges, underscoring the need for enhanced safety measures to protect users in a digital age increasingly characterized by a lack of anonymity.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.