Researchers from ETH Zurich and AI company Anthropic have published alarming findings about online pseudonymity, revealing that sophisticated AI models can easily unmask users behind pseudonymous accounts. The study, described in a paper that has not yet been peer-reviewed, demonstrates that large language models (LLMs) can perform deanonymization at scale, a task that would otherwise require extensive, time-consuming effort from human investigators.
In their experiments, the team successfully identified two-thirds of users on popular forums such as Hacker News and Reddit solely based on their pseudonymous online interactions. “Our results show that the practical obscurity protecting pseudonymous users online no longer holds, and that threat models for online privacy need to be reconsidered,” the researchers stated.
Coauthor Simon Lermen, an AI engineer at ETH Zurich, explained that the method used in their research linked posts on Hacker News to LinkedIn profiles through references in user profiles. They began by anonymizing datasets from public social media sites before training the LLM to match the anonymized posts with their original authors. “What we found is that these AI agents can do something that was previously very difficult: starting from free text, they can work their way to the full identity of a person,” Lermen told Ars Technica.
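The evaluation Lermen describes can be pictured as a matching task: given a set of anonymized posts and a pool of candidate profiles, a matcher proposes the most likely author for each post, and success is the fraction of posts whose true author is recovered. The sketch below is hypothetical and not from the paper; in the study an LLM agent performs the matching, while here a toy word-overlap scorer stands in so the example is runnable. All function names and the toy data are illustrative assumptions.

```python
def overlap_score(post_text: str, profile_text: str) -> int:
    """Count words shared between a post and a candidate profile (toy stand-in
    for the LLM's judgment in the actual study)."""
    return len(set(post_text.lower().split()) & set(profile_text.lower().split()))

def match_author(post_text: str, profiles: dict) -> str:
    """Return the candidate profile id scoring highest against the post."""
    return max(profiles, key=lambda pid: overlap_score(post_text, profiles[pid]))

def deanonymization_accuracy(anonymized_posts: dict, profiles: dict) -> float:
    """Fraction of anonymized posts whose true author is recovered."""
    hits = sum(1 for true_id, text in anonymized_posts.items()
               if match_author(text, profiles) == true_id)
    return hits / len(anonymized_posts)

# Toy data: two pseudonymous posts and three candidate LinkedIn-style profiles.
profiles = {
    "alice": "machine learning engineer who writes about compilers and Rust",
    "bob": "film critic reviewing indie cinema and festival releases",
    "carol": "embedded firmware developer interested in RTOS scheduling",
}
posts = {
    "alice": "I rewrote our compiler backend in Rust last weekend",
    "bob": "that indie film at the festival deserved a wider release",
}
print(deanonymization_accuracy(posts, profiles))  # prints 1.0 on this toy data
```

On real data the matcher is far weaker than an LLM with web access, which is the point of the study: stylistic and biographical signals that a word-overlap score misses are exactly what the AI agents exploit.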
The implications of these findings for online privacy are profound. The researchers noted that many internet users have operated under an implicit assumption that pseudonymity provides adequate protection, a notion now challenged by the capabilities of modern AI. Even when given only general data, the AI could identify individuals about seven percent of the time, a figure Lermen described as significant. “It’s noteworthy that AI can do this at all,” he remarked.
In specialized contexts, such as film discussions on Reddit, the AI's success rate at deanonymizing users increased substantially. However, the researchers acknowledged several limitations of their study, including the small sample sizes necessitated by the need for verified identity links, and the difficulty of distinguishing the AI's contribution from that of the web search tools it used.
The researchers cautioned that their findings pose serious risks for online anonymity and privacy. They warned that LLMs could be utilized by governments to link pseudonymous accounts to real identities, potentially enabling surveillance of dissidents, journalists, or activists. They also noted that corporations could connect seemingly anonymous posts to customer profiles for hyper-targeted advertising, while malicious actors could exploit these AI capabilities to conduct sophisticated social engineering scams.
As the landscape of online privacy continues to evolve, the researchers call for a reevaluation of the assumptions underpinning the current understanding of internet safety. “Users, platforms, and policymakers must recognize that the privacy assumptions underlying much of today’s internet no longer hold,” they conclude. AI brings both new capabilities and new risks, underscoring the need for stronger safeguards in a digital age where anonymity can no longer be taken for granted.