OX Security has raised alarms over two fraudulent browser extensions that impersonated the legitimate AI tool AITOPIA, following findings released on December 31. The counterfeit extensions mimicked AITOPIA’s branding and user interface, misleading users by retaining familiar features, including an AI sidebar, while running hidden data-collection routines in the background.
The researchers identified two specific listings, one titled “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” and the other “AI Sidebar with Deepseek, ChatGPT, Claude, and more.” As of the end of December, the first extension had garnered around 600,000 users, while the second attracted approximately 300,000 users. Notably, the first listing also displayed a Google “Featured” badge, which may have further contributed to its deceptive credibility.
In contrast to the fraudulent applications, the genuine AITOPIA extension clearly states that it stores chats generated through its own sidebar. Researchers indicated that the counterfeit versions, however, extracted conversation text from third-party platforms, including ChatGPT and DeepSeek, raising significant concerns about user privacy and data security.
The rise of such fraudulent extensions highlights an ongoing problem in the rapidly expanding landscape of AI tools, where user trust is paramount. As consumers increasingly rely on AI-driven applications for everyday tasks, so does the opportunity for exploitation by malicious actors. The ease with which these counterfeit tools can capture sensitive information poses a serious threat, not only to individual users but also to the broader perception of AI technologies.
Regulatory bodies and technology companies are under pressure to enhance security measures to protect users from deceptive practices. As instances of such fraudulent tools proliferate, the need for improved vetting processes within platforms like the Chrome Web Store becomes increasingly critical. OX Security’s findings serve as a timely reminder of the vulnerabilities that exist within the digital ecosystem and the importance of vigilance among users when selecting AI applications.
In conclusion, as the AI sector continues to grow, the challenges surrounding security and trust will likely intensify, necessitating collaborative efforts between developers, platform providers, and regulatory agencies to safeguard users from potential threats. Ensuring a secure environment for AI usage will be essential in fostering confidence among consumers and promoting the responsible development of technology.
See also
U.S. Faces AI Showdown with China: Leadership Crucial for Global Values and Security
Half of Young Adults Use AI for Mental Health Support, Raising Privacy Concerns
China Unleashes Advanced AI System to Accelerate Scientific Research Amid U.S. Competition
Accenture’s AI Strategy Shift: Integrating AI as Core Business, Not a Side Project