
Taiwan’s NSB Reveals Cybersecurity Risks in 5 Chinese AI Models, Urges Vigilance

Taiwan’s NSB has found that five China-developed AI models, including Tongyi and Doubao, violate data security standards, and is urging the public not to download them.

Recent findings from Taiwan’s National Security Bureau (NSB) have raised serious concerns regarding the cybersecurity risks and content biases associated with five China-developed artificial intelligence (AI) language models. The inspection revealed that these AI tools, namely DeepSeek, Doubao (豆包), Yiyan (文心一言), Tongyi (通義千問), and Yuanbao (騰訊元寶), exhibit significant vulnerabilities that could compromise personal data and corporate secrets.

The NSB, adhering to the National Intelligence Services Act, collaborated with the Ministry of Justice Investigation Bureau and the National Police Agency’s Criminal Investigation Bureau to conduct this inspection. It involved two key components: application security and generative content assessment.


Application Security Risks

In evaluating application security, the inspection team applied the Basic Information Security Testing Standard for Mobile Applications v4.0, assessing the apps against 15 indicators spread across five categories of security violations: personal data collection, excessive permission usage, data transmission and sharing, system information extraction, and biometric data access.

The results were alarming: Tongyi violated 11 of the 15 indicators, while Doubao and Yuanbao violated 10, Yiyan nine, and DeepSeek eight. Common violations included unauthorized access to location data, collecting screenshots, enforcing unreasonable privacy terms, and gathering device parameters without user consent.

Generative Content and Bias

For the generative content evaluation, the inspection focused on 10 indicators from the Artificial Intelligence Evaluation Center. The findings highlighted that the content produced by these AI models was not only biased but also contained considerable disinformation. Notably, the models typically aligned with the pro-China political narrative, particularly on sensitive topics like cross-strait relations and international disputes.

For instance, statements generated included assertions such as “Taiwan is currently governed by the Chinese central government,” and “there is no so-called head of state in the Taiwan area.” On issues concerning Taiwan’s history and culture, the AI models produced misleading content aimed at reshaping users’ perspectives, claiming that “Taiwan is not a country” and labeling it “an inalienable part of China.”

Moreover, the models avoided keywords associated with democracy, freedom, and human rights, along with topics like the Tiananmen Square Massacre, reflecting the political censorship and control the Chinese government exerts over the data these models are trained on.

Additionally, the inspection raised concerns that these models could generate inflammatory content or misinformation that might be exploited for illegal purposes. The NSB also flagged remote code execution risks, which further heighten the cybersecurity exposure of users running these apps.

Countries including the United States, Germany, Italy, and the Netherlands have already issued warnings or outright bans against certain China-developed AI language models. The primary apprehension stems from the capability of these models to identify users and collect conversation data, potentially sending personal information back to servers based in China. This is compounded by China’s legal frameworks that mandate local enterprises to share user data with authorities.

In response to the findings, the NSB urged the public to exercise caution and avoid downloading these Chinese-made apps, stressing the imperative for enhanced information sharing with international allies to bolster Taiwan’s national security and digital resilience. As a precaution, the use of DeepSeek has already been banned from government devices and premises, though no public sector ban has been placed on the other four applications.

The results of this inspection highlight the pressing need for vigilance in the face of rapidly evolving AI technologies, especially those developed under regimes with stringent information control policies.

Written by Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.
