Recent findings from Taiwan’s National Security Bureau (NSB) have raised serious concerns regarding the cybersecurity risks and content biases associated with five China-developed artificial intelligence (AI) language models. The inspection revealed that these AI tools, namely DeepSeek, Doubao (豆包), Yiyan (文心一言), Tongyi (通義千問), and Yuanbao (騰訊元寶), exhibit significant vulnerabilities that could compromise personal data and corporate secrets.
The NSB, adhering to the National Intelligence Services Act, collaborated with the Ministry of Justice Investigation Bureau and the National Police Agency’s Criminal Investigation Bureau to conduct this inspection. It involved two key components: application security and generative content assessment.
Application Security Risks
In evaluating application security, the inspection team utilized the Basic Information Security Testing Standard for Mobile Applications v4.0, assessing the apps on 15 indicators across five categories of security violations. The five categories include personal data collection, excessive permission usage, data transmission and sharing, system information extraction, and biometric data access.
The results were alarming: Tongyi violated 11 of the 15 indicators, while Doubao and Yuanbao violated 10, Yiyan nine, and DeepSeek eight. Common violations included unauthorized access to location data, collecting screenshots, enforcing unreasonable privacy terms, and gathering device parameters without user consent.
Generative Content and Bias
For the generative content evaluation, the inspection focused on 10 indicators from the Artificial Intelligence Evaluation Center. The findings highlighted that the content produced by these AI models was not only biased but also contained considerable disinformation. Notably, the models typically aligned with the pro-China political narrative, particularly on sensitive topics like cross-strait relations and international disputes.
For instance, statements generated included assertions such as “Taiwan is currently governed by the Chinese central government,” and “there is no so-called head of state in the Taiwan area.” On issues concerning Taiwan’s history and culture, the AI models produced misleading content aimed at reshaping users’ perspectives, claiming that “Taiwan is not a country” and labeling it “an inalienable part of China.”
Moreover, the models avoided keywords associated with democracy, freedom, and human rights, along with topics like the Tiananmen Square Massacre. This indicates a strong influence of political censorship and control exerted by the Chinese government over the data these models utilize.
Additionally, the inspection raised concerns that these models could generate inflammatory content or misinformation that might be exploited for illegal purposes. It also identified risks of remote code execution, which further heighten cybersecurity vulnerabilities.
Countries including the United States, Germany, Italy, and the Netherlands have already issued warnings about, or outright bans on, certain China-developed AI language models. The primary apprehension stems from these models' capability to identify users and collect conversation data, potentially sending personal information back to servers based in China. This concern is compounded by Chinese legal frameworks that mandate local enterprises to share user data with authorities.
In response to the findings, the NSB urged the public to exercise caution and avoid downloading these Chinese-made apps, stressing the imperative for enhanced information sharing with international allies to bolster Taiwan’s national security and digital resilience. As a precaution, the use of DeepSeek has already been banned from government devices and premises, though no public sector ban has been placed on the other four applications.
The results of this inspection highlight the pressing need for vigilance in the face of rapidly evolving AI technologies, especially those developed under regimes with stringent information control policies.