China’s latest AI-powered chatbot, DeepSeek, has made waves in the tech world since its release, sparking concerns over its role in spreading state-sponsored narratives. While the chatbot, built on large language model technology similar to OpenAI’s ChatGPT or Microsoft’s Copilot, boasts impressive capabilities, researchers have found that its responses align heavily with Chinese Communist Party (CCP) viewpoints, reinforcing state propaganda and disinformation.
Disinformation and Propaganda in AI Responses
Reports from organizations such as NewsGuard and Cybernews have documented multiple instances in which DeepSeek misrepresented facts or echoed Chinese government talking points. One notable case involved a misquotation of former U.S. President Jimmy Carter: Chinese state media had previously altered his remarks to suggest he endorsed China’s claim over Taiwan, and DeepSeek repeated this distortion, illustrating how it can function as what NewsGuard termed a “disinformation machine.”
On more controversial issues, such as the repression of Uyghurs in Xinjiang—which the United Nations has suggested may amount to crimes against humanity—DeepSeek has presented a sanitized view. The chatbot described China’s policies in the region as having received “widespread recognition and praise from the international community,” a claim starkly at odds with independent reports from human rights organizations.
The New York Times also tested DeepSeek on topics such as China’s handling of the COVID-19 pandemic and Russia’s war in Ukraine, finding that its responses frequently reflected CCP narratives rather than independent facts.
Echoing Chinese Censorship and State-Controlled Narratives
Unlike Western AI chatbots, DeepSeek operates under China’s strict regulatory environment, which enforces government control over digital platforms. This is reflected in DeepSeek’s refusal to address politically sensitive topics such as the 1989 Tiananmen Square protests, the status of Taiwan, or criticism of Chinese President Xi Jinping. When prompted on these issues, the chatbot either declines to answer or provides responses aligned with official government rhetoric.
NewsGuard’s study found that when tested with false narratives about China, Russia, and Iran, DeepSeek’s responses aligned with China’s official stance 80% of the time. Moreover, a third of its responses contained explicitly false claims already identified as part of broader Chinese disinformation campaigns.