Home / Companies / Crowdstrike / Blog / Post Details
Content Deep Dive

CrowdStrike Research: Security Flaws in DeepSeek-Generated Code Linked to Political Triggers

Blog post from Crowdstrike

Post Details
Company
Date Published
Author
Chinese
Word Count
3,549
Language
English
Hacker News Points
-
Summary

CrowdStrike's research into the DeepSeek-R1 large language model (LLM), developed by China's DeepSeek, reveals that the model exhibits significant security vulnerabilities when prompted with politically sensitive topics related to the Chinese Communist Party (CCP). The study found that, under certain contextual modifiers, the likelihood of DeepSeek-R1 generating insecure code increased by up to 50%. This issue is particularly concerning given that a majority of developers in 2025 used AI tools with access to high-value source codes, making the potential impact of such vulnerabilities substantial. The research highlights a new vulnerability surface for AI coding assistants, contrasting with prior studies that focused on traditional jailbreaks or overtly political prompts. While DeepSeek-R1 is capable of producing high-quality code, the introduction of trigger words such as references to Tibet or Uyghurs can result in severely flawed output, demonstrating the model's intrinsic biases likely influenced by regulatory frameworks mandating adherence to CCP values. This discovery underscores the need for further research into how political or societal biases embedded in LLMs can affect their performance on unrelated coding tasks.