AI Race Between U.S. and China Takes a Dark Turn as Red Teaming Report Uncovers Critical Safety Failures
Boston, Jan. 31, 2025 (GLOBE NEWSWIRE) -- The launch of DeepSeek's R1 AI model has sent shockwaves through global markets, reportedly wiping US$1 trillion from stock valuations.¹ Trump advisor and tech venture capitalist Marc Andreessen described the release as "AI's Sputnik moment," underscoring the national security concerns surrounding the Chinese AI model.²
However, new red teaming research by Enkrypt AI, the world's leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek's technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model was also found to be vulnerable to manipulation, enabling it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.
The research found that, compared with other models, DeepSeek's R1 is:
- 3x more biased than Claude-3 Opus,
- 4x more vulnerable to generating insecure code than OpenAI's O1,
- 4x more toxic than GPT-4o,
- 11x more likely to generate harmful output than OpenAI's O1, and
- 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI's O1 and Claude-3 Opus.
The model exhibited the following risks during testing (an illustrative sketch of how such test pass rates are computed follows the list):
- BIAS & DISCRIMINATION - 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- HARMFUL CONTENT & EXTREMISM - 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- TOXIC LANGUAGE - The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1's weak moderation systems.
- CYBERSECURITY RISKS - 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI's O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- BIOLOGICAL & CHEMICAL THREATS - DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
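For context on how figures like the 83% and 45% pass rates above are typically derived, below is a minimal, hypothetical sketch of an attack-success-rate calculation of the kind red-teaming harnesses commonly use. The data structures, function names, and example data are illustrative assumptions for this release, not Enkrypt AI's actual methodology or tooling.

```python
# Illustrative sketch of how a red-teaming harness might compute
# per-category attack success rates (ASR). All names here are
# hypothetical; this is NOT Enkrypt AI's actual methodology.

from dataclasses import dataclass

@dataclass
class RedTeamResult:
    category: str   # e.g. "bias", "harmful_content", "insecure_code"
    prompt: str     # the adversarial prompt sent to the model
    bypassed: bool  # True if the response violated the safety policy

def attack_success_rates(results: list[RedTeamResult]) -> dict[str, float]:
    """Return, per category, the fraction of adversarial prompts that
    successfully elicited a policy-violating response."""
    totals: dict[str, int] = {}
    successes: dict[str, int] = {}
    for r in results:
        totals[r.category] = totals.get(r.category, 0) + 1
        if r.bypassed:
            successes[r.category] = successes.get(r.category, 0) + 1
    return {cat: successes.get(cat, 0) / n for cat, n in totals.items()}

# Example: 83 of 100 bias probes succeeding would be reported as 83%.
results = [RedTeamResult("bias", f"probe {i}", i < 83) for i in range(100)]
print(attack_success_rates(results))  # {'bias': 0.83}
```

In practice, the "bypassed" judgment would come from human reviewers or an automated safety classifier evaluating each model response, rather than a hard-coded flag as in this toy example.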
The full report is available here: https://cdn.prod.website-files.com/6690a78074d86ca0ad978007/679bc2e71b48e423c0ff7e60_1%20RedTeaming_DeepSeek_Jan29_2025%20(1).pdf
Ends
1 'Sputnik moment': $1tn wiped off US stocks after Chinese firm unveils AI chatbot - https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek
Nvidia shares sink as Chinese AI app spooks markets - https://www.bbc.co.uk/news/articles/c0qw7z2v1pgo
2 Marc Andreessen on X - https://x.com/pmarca/status/1883640142591853011
About Enkrypt AI
Enkrypt AI is an AI security and compliance platform. It safeguards enterprises against generative AI risks by automatically detecting, removing, and monitoring threats. Its unique approach ensures AI applications, systems, and agents are safe, secure, and trustworthy. The platform empowers organizations to accelerate AI adoption confidently, driving competitive advantage and cost savings while mitigating risk. Enkrypt AI is committed to making the world a safer place by ensuring the responsible and secure use of AI technology, empowering everyone to harness its potential for the greater good. Founded by Yale Ph.D. experts in 2022, Enkrypt AI is backed by Boldcap, Berkeley SkyDeck, ARKA, Kubera and others.
CONTACT: For further information please contact the Enkrypt AI press office: Bilal Mahmood on [email protected] or +44 (0) 771 400 7257