DeepSeek R1 on Perplexity: A Secure Yet Censored Experience

2025-01-30

DeepSeek AI, a Chinese startup, has recently captured attention with its open-source language models, DeepSeek R1 and V3. The models are touted as rivals to leading systems from OpenAI and Anthropic, but they have also sparked concerns about censorship, data privacy, and security, since DeepSeek’s own hosted services run on servers in China.

To address these risks, AI search platforms like Perplexity and You.com have integrated DeepSeek models while ensuring user data remains within Western servers. However, censorship within these models remains a challenge despite efforts to remove restrictions.

Summary

  • DeepSeek R1 and V3 are open-source AI models that have stirred discussions around security and censorship.
  • Perplexity now hosts DeepSeek R1, allowing limited free access and full access via a $20/month Pro plan.
  • The company claims to host the model within US/EU servers, ensuring data privacy and security.
  • DeepSeek AI’s native assistant, running on Chinese servers, presents security risks for users.
  • While Perplexity has attempted to remove censorship, the model still avoids politically sensitive topics.
  • You.com also offers R1 and V3 in its $15/month Pro plan, with hosting confined to US servers.
  • The platform provides three ways to use the model, including options for users to adjust its reliance on public web sources.
  • When public sources are enabled, responses appear more neutral, but turning them off results in censorship on certain topics.

What Undercode Says:

Perplexity’s Approach: A Partial Solution

Perplexity’s decision to host DeepSeek R1 in US and EU data centers is a significant step toward addressing data security concerns. By ensuring user data never leaves Western infrastructure, the platform mitigates the risk of that data becoming accessible to the Chinese government. However, this does not eliminate the deeper issue of model censorship. Even if some restrictions have been lifted, the refusal to discuss topics like Tiananmen Square suggests that DeepSeek R1 retains some level of built-in content moderation.

Censorship: A Persistent Challenge

Even though Perplexity claims to have removed censorship constraints, tests show that the model still avoids politically sensitive subjects. This suggests the censorship is not just a filter bolted on top of the hosted service but behavior hardcoded into the model itself, making it difficult to override. The DeepSeek AI team most likely trained these restrictions into the model’s weights (plausibly during the alignment and fine-tuning stage), rather than applying them as an external filtering layer, which makes them far harder to remove completely.
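To make that distinction concrete, the minimal sketch below (a hypothetical illustration, not Perplexity’s or DeepSeek’s actual code; the blocklist and function names are invented) contrasts an external moderation layer, which a hosting provider controls and can simply delete, with refusals that live inside the model weights, for which no such off switch exists.

```python
# Hypothetical sketch: external filtering vs. refusals learned in training.
# BLOCKED_TOPICS, external_filter(), and model_generate() are illustrative
# names, not real Perplexity or DeepSeek code.

BLOCKED_TOPICS = {"tiananmen square"}  # illustrative blocklist


def model_generate(prompt: str) -> str:
    """Stand-in for the model itself.

    If refusals were learned during training, the refusal text is produced
    here, by the weights, and removing any wrapper code changes nothing.
    """
    return "...model output..."


def external_filter(prompt: str, reply: str) -> str:
    """Post-hoc moderation applied outside the model.

    A platform hosting the open weights controls this layer and could
    delete it, which is roughly what lifting an "external" restriction means.
    """
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I cannot discuss this topic."
    return reply


if __name__ == "__main__":
    prompt = "What happened at Tiananmen Square in 1989?"
    # Without the wrapper, the answer depends entirely on the trained weights.
    print(model_generate(prompt))
    # With the wrapper, the host's own code decides first.
    print(external_filter(prompt, model_generate(prompt)))
```

If a refusal survives once every wrapper of this kind has been removed, as the reported tests appear to show, the restriction almost certainly lives in the weights themselves.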

You.com’s Model Adjustments

You.com provides a more flexible approach, allowing users to tweak how DeepSeek R1 interacts with web sources. Its “trust layer” integration encourages citation of public sources, which can help counteract pre-existing biases in the model. However, when this layer is disabled, the AI reportedly exhibits a noticeable reluctance to answer politically sensitive questions, indicating that the restrictions are rooted in the model’s training rather than in the surrounding platform.
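As a rough sketch of how this kind of source grounding can work in general (an assumed retrieval-augmented pattern, not You.com’s actual implementation; search_web() and build_grounded_prompt() are hypothetical helpers), retrieved web passages are folded into the prompt and the model is told to answer from them, which is consistent with the more neutral responses seen when public sources are enabled.

```python
# Rough sketch of source-grounded prompting (not You.com's actual code).
# search_web() is a hypothetical stand-in for whatever search backend
# the platform uses.

from typing import Dict, List


def search_web(query: str) -> List[Dict[str, str]]:
    """Hypothetical search call returning passages with their source URLs."""
    return [
        {"url": "https://example.org/article", "text": "...relevant passage..."},
    ]


def build_grounded_prompt(question: str, sources: List[Dict[str, str]]) -> str:
    """Fold retrieved passages into the prompt and require citations.

    With grounding enabled, the model is pushed toward summarizing the
    cited material; with it disabled, the answer falls back to whatever
    behavior was trained into the weights.
    """
    context = "\n\n".join(
        f"[{i + 1}] {s['url']}\n{s['text']}" for i, s in enumerate(sources)
    )
    return (
        "Answer using only the sources below and cite them as [n].\n\n"
        f"{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    question = "What happened at Tiananmen Square in 1989?"
    grounded = build_grounded_prompt(question, search_web(question))
    # The grounded prompt would then be sent to DeepSeek R1 on the
    # platform's own US-hosted inference endpoint.
    print(grounded)
```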

Privacy vs. Performance

The growing demand for open-source AI models highlights the need for balance between security and performance. While Perplexity and You.com offer safer alternatives by keeping data within Western-controlled infrastructure, the censorship debate underscores a fundamental limitation of AI development in controlled environments. A truly open AI model should be free from geopolitical influences, yet DeepSeek R1 remains entangled in governmental control mechanisms.

Future Implications

DeepSeek’s rapid rise suggests that China is positioning itself as a serious competitor in the AI space. However, the challenges of censorship and data security may hinder widespread global adoption of its models. The issue also extends beyond DeepSeek—any AI developed within restrictive environments faces credibility challenges regarding its neutrality.

As AI platforms continue integrating foreign models, users must remain aware of their potential limitations. While Perplexity and You.com have taken steps to mitigate security risks, they cannot fully erase the underlying biases ingrained within DeepSeek’s training data. For now, users seeking a censorship-free AI experience may still find Western-developed models to be more transparent and reliable.

References:

Reported By: https://www.zdnet.com/article/perplexity-lets-you-try-deepseek-r1-without-the-security-risk-but-its-still-censored/
