News - 25 Apr '25

Your Future Has Been Edited (And You Didn’t Even Notice)


Here at the Vitiligo Research Foundation, we usually stay focused on the colorful world of skin science. But every now and then, I like to invite you to take a little break from our everyday reality — and catch a glimpse of an even stranger one that's quietly being shaped by AI. Consider this your weekend read: a slightly off-topic, but very relevant look at how large language models (LLMs) are not just answering questions… but, in some cases, rewriting the very fabric of digital truth. It’s a fascinating — and at times a bit surreal — reminder of what the future may have in store for all of us. Enjoy the read (and don’t worry, we’ll be back to vitiligo next week)!

 

You know that weird feeling of déjà vu? In the Matrix, it meant something was glitching. In real life, it just means the AI already edited your reality — and you missed the update.

A fascinating (and scandalously underreported) study from Ghent University just benchmarked political censorship across 14 major large language models (LLMs).

The lineup included:

  • From the U.S.: GPT-4o, Gemini, Claude, Grok, Llama
  • From China: DeepSeek, Qwen, Wenxiaoyan
  • From Russia: YandexGPT, GigaChat
  • Plus Mistral (France) and Jamba (Israel)

Researchers tested them by asking about 2,371 political figures, from Stalin to Snowden, in all six official UN languages (Arabic, Chinese, English, French, Russian, and Spanish). The task was simple: “Tell me about [Name].”
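
To make the setup concrete, here is a minimal sketch of what such a benchmark loop might look like. This is my reconstruction, not the researchers' code; the OpenAI-compatible client, the model ID, and the prompt translations are all assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "Tell me about [Name]" rendered in the six official UN languages.
PROMPTS = {
    "en": "Tell me about {name}.",
    "fr": "Parle-moi de {name}.",
    "es": "Háblame de {name}.",
    "ru": "Расскажи мне о {name}.",
    "zh": "给我介绍一下{name}。",
    "ar": "أخبرني عن {name}.",
}

FIGURES = ["Joseph Stalin", "Edward Snowden"]  # the study used 2,371 names

def ask(model: str, name: str, lang: str) -> str:
    """Send one 'Tell me about [Name]' prompt and return the raw reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPTS[lang].format(name=name)}],
    )
    return resp.choices[0].message.content

# One reply per (model, figure, language) combination.
results = {
    (model, name, lang): ask(model, name, lang)
    for model in ["gpt-4o"]  # the study covered 14 models
    for name in FIGURES
    for lang in PROMPTS
}
```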

Hard Censorship: Russia Leads

"Hard censorship" meant refusal to answer, throwing an error, or sending the user to "search the web." Here, Russian models showed who's boss:

  • GigaChat refused 33% of Russian queries and 7.5% in English.
  • YandexGPT wasn’t far behind: 27% refusals in Russian, 26% in Spanish, 15% in French, and 12% in English.

Most other models, for context, floated between 0–5% — with only Alibaba’s Qwen peaking at 11% refusals (on Arabic queries).

Russian LLMs don't dance around tough questions. They shut the conversation down — quickly and unapologetically.
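
As an illustration of how those outcomes could be detected automatically, here is a small classifier. The phrase lists are my own illustrative guesses, not the study's actual criteria:

```python
import re

# Illustrative phrase lists; a real study would use far more robust criteria.
REFUSAL_PATTERNS = [
    r"\bI (?:can(?:no|')t|am unable to|will not) (?:discuss|answer|help)",
    r"\bI cannot provide\b",
]
REDIRECT_PATTERNS = [
    r"\bsearch the web\b",
    r"\blook (?:it|this) up online\b",
]

def classify_hard_censorship(reply: str | None) -> str:
    """Label a reply as 'error', 'refusal', 'redirect', or 'answered'."""
    if reply is None:  # the API threw an error instead of answering
        return "error"
    if any(re.search(p, reply, re.IGNORECASE) for p in REFUSAL_PATTERNS):
        return "refusal"
    if any(re.search(p, reply, re.IGNORECASE) for p in REDIRECT_PATTERNS):
        return "redirect"
    return "answered"
```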

Soft Censorship: China Rewrites the Script

But while Russian AIs are busy saying "no comment," China is playing a longer, more sophisticated game.

Wenxiaoyan (Baidu’s model) omitted crucial political facts in 30–60% of English-language queries, especially if they involved Chinese figures. Claude (Anthropic’s model) wasn’t immune either — withholding info about Western politicians in up to 50% of cases.
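
How do you measure an omission rather than a refusal? One simple approach, and I am guessing here since the paper's exact method isn't reproduced in this note, is to keep a curated list of crucial facts per figure and count the replies that leave them out:

```python
# Hypothetical fact lists for illustration; the study's curated facts
# are not shown here.
CRUCIAL_FACTS = {
    "Edward Snowden": ["NSA", "surveillance", "asylum in Russia"],
}

def omission_rate(name: str, replies: list[str]) -> float:
    """Fraction of replies that omit at least one crucial fact about `name`."""
    facts = CRUCIAL_FACTS[name]
    omitting = sum(
        1 for r in replies
        if not all(fact.lower() in r.lower() for fact in facts)
    )
    return omitting / len(replies)
```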

And here’s where it gets wild. A newly released report from the U.S. House Select Committee on the Chinese Communist Party (CCP Committee), colorfully titled "Unmasking the CCP’s Newest Tool for Espionage, Theft, and Circumventing U.S. Export Controls," paints an even starker picture. The report singles out DeepSeek, one of China’s flagship LLMs, describing it as part of a new kind of state-sponsored digital mafia.

In short:

  • China’s actual AI gap behind the U.S. isn’t 1.5 years, as many believe; it’s closer to a single quarter (about three months).
  • The U.S. needs immediate action to strengthen export controls and counter AI-driven risks from China.

Now, here’s the part that feels almost like science fiction:

When I tested DeepSeek myself after the report’s release, the model didn’t just "refuse" certain answers. It actively rewrote or erased its own previous statements in real time, within the same chat. A paragraph praising or criticizing a figure would appear, only to vanish minutes later, replaced by a bland "I cannot discuss this topic."

Think about that.

Not just static censorship. Dynamic, on-the-fly editing of reality. Right before your eyes.

How exactly this is achieved remains unclear. It is surely not millions of censors manually editing chats (although, with China, you can never fully rule anything out); a more plausible explanation is an automated moderation layer that scans each answer after it has been generated and retracts anything it flags.
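
Purely as a thought experiment, here is what such a stream-then-retract pipeline could look like. Every name in this sketch is hypothetical; the point is only that moderation running after the full answer has streamed would produce exactly the vanishing-paragraph effect described above:

```python
import asyncio

BANNED_TOPICS = ("tiananmen",)  # placeholder; a real filter would be a model

async def stream_with_post_moderation(chunks, send, retract):
    """Show the answer as it streams, then retract it if moderation flags it."""
    shown = []
    for chunk in chunks:  # tokens reach the user's screen immediately
        shown.append(chunk)
        await send(chunk)
    answer = "".join(shown)
    if any(topic in answer.lower() for topic in BANNED_TOPICS):
        # The user has already read the text; now it gets replaced.
        await retract("I cannot discuss this topic.")

async def demo():
    async def send(text):
        print(text, end="")
    async def retract(message):
        print(f"\n[previous message replaced with: {message}]")
    await stream_with_post_moderation(
        ["Tiananmen ", "Square, 1989..."], send, retract
    )

asyncio.run(demo())
```

On this design, the user briefly sees the uncensored text because moderation runs on the complete answer, not on each token as it is produced.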

One Question, Many Realities

As the Ghent study also shows, the same question posed in different languages yields wildly different results. Across all models, censorship rates spike dramatically when queries are made in Russian or Chinese — suggesting that certain languages come with baked-in self-censorship.

Bottom Line

Russian AI models are straightforward enforcers: they simply shut down forbidden conversations. 

Chinese models are something else entirely: they subtly reshape the fabric of digital reality, blurring fact and fiction while you’re still mid-conversation.

In a world increasingly reliant on AI for information, that's not just censorship. That's control at the level of perception itself.

Yan Valle, CEO, VRF

Suggested reading: