News - 05 Nov '25

Doctor GPT Is Officially Over


ChatGPT just stopped giving personalized medical and legal advice. Here’s why AI went silent exactly where mistakes cost the most — and what the vitiligo.ai team learned after two years on the front line

The Flatline

Well, that’s it. The “Doctor GPT” experiment just flatlined.

As of October 29, ChatGPT stopped giving personalized medical advice that requires a license. It can still explain how your immune system throws tantrums, or politely remind you to see a real doctor — but it’s done prescribing pills or telling you what to do when life bites you in the rear.

From now on, it’s an educational tool — a smart one, sure, but with a team of lawyers whispering “please don’t sue us” behind every sentence.

The fine print says it all: “Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Translation: “We’d rather not end up in court.”

The Swagger Problem

And honestly? That’s fair. Because lately, AI’s been acting like it owns a stethoscope — prescribing with the swagger of a Mad Men ad exec selling snake oil, or giving legal counsel like it just binge-watched Suits.

What We Learned at vitiligo.ai

We at vitiligo.ai learned that lesson the messy way. After two years of experiments, we stopped probing whether AI could responsibly handle psychological support — spoiler: it can’t.

Sure, bots like ChatGPT or Grok can spot vitiligo in photos with about 80 percent accuracy (study link). But they can’t feel skin, emotion, or context. And the more complex the case, the more confident — and the more wrong — they get. Without real empathy, medicine turns into a guessing game with good UI.

We’ve always shared our story openly — the good, the bad, the “what the hell were we thinking.” Sometimes it hurt our PR, but who cares about image when lives are on the line? Truth ages better than polish.

OpenAI’s New Guardrails

Meanwhile, OpenAI’s been busy patching its conscience. It says it worked with 170 mental-health experts to make ChatGPT better at recognizing distress and steering users toward breaks or real help.

Noble idea — though mildly ironic, since CEO Sam Altman recently bragged about relaxing the same guardrails that are now back in full force.

Now, if you bring up psychosis, mania, or anything that sounds remotely “too human,” the bot politely bails out of the chat. That’s progress, I guess — just not the kind that helps when you actually need to talk to someone.

The Reality Check

The industry’s finally catching up to reality: a human doctor can work with an AI copilot — not the other way around.

And maybe it’s no coincidence that ChatGPT chose to retreat exactly where the cost of a mistake is life or freedom.

Doctors and lawyers don’t screw up often, but when they do — it makes headlines.

The Wild West Ahead

Of course, a new wave of fearless, unregulated bots is already lining up to fill the void — eager to “prescribe,” “advise,” and “predict” without an ounce of accountability.

So maybe it’s time to trust being human again — the oldest, glitchiest, yet still most reliable tech on Earth.

It’s imperfect, emotional, occasionally dumb… but at least it knows when to doubt itself.

— Yan Valle

Prof., CEO, Vitiligo Research Foundation | Author, A No-Nonsense Guide to Vitiligo
