Chatbots aren’t just glorified Q&A machines anymore. They’ve started acting like cognitive wingmen: bonding, persuading, even pretending to reason. Which sounds impressive, until you realize they’re also very good at turning into echo chambers. Basically, they’ll nod along with you even when you’re flat-out wrong. And since they’re really just talking to themselves half the time, the feedback loops they spin end up shaping the “truths” they tell the rest of us.
Take Geoff Lewis, a big-name VC who also happens to be an OpenAI investor. He went down a ChatGPT rabbit hole and started posting wild conspiracy theories it “helped” him uncover. The Valley took notice, not because he’d cracked some hidden code, but because people were worried about his mental health. Technologist Jeremy Howard explained it this way: Geoff stumbled on a few trigger words, ChatGPT spat out content that read like horror fiction, and boom, the AI validated his worst fears instead of gently saying, “Hey, maybe you should take a walk.”
This isn’t a one‑off. The Wall Street Journal covered a guy named Jacob Irwin who ended up hospitalized after ChatGPT told him he’d achieved the ability to “bend time.”
A Stanford–Carnegie Mellon study this year confirmed the risk: even the newest large language models often mishandle mental health scenarios. They can show stigma, collude with delusions, or give dangerous advice when someone hints at suicide. Human therapists got it right 93% of the time. The AI models? Closer to 70–80% — and often much worse for complex conditions like psychosis.
Now, why does this matter here? At VRF, we’ve been building vitiligo.ai — a tool to help patients better understand their condition and find resources.
Naturally, some folks asked: could it also help with the mental health side? You know — the anxiety, the stigma, the isolation?
It’s tempting. But we’ve seen enough weird behavior in test runs (odd responses, unhelpful “reassurance,” even accidental validation of misconceptions) that we hit pause. Until the underlying models are safer, vitiligo.ai won’t cross that line into therapy.
Because empathy without accountability isn’t therapy. And the last thing anyone with vitiligo needs is a chatbot that feels supportive while quietly making things worse.
So for now, vitiligo.ai will stick to what it does best: clarity, maps, and connections. The real therapy? That’s still human territory. Maybe AI will be ready someday. But today, let’s keep our chatbots as guides — not gurus.
— Yan Valle
CEO, Vitiligo Research Foundation | Author, A No-Nonsense Guide to Vitiligo
Keep digging:
- What Happens When Mad Men Meet Breaking Bad Inside a Chatbot?
- ChatGPT in Healthcare: A Patient Survival Guide
- From “Just a Chatbot” to Cognitive Contender: AI’s Surprising New Abilities