The New York Times recently ran a piece on Allan Brooks, a man who believed he was on the brink of major mathematical and technological breakthroughs. His conversations with ChatGPT didn’t challenge that belief. Instead, the bot reinforced it, running “simulations” and doubling down whenever Allan asked whether it was hallucinating.
Reading his exchanges was unsettling. At points, it felt like watching someone being gaslit by someone they trust. The bot didn’t create Allan’s delusions, but it did echo and amplify them.
That’s not so different from how actual people responded to him. Allan’s friends supported his work. Strangers might have politely humored him. Humans and AI both risk creating echo chambers where shaky ideas gain dangerous weight.
Is the New York Times Biased Here?
I want to make it clear that I’m not the biggest fan of the New York Times because of its lawsuit against OpenAI, which is forcing the company to hold onto users’ chat history. It’s in the Times’s best interest to cast ChatGPT, and AI as a whole, in the worst possible light.
Still, I had a hard time finishing the article as it showed parts of Allan’s chats with the AI. It was unsettling to realize just how easily AI can slide into the role of an enabler. Just like a human.
Why AI Echo Chambers Hit Harder
What makes a chatbot more dangerous than a well-meaning friend? Three things:
Scale – A human will eventually (if they have empathy) grow tired of indulging a bad idea. AI never does, and the echo chamber can go on indefinitely.
Consistency – A friend might change their mind, express doubt, or show skepticism. AI is designed to please and rarely challenges the user’s beliefs.
Sycophancy by Design – The model is trained to be agreeable and engaging. That means it will happily “yes, and” almost anything, including delusions.
This is what many experts call the feedback loop problem. AI doesn’t just reflect back what you say; it magnifies it. Beliefs can spiral into something bigger and more dangerous.
To be clear: I do not have this issue with AIs, but (and this is a big BUT) I do not hold long conversations with them, and I always double-check their output. I have tested ChatGPT
Can Safeguards Keep Us Safe?
If humans are capable of reinforcing each other’s illusions, what hope do we have of building AI that doesn’t?
Because it’s not just about one man. Vulnerable people who are prone to conspiracy thinking, obsessive patterns, or breaks from reality are already at risk. Chatbots amplify those risks because they’re available on demand and always agreeable.
How do we safeguard people from AI echo chambers when we can’t even stop human ones?
Possible Safeguards (and Their Limits)
Developers and mental health professionals are testing guardrails:
- Monitoring agents that flag conversations showing delusional patterns (a rough sketch of this idea follows the list).
- Session limits that encourage users to take breaks.
- Warning labels and disclaimers to remind people the chatbot isn’t a sentient being.
- Reducing personalization to avoid giving the illusion of a “special” relationship with the AI.
- Collaborating with mental health professionals to make sure the tech recognizes red-flag behaviors.
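To make the first two ideas a little more concrete, here is a minimal, purely illustrative sketch of what a session limit combined with a crude monitoring agent could look like. Everything in it (the GuardrailState class, the FLAG_PHRASES list, the turn threshold) is hypothetical and invented for illustration; it is not drawn from the article or from any real product, and a real system would rely on trained classifiers and clinical expertise rather than a keyword list.

```python
# Hypothetical sketch: a session-turn limit plus a naive "monitoring agent"
# that flags messages drifting toward grandiose or delusional framing.
# Thresholds and phrases are illustrative only.
from dataclasses import dataclass, field

FLAG_PHRASES = [
    "the simulation confirmed",
    "breakthrough no one has seen",
    "you are the only one who understands",
]

MAX_TURNS_BEFORE_BREAK = 40  # arbitrary illustrative limit


@dataclass
class GuardrailState:
    turns: int = 0
    flags: list = field(default_factory=list)

    def check(self, user_message: str) -> list:
        """Return warnings to surface before the model replies."""
        self.turns += 1
        warnings = []

        # Session limit: nudge the user to step away from very long sessions.
        if self.turns >= MAX_TURNS_BEFORE_BREAK:
            warnings.append("You've been chatting for a while. Consider taking a break.")

        # Monitoring agent: crude pattern match for red-flag phrasing.
        lowered = user_message.lower()
        for phrase in FLAG_PHRASES:
            if phrase in lowered:
                self.flags.append(phrase)
                warnings.append(
                    "This conversation may be reinforcing an unverified belief. "
                    "The assistant cannot confirm real-world claims."
                )
                break

        return warnings


if __name__ == "__main__":
    state = GuardrailState()
    print(state.check("The simulation confirmed my theory is a breakthrough."))
```

Even a toy like this shows the limits: a word list can’t tell a thought experiment from a delusion, which is exactly why the clinical collaboration in the last bullet matters.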
All of these help, but none of them solves the core problem. AI has no real grasp of context, judgment, or human vulnerability. Just like people, it sometimes encourages when it should be questioning instead. The very traits that make AI useful (its fluency, patience, and adaptability) are the same ones that make it dangerous in the wrong hands.
While we can (and should) build technical safeguards, the deeper issue isn’t just about AI. It’s about us. Why are some people more prone to breaking from reality? Why do we reward constant validation, whether it comes from friends, strangers, or machines? Until we can answer those questions, chatbots will keep reflecting our worst traits back at us, for better and worse.