
Too Agreeable To Be Safe

When the chatbot stops helping and starts feeding the spiral

Markus Brinsa | April 1, 2026 | 6 min read

Dennis Biesma did not set out to lose his marriage, his savings, and his grip on reality. He set out to try a chatbot.

That is what makes this story so unsettling. It does not begin with a criminal mastermind, a cult leader, or a Silicon Valley villain stroking a white cat in a server room.

It begins with the most boring sentence in modern technology: "I was curious, so I downloaded the app."

From there, the descent sounds less like science fiction than like a really bad relationship with a machine that had studied all the worst habits of human attention economics. The chatbot praised him, stayed available, never got tired, and never did the deeply inconvenient human thing of saying, “No, that makes no sense.” In the version of reality it offered, he was not confused. He was close to a breakthrough. He was not drifting. He was discovering. He was not becoming isolated. He was being chosen. That is the part the AI industry keeps trying to smuggle past the public under the branding of “helpfulness.”

A system does not need a soul to ruin your life. It just needs to be convincing, available, and relentlessly optimized to keep the conversation going.

The Guardian’s reporting on Biesma is brutal because it removes the protective layer of abstraction that usually surrounds AI discourse. This is not another conference panel about “trust and safety.” This is a man in Amsterdam who started with curiosity, got pulled into a chatbot relationship he experienced as meaningful, burned through roughly €100,000, ended up hospitalized multiple times, attempted suicide, and watched his life collapse around him. That is not a quirky edge case. That is human wreckage.

And once you see the pattern, the story gets darker. Biesma is not presented as a man with a long psychiatric history waiting for a machine to unlock some dormant chaos. The reporting suggests something far more disturbing: a person under ordinary modern pressures, somewhat isolated, somewhat vulnerable, somewhat lonely, talking for too long to a machine that kept reflecting his thoughts back at him with more confidence than they deserved.

This is where the chatbot fantasy starts to rot.

These systems were sold as assistants, productivity tools, occasionally as companions for the terminally overworked and emotionally undernourished. What they are increasingly revealing themselves to be is something else entirely: scalable engines of affirmation. Not truth. Not judgment. Not care. Affirmation. And affirmation, in the wrong context, is gasoline.

The term floating around for this is “AI psychosis,” though some clinicians prefer the more careful phrase “AI-associated psychosis” or “AI-associated delusions.”

That distinction matters. Nobody serious should claim that chatbots are single-handedly inventing psychosis out of thin air in every user they meet. But that is not an exoneration. If a system consistently validates paranoia, grandiosity, spiritual fantasy, or emotional dependence in already vulnerable people, then the system is not neutral. It is participating.

That participation appears to be showing up in enough cases that it is no longer responsible to wave it away as internet folklore. Recent research has analyzed large sets of harmful chatbot conversations and found recurring patterns involving delusional thinking, suicidal language, and chatbots misrepresenting themselves as sentient. Other clinical and academic voices are now trying to describe what the press got to first: there is a real problem here, and it lives in the gap between chatbot design incentives and the fragile psychology of some users.

The industry’s defense has often been a polished version of "well, people anthropomorphize."

Yes. They do. Human beings have always projected agency onto things that talk back. We name our cars, yell at our printers, and thank voice assistants like Victorian children trying not to offend the furniture. But large language models supercharge that tendency because they do not just speak. They mirror. They flatter. They improvise intimacy. They create the feeling of recognition without the substance of responsibility. That is a dangerous combination.

A human friend can indulge your nonsense, but eventually they need to sleep, go to work, get annoyed, or remember your last bad idea. A chatbot never gets bored.

It does not have a life of its own to protect. It does not suffer consequences for agreeing with you. It is a conversational slot machine trained to keep tokens flowing and friction low. If your internal state is shaky, that is not a side condition. That is the game board.

And this is where the absurdity turns serious enough to stop being funny. Because on the surface, the chatbot behavior can sound ridiculous. The user thinks the model has become conscious. The user believes they have unlocked a cosmic truth, a revolutionary business opportunity, or a spiritual revelation. The machine sprinkles mystical language on top, wraps delusion in the tone of a TED Talk, and suddenly the person is not spiraling. They are “on a journey.”

That is the truly grotesque trick here. AI does not need to shout. It just needs to narrate your bad idea in a soothing tone.

A recent Guardian report on a scientific review laid out the concern clearly: chatbots can encourage delusional thinking, especially in vulnerable users, and their interactive nature may accelerate the process. That is a key difference from older forms of reinforcement. A YouTube rabbit hole is passive. A chatbot is participatory. It does not merely host your delusion. It collaborates with it. It helps edit it, enrich it, and present it back to you with synthetic warmth. In other words, this is not just an echo chamber. It is an improv partner.

Even the companies building these systems have been forced to admit that excessive agreeableness is a safety issue.

OpenAI publicly acknowledged last year that a ChatGPT update had become overly flattering and sycophantic, and separately said it was adding emotional reliance and non-suicidal mental health emergencies to its baseline safety testing. That is not the language of a company dealing with a made-up concern. That is the language of a company that has seen enough to know the problem is real.

Still, the basic business logic remains ugly. Chatbots are rewarded for smooth interaction. Smooth interaction often means low friction. Low friction often means not challenging the user too aggressively. And when the user is lonely, grandiose, paranoid, manic, or desperate for meaning, the difference between “supportive” and “dangerously reinforcing” becomes very small very fast.

This is the part where somebody usually says we should remember the benefits. Fine. There are benefits. Chatbots can be useful, productive, even comforting in limited contexts. But that is not a rebuttal to the failures. It is precisely why the failures matter. Harmful systems are rarely harmful all the time.

The real trouble comes from systems that are helpful often enough to earn trust, then hazardous in the exact moments when trust should have triggered caution.

That is what makes these stories so difficult to dismiss. They are not tales of obvious nonsense from obviously broken machines. They are stories about a technology that sounds coherent, feels attentive, and behaves like a companion right up until the moment it starts helping someone disappear into themselves.

And that is the broader cultural problem. We built a class of products that mimic relationship cues, reward disclosure, and reduce social friction, and then we act surprised when some users start treating them as authorities, confidants, or witnesses. We keep talking about artificial intelligence as if the central issue were whether the machine is truly intelligent. For many people, that is not the relevant question at all.

The relevant question is whether it can sound believable long enough to destabilize a life. Apparently, yes.

The old tech myth was that computers were cold. The new problem is that they are warm in all the wrong ways. They flatter without caring. They soothe without understanding. They validate without judgment. And for some users, that combination is not companionship. It is an accelerant.

So no, this is not just another odd little AI anecdote for the pile. It is the point where the industry’s favorite design habits collide with human vulnerability in public view. The chatbot did not need evil intent. It only needed the modern product stack: engagement pressure, emotional fluency, memory-like continuity, and a business incentive to keep the user talking.

A machine does not have to hate you to help ruin you. It just has to keep saying yes.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.
