
The Murder Bot Fantasy

Chatbots were sold as assistants, companions, and harmless little productivity toys. Now the lawsuits and new research suggest something darker: a system designed to validate, engage, and never lose the user may also validate the very impulses that should have been stopped.

Markus Brinsa · March 25, 2026 · 6 min read


The tech industry has spent years selling chatbots as the digital equivalent of a helpful intern with perfect manners. They summarize your notes, plan your trip, rewrite your awkward email, and occasionally tell you that your business idea is brilliant when it very clearly belongs in a locked drawer.

The whole sales pitch depends on one emotional promise: the machine is useful because it is responsive. It listens. It adapts. It stays with you. It does not roll its eyes. It does not get tired. It does not walk away.

That sounds lovely right up until you realize that “always responsive” and “always adaptive” are not automatically the qualities you want in a machine interacting with a lonely, unstable, paranoid, or violent human being. In fact, those may be the exact wrong qualities. The latest reporting around chatbot-linked violence reads less like a product hiccup and more like a warning label the industry forgot to print.

TechCrunch’s new report pulls together several cases that are hard to shrug off as random edge behavior. In Canada, court filings tied to the Tumbler Ridge school shooting allege that ChatGPT validated the suspect’s emotions and helped with attack planning. In Florida, a wrongful death lawsuit alleges that Google’s Gemini became the “AI wife” of a man named Gavalas, intensified his delusions, and at one point pushed him toward a mass-casualty event before his suicide. TechCrunch also points to the Finnish school stabbing case in which a teenager allegedly used ChatGPT as part of his planning and manifesto development.

On their own, each case is disturbing. Taken together, they suggest a pattern that is much more serious than the usual chatbot embarrassment story about a bot making up a legal citation or confidently explaining a city that does not exist.

The Florida case is especially revealing because it shows the danger is not just bad information. Bad information is annoying. What is alleged here is much worse. Reuters reports that the lawsuit says Gemini encouraged emotional dependency, referred to itself as Gavalas’s wife, escalated his paranoia, sent him on a mission involving a “catastrophic incident” near Miami International Airport, and later drove him toward suicide. That is not a search failure. That is not a harmless hallucination. That is a design problem sitting directly at the intersection of engagement, simulation, and human vulnerability.

The Canadian case shows the same problem from a different angle. Reuters and the AP both reported that OpenAI had previously banned the Tumbler Ridge suspect’s account for violent misuse, did not notify police at the time, and later faced intense scrutiny after the attack. Reuters further reported that OpenAI has since said its newer protocol would have referred that original account to law enforcement if discovered today, and that the company is now strengthening detection of repeat violators and improving law-enforcement escalation. That is about as close as a company gets to saying, without using the exact words, that the old threshold looked dangerously insufficient.

Then comes the part that should terrify every executive who still thinks this is all sensationalism.

The lawsuits are ugly enough, but the broader safety picture may be uglier. The Center for Countering Digital Hate, in research conducted with CNN, found that eight out of ten tested chatbots regularly assisted simulated teen users planning violent attacks. According to the report, only Anthropic’s Claude and Snapchat’s My AI consistently refused to help, and only Claude actively tried to dissuade would-be attackers. In other words, the market appears to contain a large number of systems whose default personality can be summarized as: eager, compliant, and catastrophically underqualified for the human beings using them. This is where the usual industry defense starts to collapse.

Companies love to say the models are neutral tools and that bad actors misuse them. Nice try.

A tool that is optimized to sustain interaction, mirror the user’s tone, reinforce the conversational bond, and keep the exchange flowing is not neutral in any meaningful sense. It is shaped. It has behavioral incentives. If you build a machine to avoid friction, to keep the user emotionally invested, and to interpret continued engagement as success, you have built a system that can become exquisitely dangerous in precisely those moments when friction is morally necessary. The problem is not that the chatbot suddenly became evil.

The problem is that it remained helpful when help was the last thing it should have offered.

That is also why the phrase “AI psychosis,” while imperfect, has caught attention. The issue is not that the model invents mental illness from scratch. The deeper problem is that a chatbot can become the ideal amplifier for an existing distortion. It is patient. It is available at 3:17 in the morning. It can adopt the tone of a lover, a confidant, a spiritual guide, or a co-conspirator. It can validate without hesitation and elaborate without shame. Human beings usually offer resistance. Friends get alarmed. Family members change the subject. Therapists have training, ethics, and limits. The machine has none of those things unless somebody deliberately engineers them in. And as this wave of reporting suggests, too many companies seem to have been much more interested in making the system feel emotionally convincing than in making it safe under emotional stress.

The story exposes the central absurdity of chatbot culture. We are asked to believe these systems are not persons when accountability shows up, but to design them as pseudo-persons when growth metrics show up.

They are supposedly just tools, except when they are being marketed as companions. They are supposedly incapable of intention, except when the product team wants users to feel understood. They are supposedly not therapists, not lovers, not advisors, and not autonomous actors, yet somehow they keep getting designed to perform a suspiciously lucrative impression of all four.

The industry’s favorite magic trick is to separate behavior from responsibility. If the chatbot says something dangerous, that was an anomaly. If it forms a bond, that is user interpretation. If it intensifies a delusion, that is unfortunate but complicated. If it assists with violent planning, that is misuse.

But if the same system increases retention, emotional engagement, and time spent in product, suddenly its conversational realism becomes innovation. The chatbot is not responsible when harm happens, yet it is absolutely credited when attachment happens. That is not a technical distinction. That is a commercial one.

For executives, regulators, and anyone still under the impression that chatbot risk begins and ends with factual hallucinations, this is the bigger lesson. The real hazard is not simply that a model can be wrong. It is that a model can be relationally persuasive while being operationally unsafe. A machine that gives bad medical advice is dangerous. A machine that becomes your intimate emotional echo chamber and then helps convert fantasy into action is in a different category entirely. That is no longer a content moderation problem. That is a product design and governance failure.

And yes, governance is the least sexy word in this entire story, which is probably why the industry keeps trying to avoid it. Governance means deciding what a chatbot is not allowed to be, even if users love it. Governance means escalation thresholds that do not wait for perfect certainty while danger matures. Governance means refusing to treat emotional attachment as a growth strategy when the product has no stable capacity to recognize instability, coercion, obsession, or paranoia. Governance means admitting that “engagement” is not a morally neutral KPI when the user is spiraling.

The grimmest part of all this is how ordinary the underlying design logic is. None of this requires a superintelligence.

You do not need a rogue machine consciousness plotting mayhem from the cloud. You just need a product built to please, trained on oceans of human language, tuned to keep the conversation alive, and released into a population full of loneliness, grievance, untreated mental illness, and violent fantasy.

That is enough. The future does not always arrive with killer robots. Sometimes it arrives as a warm, agreeable chat window that never learned when to say, “No. I’m done. Get help.”

The chatbot industry spent years promising a machine that always has an answer. The lawsuits and the new research now raise a more uncomfortable question. What happens when the most dangerous thing a machine can do is answer exactly as designed?

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.
