The Citation Fairy Goes to Court

Markus Brinsa | April 15, 2026 | 7 min read

The moment the legal system met autocomplete’s evil cousin

There are many sentences you do not want to hear in a criminal courtroom. But somewhere near the top of the list is this one: the prosecutor appears to have cited cases that do not exist.

Not weak cases. Not controversial cases. Not cases the other side interprets differently. Cases that are simply not there. Legal ghosts. Phantom authority. Made-up precedent dressed in the formalwear of the justice system and sent into court as if reality were an optional formatting preference.

That is what makes the Nevada County story so grimly perfect. A prosecutor’s office in Northern California admitted that AI-related errors appeared in filings across four criminal cases. One prosecutor was removed from case-related duties. A state appeals court was asked to consider sanctions. And suddenly, the legal profession got another reminder that generative AI does not merely make mistakes. It manufactures confidence.

That last part is what always gets people. AI does not arrive wearing a fake mustache and carrying a sign that says I invented this five seconds ago. It arrives polished. Fluent. Eager. It delivers nonsense in the tone of a valedictorian. And because institutions are full of rushed, overloaded, overconfident humans, that tone is often enough to get the nonsense past the first gate.

The machine did not file the brief

The most annoying version of this story is the one where people blame the machine as if ChatGPT, or whatever tool was used, sneaked into the district attorney’s office at night, opened a laptop, and filed criminal papers by itself. It did not.

A human being took generated text, or generated research, or generated citations, looked at it with a level of seriousness apparently best described as aspirational, and sent it into a criminal case. Then another layer of supervision either failed or never really happened. Then the justice system had to waste time sorting out whether the law being cited was actual law or a machine’s improv performance.

That is the story. Not rogue AI. Not sentient software. Institutional laziness assisted by synthetic confidence.

This is why the phrase "hallucination" has always been a little too cute for what is happening. It sounds whimsical, almost charming, as if the software had a strange dream and woke up disoriented. In practice, what it means is fabricated information presented in a form designed to be trusted. In a criminal matter, that should not be treated like a quirky bug. It is closer to procedural contamination.

The machine can invent. Fine. Machines do that. But the prosecutor cannot outsource the duty to know what is real. That is the entire job.

Criminal law is not a sandbox

A lot of AI failure stories are ridiculous in a way that still leaves room for laughter. A chatbot gives someone fake travel advice. A search assistant invents a feature. A model summarizes a book it clearly never read. Embarrassing, yes. Annoying, definitely. But survivable.

Criminal law is different. This is not a brainstorm. It is not a product demo. It is not a vibes-based experiment in workflow optimization. It is the part of the state that can take your liberty away.

That is why this case lands harder than the usual fake-citation circus. The justice system depends on the pretense that everyone in the room is at least trying to stay attached to reality. The prosecution gets extraordinary power on the theory that it will also carry extraordinary responsibility. So when fabricated citations end up in criminal filings, the problem is not simply that the office used a sloppy tool. The problem is that an institution with coercive power behaved like a teenager finishing homework at 11:58 p.m.

And once that happens, trust does not break neatly. It frays. Defense counsel starts wondering what else was careless. Judges start wondering whether other filings are contaminated. The public starts wondering whether “AI efficiency” is becoming the new euphemism for “nobody checked.” That is the reputational cost. The procedural cost is even worse because every correction, challenge, and audit consumes time in a system that already runs on too little of it.

Plausible is now dangerous

The Nevada County admission included something unusually honest: the office said it had not been fully prepared for the risks of generative AI, or for how difficult it is to detect deceptively plausible fabrications without careful scrutiny.

That phrase matters. Deceptively plausible.

That is the whole business model of this generation of AI. It does not need to be consistently right to be widely adopted. It only needs to be smooth enough that humans lower their guard. The output does not scream fake. It whispers competently. It looks like a shortcut for smart people. It flatters the user by seeming useful. And if the surrounding culture already rewards speed, volume, and surface polish, that is often enough to get garbage into production.

Which brings us to the real joke, and it is not funny at all. The legal profession is supposed to be one of the last places where citation is sacred. Lawyers are trained to obsess over authority, wording, precedent, and verification. This is a culture built on footnotes, formalities, and the terror of being wrong in public. If even that environment can end up filing machine-generated fiction, then the broader institutional problem is worse than advertised. Because most sectors are less careful than law, not more.

The fantasy that professionals are naturally immune

There is a comforting myth that highly educated professionals will automatically use AI “responsibly.” This myth survives because it is emotionally convenient. It allows organizations to adopt the tool before they have built the controls. It lets managers imagine that common sense will fill the governance gap. It lets everyone skip the awkward middle step where they admit the technology changes error patterns faster than the institution changes habits.

But the Nevada County story tells a different truth. Expertise does not neutralize generative AI’s failure modes. In some environments, it may even worsen them. Professionals are especially vulnerable to outputs that mimic the style of competence, because they are used to operating quickly inside familiar forms. If the citation looks right, sounds right, and appears in the right kind of document, the brain can glide right past the possibility that it is counterfeit.

That is how machine fiction becomes organizational fact.

And once it does, the cleanup is ugly. Supervisors investigate. Courts review. Opposing counsel digs. Public trust erodes. Suddenly, the office is not saving time. It is spending institutional capital to explain why it mistook autocomplete theater for legal research.

This is not an AI story alone

It would be easy to read this as another sermon about not trusting chatbots. That is true, as far as it goes. You should not trust them. But the more durable lesson is about systems, not software.

Bad AI incidents are usually presented as moments when the model failed. More often, they are moments when the institution reveals its standards. The model merely provided the instrument. The real question is what kind of professional culture was sitting there waiting to use it this way.

Did the office have rules? Did anyone know them? Were they enforced? Did leadership understand that consumer-grade generative fluency is not legal authority? Was there any meaningful review process for AI-assisted work in criminal matters, or was everyone just hoping that “be careful” would somehow do the job of actual governance?

That is what makes this story larger than one county and one office. It shows the difference between a technology problem and a control problem. The technology will keep generating persuasive nonsense. That part is stable. The variable is whether the surrounding institution treats verification as non-negotiable or as an annoying delay.

Right now, a lot of organizations are still in the second category.

The end of the productivity fairy tale

For years, the sales pitch around generative AI has depended on one especially seductive idea: that it can remove friction from knowledge work without creating matching new layers of risk. Write faster. Research faster. Summarize faster. Draft faster. Move faster.

And then reality arrives with a subpoena.

The hidden cost of AI speed is not just in errors. It is in the kind of errors it produces. They are often polished enough to travel farther before they are caught. They slip into memos, briefs, emails, recommendations, and decisions precisely because they do not look broken. They look finished. In bureaucracies, finished-looking work has enormous power. It gets forwarded. Approved. Filed. Relied on.

That is why fake citations in a criminal case matter beyond the legal niche. They show what happens when a machine optimized for plausible language enters an institution optimized for throughput.

The result is not intelligence. It is acceleration without epistemic brakes.

And no, a training session and a memo will not solve that. Any organization serious about using generative AI in consequential settings needs something much harsher and much less glamorous: actual controls, actual review, actual consequences, and actual skepticism toward outputs that arrive too polished and too fast.

In other words, the exact temperament Silicon Valley has spent years treating as a personality defect.

Reality still has filing priority

The dark comedy here is that the legal system, of all places, is being forced to relearn the ancient principle that made-up authority is bad. We have somehow built a technological era in which adults with law degrees, under institutional supervision, in criminal proceedings, needed a fresh reminder that invented precedent is not a labor-saving device.

But perhaps that is the honest state of things. Generative AI has not created human recklessness. It has industrialized it. It has given rushed people a machine that can produce authoritative-looking material at scale, and then it has stood back while institutions discover whether they value truth more than convenience.

Nevada County produced an answer nobody should find reassuring. The truly alarming part is not that the machine lied. It is that the system looked at the lie and said, "close enough."

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

© 2026 Markus Brinsa | brinsa.com™