
Heber City’s police department decided to pilot an AI report-writing tool that transcribes body‑camera recordings into draft police narratives. According to local news, everything was going smoothly — until one report said that an officer “morphed into a frog” in the middle of their duties. No, the officer wasn’t reenacting a fairy tale; the AI had been listening to The Princess and the Frog playing in the background and concluded that the movie's dialogue was part of the event.
Human reviewers quickly caught the glitch. The department clarified that it employs no amphibious officers and that the error stemmed from the AI's inability to distinguish background media from real-world events.
Internet users couldn’t get enough of the “frog cop” story. Memes circulated of officers leaping onto lily pads, and some commenters joked that the AI had given new meaning to “undercover” policing. Yet police officials treated the incident as an instructive example: the AI tool is a time‑saver, but it can also hallucinate when background audio confuses its transcription. A sergeant noted that the system saves him six to eight hours of paperwork per week, but emphasized that officers must read and edit every report before submission.
Hallucinations happen. AI text generators can easily insert fictional elements when they misinterpret audio or context. Without a human check, such hallucinations could have serious consequences in legal documents or official records.

Human oversight remains essential. The “frog cop” is a humorous case, but similar errors in more serious contexts (e.g., arrest narratives or medical summaries) could lead to lawsuits or real harm.

Transparency builds trust. The department’s willingness to discuss the glitch publicly helps build confidence in the technology’s rollout. By addressing mistakes openly, agencies can refine AI tools and set realistic expectations.
While some readers propose scrapping AI from police work entirely, Heber City plans to keep testing the software with strict review processes. The department’s experience underscores the importance of properly training AI systems, filtering out irrelevant audio, and ensuring that human professionals retain final responsibility for official reports. In other words, AI can help lighten the paperwork load — but it can’t replace the judgment of a human officer (or save them from turning into a frog).