How Law Enforcement’s Integration of AI Fails the Public
by Jo Ellen Nott
A recent incident in Utah highlights a growing and problematic trend in law enforcement: the adoption of artificial intelligence to automate essential administrative duties.
In Heber City, the police department recently faced scrutiny after its report-writing software, Axon’s “Draft One,” produced a narrative claiming a responding officer had transformed into a frog. While the department dismissed the “hallucination” as a byproduct of a movie playing in the background, the incident points to a more profound reality: AI in policing serves administrative convenience rather than public safety.
The primary argument for AI integration, as voiced by Heber City Sergeant Rick Keel, is time savings. Officers report saving six to eight hours a week by allowing algorithms to summarize body-camera footage. However, the pursuit of efficiency for its own sake does not translate to safer streets. When police departments prioritize “user-friendly” shortcuts over rigorous documentation of evidence, the integrity of the legal process is compromised. AI’s tendency to insert “silly sentences” or factually incorrect data, known as “hallucinations,” which officers may fail to catch in a rush to file, creates a high risk of false accusations and wrongful arrests.
The claim that “giving cops more free time” naturally improves safety is an unproven assumption. Critics argue that time saved on paperwork is often redirected toward increased surveillance or low-level stops that do not address violent crime. Furthermore, the lack of human oversight in AI-generated reports introduces a layer of “plausible deniability” for officer misconduct. If a report is found to be inaccurate or biased, the blame can be shifted to the algorithm, shielding the officer from accountability.
Research beyond the Utah incident supports the conclusion that AI does not inherently improve safety. A 2019 study published in the New York University Law Review, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” found that predictive policing and AI-driven tools often rely on “dirty data,” historical records shaped by systemic biases. When AI is used to identify “hot spots” or automate reports, it tends to reinforce existing prejudices rather than discover new threats.
A report from the Electronic Frontier Foundation notes that AI tools like facial recognition have a documented history of wrongly identifying individuals, particularly people of color. In the context of criminal law, these technological failures lead to constitutional violations, such as automated systems being used to justify searches in a weakening of Fourth Amendment protections. The failures also create legal instability, increasing the likelihood that cases will be dismissed because of unreliable, machine-authored sworn statements. Perhaps most concerning for the average hard-working American, millions of dollars are being funneled into software that requires constant human correction to prevent hallucinations.
The legal system has an ever-growing list of cases in which AI was used to the detriment of the parties involved. In Nevada County, California, a prosecutor used AI to draft a motion in the drug case of People v. Kalen Turner. The filing was withdrawn in November 2025 after the court discovered it contained hallucinated citations to nonexistent legal cases.
In Palm Beach County, Florida, defense attorneys Scott Skier and Nellie L. King are aggressively using the lack of “audit trails” (records of what the AI wrote versus what the officer wrote) to challenge the credibility of officer testimony.
AI-generated “hallucinations” have compromised at least 13 Pennsylvania cases. Notable fallout includes a $1,000 fine for one pro se plaintiff and the dismissal of a sex discrimination suit. Recently, veteran attorneys faced judicial scrutiny in the Commonwealth Court after submitting a brief riddled with nonexistent citations, threatening both the integrity of established legal precedent and their professional credibility.
In a groundbreaking move, the Prosecuting Attorney’s Office in King County, Washington, became the first in the U.S., in September 2024, to refuse AI-assisted police report narratives, citing the risk of hallucinations and other inaccuracies, the absence of a reliable audit trail showing what the AI generated versus what the officer wrote or edited, and related compliance concerns.
The bottom line is that the militarization and automation of local law enforcement through AI represent a shift toward a “quantity over quality” model of policing. By prioritizing the speed of documentation over the accuracy of the facts, departments are not making the public safer. They are simply making it easier to prosecute or persecute citizens through a flawed and increasingly automated legal system.
Sources: California County News, Electronic Frontier Foundation, New York University Law Review, Spotlight PA, TechDirt.