
Police AI and “Sycophancy”: New Evidence Tools May Tell Cops Exactly What They Want to Hear

by Michael Dean Thompson

Every day, new digital tools generate massive piles of data for law enforcement to winnow for grains of truth. Each new tool brings a new need for expertise to tell whether a proffered grain is good data or virtual chaff. The challenge spans facial recognition tools, body cam recordings, automated license plate readers, and cell tower dumps, along with feeds from data brokers containing location data and keyword searches. A new line of tools uses generative AI, much like that behind ChatGPT and Gemini, to digest these myriad inputs and produce investigative analyses and leads.

Generative AI tools such as Tranquility’s TimePilot can ingest data from body cams and automated license plate readers like those of Axon, Ring doorbell cameras, Cellebrite’s cellphone extractions, Cash App, Venmo, and prison phone calls. TimePilot can also digest Facebook, Instagram, and TikTok data. Its makers further claim it can read 120 languages, including idiomatic slang. Yet however fascinating their power and breadth of skills, these tools are difficult to assess for accuracy, and it remains unclear how they fit into a realm where an individual’s liberty is at stake.

One glaring concern is that these tools may not divulge exculpatory evidence. Advocacy groups warn specifically of the risk of “sycophancy” – instances where the technology simply tells a user what it thinks they want to hear. Consequently, the tools will answer the questions they are given and likely ignore counterfactual data, reinforcing confirmation bias rather than checking it. And there is no way to know when that has happened. Jumana Musa, director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers, told The Record, “AI is not trained to be a prosecutor; it is trained to look for particular things and put them together.”

On that question, the toolmakers have largely punted the responsibility back to the users. Patrick Robinson, CEO of Allometric, told The Record in an email, “Attorneys remain ultimately responsible for identifying and disclosing exculpatory evidence and complying with their discovery obligations.” Similarly, Tejas Shastry, Truleo’s cofounder and Chief Technical Officer, noted that “it is up to the investigator how to use those summaries to further the case.” But would it count as a Brady violation if the AI ignored exculpatory evidence and never presented it to the investigator?

TimePilot and its brethren have found an eager market that continued to expand through the end of 2025. Ian Adams, a criminal justice professor at the University of South Carolina, observed, “This is a new category of AI products that I see a lot of development, commercialization, and promise in.” Indeed, in October 2025, these technologies were a focal point at the International Association of Chiefs of Police conference, where companies like Tranquility and Carahsoft showcased their solutions to a wider audience. By December, reports from NewsNation highlighted that police departments were increasingly turning to these tools to manage data overload.

Adams has led independent research in the use of AI by law enforcement, including evaluating a product from Truleo. Despite the allure of an all-knowing AI providing the answers, due diligence still falls on the investigator. Adams added, “The ‘savings’ the vendor promises never fully materialize, because you can’t safely shortcut the due diligence.”

Civil rights experts are right to be concerned about how an AI is trained and what that will mean for investigations. Training is often shrouded in proprietary trade secrets. So, while corporations may pay lip service to the need for transparency about training, they do little else. Tranquility’s website both acknowledges and sidesteps the issue by saying that AI companies “must be able to explain how AI models work, what data they use, and what limits they carry.”

The promise of low cost, vast reach, and expert analysis will push investigators to incorporate the tools with little regard for the bigger picture. However, the legislative landscape began to shift in 2025. While courts have been slow to limit these unproven technologies in the courtroom, state legislators took action, with Utah enacting a law in March and California following in October to establish the first guardrails on law enforcement’s use of AI. Andrew Guthrie Ferguson, a law professor at George Washington University, noted that AI “shifts primary responsibility from a democratically-authorized and licensed lawyer to an undemocratic and unlicensed algorithm.”  
Sources: The Record, Smart Cities Dive, Tranquility AI