
AI Honeypots: Police Are Using Chatbots to Pose as Teens and Sex Workers to Entrap Suspects

The surveillance state has a new tool in its arsenal, one that takes advantage of advances in AI large language models. Only this time, the chatbot's conversations could land people in prison. The product, from Massive Blue, is called Overwatch, and its stated purpose is to collect information on "college protestors" and "radicalized political activists" as well as human and drug traffickers.

The AI agent begins by quietly scanning open social media and other internet channels to identify potential suspects. It then adopts personas ranging from "juveniles," such as a 14-year-old boy, to pimps, escorts, and protestors. One so-called "honeypot" persona (a honeypot is an enticement too sweet to pass up, a term also used in hacking) is a 25-year-old Dearborn, Michigan, woman whose parents came to the United States from Yemen. Not only does she speak the appropriate dialect of Arabic, but she can also use several social media apps and Signal. Likewise, the 14-year-old boy, Jason, is a bilingual only child of Ecuadorian descent who is "shy" and "has difficulty interacting with girls."

The New York company is selling the product for hundreds of thousands of dollars to police departments near the U.S.-Mexico border. But the tech is as yet unproven, and the developer is secretive. What is currently known comes from internal documents, contracts, and communications that 404 Media obtained from police departments through public records requests. When asked about the product, Massive Blue's founder gave a non-answer that borders on trite: "Our primary goal is to help bring these criminals to justice while helping victims who would otherwise remain trafficked. We cannot risk jeopardizing investigations and putting victims' lives in further danger by disclosing proprietary information." Massive Blue refused to say which police departments use its tool or even how many arrests it has helped generate.

That answer was not much different from the one the Pinal County Board of Supervisors received. When Kevin Cavanaugh, then the supervisor for Pinal County's District 1, asked the Chief Deputy to explain the program, he was told, "I can't get into great detail because it's essentially trade secrets, and I don't want to tip our hand to the bad guys." A simple "I don't know" would have sufficed. Such secrecy, even when talking to the people who hold the purse strings and who are presumably on the same side, should always be a red flag.

There are challenges to using AI bots to find and arrest criminals, especially when they may end up snaring a victim in the process. AI bots lack a detective's goal-orientation but carry all of the same hidden biases, if not more. Likewise, it may be next to impossible to write limiting prompts that keep the AI from breaking laws or victimizing the innocent in such highly volatile situations. Even major tech firms like Microsoft and Google have seen their AI turn vicious in response to unexpected prompts, and those failures are usually discovered only when people intentionally push the AI's limits. What happens if one of the personas tells a victim or suspect to kill themselves, as other bots have done? It may never happen, but if it does, it is all the more tragic because of the situation and because the truth may never come to light.

As long as its operational use remains secret, there can be no confidence that rights are not being violated. We do not know whether transaction logs (such as messages) are digitally signed so that they are tamper-evident. Can detectives omit certain transactions from the record? Or will cops treat the bots as confidential informants and fail to mention that an AI generated the lead (as a Florida department did with a cell-site simulator)?
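To make that concern concrete, a tamper-evident log is a well-understood technique, and the sketch below is purely hypothetical (nothing suggests Massive Blue, or any police department, actually works this way). It shows one standard approach: each chat message is hash-chained to the previous one and signed with a key held by an independent auditor, so that deleting, reordering, or editing an entry is detectable after the fact.

```python
# Hypothetical sketch of tamper-evident chat logging (not Overwatch's method).
# Each entry records a hash of the previous entry, so removing or altering
# any message breaks the chain and is caught on audit.
import hashlib
import hmac
import json
import time

# Assumption: this key is held by an independent auditor, not the investigating agency.
AUDIT_KEY = b"held-by-an-independent-auditor"

def append_entry(log, sender, text):
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"timestamp": time.time(), "sender": sender, "text": text, "prev_hash": prev_hash}
    serialized = json.dumps(body, sort_keys=True).encode()
    entry_hash = hashlib.sha256(serialized).hexdigest()
    signature = hmac.new(AUDIT_KEY, serialized, hashlib.sha256).hexdigest()
    log.append({**body, "entry_hash": entry_hash, "signature": signature})
    return log

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("timestamp", "sender", "text", "prev_hash")}
        serialized = json.dumps(body, sort_keys=True).encode()
        if entry["prev_hash"] != prev_hash:
            return False  # an entry was removed or reordered
        if entry["entry_hash"] != hashlib.sha256(serialized).hexdigest():
            return False  # an entry was altered
        expected_sig = hmac.new(AUDIT_KEY, serialized, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["signature"], expected_sig):
            return False  # an entry was forged
        prev_hash = entry["entry_hash"]
    return True

log = []
append_entry(log, "persona:Jason", "hey")
append_entry(log, "suspect", "who is this?")
assert verify(log)
del log[0]              # someone quietly drops a message from the record...
assert not verify(log)  # ...and the audit catches it
```

Without something like this, a chat log is just text that whoever controls the system can edit at will, which is exactly why the secrecy matters.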

After six months of use and nearly half a million dollars, Pinal County's Sheriff's Office said it had not yet made any arrests, though it was following an arson lead. Of course, any further information is unavailable.

Massive Blue’s Overwatch represents a concerning leap in surveillance: AI chatbots designed to deceive and entrap, posing as vulnerable individuals—a lonely teen, an immigrant woman, a sex worker—while operating in near-total secrecy. Police departments are spending hundreds of thousands of dollars on this unproven tool, refusing even basic oversight. Yet, history shows that unchecked surveillance tech, from Stingrays to facial recognition, is routinely abused, targeting marginalized communities while evading accountability. 

The risks are chilling. What happens when an AI persona, mimicking a trafficker or a desperate teen, pushes someone toward self-harm or a crime? How do we prevent fabricated evidence or wrongful arrests when even law enforcement won’t disclose how the system works? Without transparency, we risk normalizing a world where every online interaction could be a police trap—and where justice gives way to manipulation. If AI policing can’t operate in the light, it shouldn’t operate at all.  

 

Sources: 404 Media, Wired
