
Digital Parallel Construction: Detecting and Challenging Hidden AI

by Richard Resch

In the companion to this Column, “When AI Invents the Pixels,” published in the January 2026 issue of CLN, we explored the dangers of prosecutors introducing AI-­enhanced video as substantive evidence at trial. We discussed how some generative upscaling tools can create “hallucinations,” plausible but false details, and how to challenge that evidence under Daubert, Frye, and the trial-­court ruling in State v. Puloka.

But the most dangerous AI “evidence” is not always the video the prosecutor plays for the jury. It can be the digital processing step the prosecutor never discloses to the defense – and the police report never mentions.

Increasingly, law enforcement agencies are deploying AI tools to generate investigative leads. They may use AI enhancement to “read” a license plate that was unreadable; use algorithmic face recognition to generate candidate identities from surveillance images; and use Large Language Models to draft reports in ways that flatten or obscure those digital steps. Once a suspect is identified and physical evidence is found, the AI step disappears from the official narrative.

This is the digital evolution of “parallel construction,” a process in which the government’s official story begins after the critical lead has already been generated elsewhere. The risk is “evidence laundering,” using a nontransparent, potentially unreliable “black box” (a system whose internal processes are hidden or unexplainable) to locate admissible evidence, while concealing the box so the defense cannot meaningfully litigate the Fourth Amendment basis for the intrusion or impeach the investigation’s integrity.

This Column provides a blueprint for detecting hidden algorithmic steps, identifying the “investigative gap,” and litigating suppression and disclosure when AI is doing work the prosecution does not want to admit.

The Mechanism:
Digital Parallel Construction

Parallel construction has long been a dark art in criminal justice. Historically, it could involve agents using classified intelligence to identify a target, then engineering a “clean” stop so the official chain of evidence begins at the traffic stop, not at the undisclosed tip.

Today, the undisclosed “tip” may be an AI system. Consider a common scenario: a robbery is captured on a nighttime CCTV camera. The getaway car is visible, but the license plate is a pixelated smear. To the human eye, it is indecipherable.

In the back office, an investigator runs the footage through consumer software marketed as “AI enhancement.” In Puloka, the court credited testimony that certain AI tools do not merely clarify. They can add large amounts of new pixel data and can “create[] false image detail,” using an enhancement process not reproducible or reviewable by the forensic video community. If an AI tool turns a smear into “ABC-­123,” officers can then run that plate, connect it to a registered owner, and begin traditional investigative steps (drive-­bys, database checks, surveillance, stops, warrants).

However, the police report may not mention the AI step. It may read: “Upon review of surveillance footage and subsequent investigation, officers identified the suspect vehicle.” Or more deceptively: “Officer Smith, relying on training and experience, observed the license plate ABC-­123.”

By the time the defense sees discovery, the case may be built around the “clean” evidence found later (the vehicle, the property, a confession), while the bridge between the crime and the defendant, the identification step, rests on an undisclosed algorithmic inference. If the defense cannot expose that inference, the defense cannot intelligently litigate probable cause, reasonable suspicion, or the credibility of the affiant.

The Puloka Warning:
Why They Try to Hide It

Why would police and prosecutors minimize or omit AI processing steps? Because disclosure can render the identification step vulnerable under ordinary Fourth Amendment and evidentiary principles, especially where the government’s narrative portrays algorithmic output as direct human observation.

In Puloka, the King County Superior Court granted the State’s Frye motion and excluded the defense’s proposed AI-enhanced video exhibits. Crediting the State’s forensic video expert, the court determined the AI tool “added approximately sixteen times the number of pixels” compared to the source footage and “created false image detail,” using an enhancement method “unknown to and unreviewed by any forensic video expert” and not generally accepted in the forensic video analysis community. The court concluded that using AI tools to enhance video for introduction at a criminal trial is a “novel technique” and that the defense, as proponent, failed to establish general acceptance under Frye.

The court further concluded the AI-enhanced output did not satisfy Evidence Rule 401 (relevance) because it did not show “with integrity what actually happened” but instead used “opaque methods to represent what the AI model ‘thinks’ should be shown,” and that Evidence Rule 403 (unfair prejudice vs. probative value) independently required exclusion due to juror confusion and the risk of a time-consuming trial-within-a-trial over a non-peer-reviewable process. The court also accepted testimony that the AI process removed information from the original frames, added information not present in the originals, altered artifacts and shapes, and made proper forensic analysis impossible.
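The scale of that interpolation is worth making concrete. A sixteen-fold increase in pixel count corresponds to a 4x upscale in each dimension, meaning roughly 15 of every 16 pixels in the output were synthesized by the model rather than captured by the camera. The short Python sketch below illustrates the arithmetic; the specific resolutions are hypothetical, not drawn from the Puloka record:

```python
# Illustrative arithmetic only: these resolutions are hypothetical examples,
# not figures from the Puloka record. A 4x upscale per dimension yields
# 16x the pixel count.
src_w, src_h = 480, 270                  # hypothetical low-res CCTV export
scale = 4                                # per-dimension upscale factor
out_w, out_h = src_w * scale, src_h * scale

src_pixels = src_w * src_h               # pixels actually captured
out_pixels = out_w * out_h               # pixels in the "enhanced" output

print(out_pixels // src_pixels)          # → 16 (sixteen times the pixels)
print(1 - src_pixels / out_pixels)       # → 0.9375 (fraction synthesized)
```

However the tool distributes that new detail, the underlying ratio is fixed by the geometry: the overwhelming majority of the output image is generated, not observed.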

Separately, Topaz Labs posted on its own community forum in March 2024: “We strongly do not recommend the use of our Video AI technology for any forensic or legal application.”

For suppression litigation, those points matter less as a rhetorical cudgel and more as a practical lever. Once a defense attorney can show that an officer relied on a generative enhancement process to “see” what the original evidence did not contain, the court can meaningfully scrutinize (1) whether the affidavit accurately described the basis of knowledge and (2) whether the inference was sufficiently reliable to support reasonable suspicion or probable cause, particularly if it was the core link from the crime to the defendant.

This does not mean any use of software is constitutionally suspect. It means undisclosed use, especially the use of generative enhancement presented as “I observed,” creates litigation leverage that many cases cannot survive if the identification step was the keystone.

The Three Vectors of Hidden AI

Hidden AI use in criminal investigations typically appears through three distinct channels, each with its own litigation implications. The first involves algorithmic systems that generate investigative leads – most notably, face recognition tools that produce candidate lists from surveillance images. The second involves AI enhancement of degraded media, which can enable identifications that the original footage could never support. The third, and newest, involves generative tools that draft the very reports defense counsel will rely on to understand the investigation. Each vector creates different risks, but all share a common feature: the capacity to perform critical investigative work that is then omitted from the official record.

The “Investigative Lead” Trap:
Face Recognition and
Algorithmic Candidate Lists

Many agencies use face recognition tools (including vendors like Clearview AI and government systems) to generate candidate lists from images. The legal danger is not simply that an algorithm produces a lead; it is that the lead is treated as a reliable identification, especially when the probe image is low quality and when the subsequent warrant paperwork is written as though a human made a confident match.

The case of Nijeer Parks illustrates the risk. In 2019, Parks was arrested and charged with shoplifting, aggravated assault, and other offenses stemming from an incident at a Hampton Inn in Woodbridge, New Jersey, a town Parks says he has never visited. The charges carried potential prison time of up to 10 years. Parks was jailed, and although the charges were eventually dismissed, he spent months fighting a prosecution for a crime he did not commit. The bridge from the crime to Parks was a face recognition search.

Parks later filed a federal civil rights lawsuit, and the ACLU of New Jersey submitted an amicus brief detailing what allegedly went wrong. According to the brief, officers received a lead from face recognition technology and, within a short time, proceeded without undertaking a reliable confirmatory step. The warrant applications that followed were allegedly misleading about what the technology actually did and overstated the reliability of the result (including use of a “high profile comparison” characterization the amicus brief describes as essentially invented). Two practice points follow.

First, treat face recognition as you would treat any tipster, i.e., demand the underlying basis and reliability. A candidate list is not a “match.” At best, it is a ranked set of possibilities produced by a probabilistic system.

Second, be alert to narrative laundering. When reports say “identified through investigative measures” but never explain how the police moved from a blurry image to a named suspect, assume there is an algorithm in the gap until proven otherwise.
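To make concrete why a candidate list is not a “match,” consider a toy illustration (the names, scores, and threshold below are invented for demonstration, not drawn from any vendor’s system): a probabilistic system returns several plausible candidates ranked by similarity score, and even the “top hit” may fall below any defensible confidence floor.

```python
# Toy illustration only: candidate names, similarity scores, and the
# threshold are invented. A face recognition system returns a *ranked
# candidate list*, not a confirmed identification.
candidates = [
    ("Candidate A", 0.62),
    ("Candidate B", 0.58),
    ("Candidate C", 0.57),
]
THRESHOLD = 0.80  # hypothetical confidence floor a vendor might recommend

# The "top hit" is simply the highest score in the list...
top_name, top_score = max(candidates, key=lambda c: c[1])
print(top_name, top_score)

# ...and here it falls well below the threshold: prints False.
print(top_score >= THRESHOLD)
```

When a report converts the first line of such a list into “identified the suspect,” the probabilistic character of the lead has been erased, which is precisely the laundering step counsel must expose.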

“Magical Identification”
in Low-­Quality Video

This is a recurring red flag. Discovery includes low-­resolution video in which the suspect’s face is a blur, but the report claims an officer “immediately recognized” the defendant.

Sometimes officers truly do recognize someone they know. But unless the report establishes a specific, documented basis (prior contacts, distinctive features visible in the original, a contemporaneous recognition), a “magical” recognition in objectively poor imagery can be a tell for undisclosed processing: enhancement, database searching, face recognition candidate review, or a combination of all three, followed by a rewritten narrative to make the identification look human and inevitable.

Axon “Draft One”:
Generative Report Writing

A newer risk comes from generative report drafting tools. Axon markets “Draft One,” which uses generative AI to produce a draft narrative from body-­worn camera audio / transcripts. This is not mere transcription; it is text generation. The legal risk is twofold.

Accuracy: Like other generative systems, such tools can produce fluent narratives that contain mistakes, including misattributed quotes, inferred intent, smoothed sequences that are not actually supported by the underlying audio, or subtle omissions.

Preservation and transparency: In September 2024, the King County Prosecuting Attorney’s Office sent law enforcement partners an email stating it would not accept police report narratives produced with AI assistance. The email identified multiple concerns, including CJIS compliance issues (the FBI’s security standards governing criminal justice data) for publicly available tools, the risk of hallucinations, and that Draft One “does not keep a draft of what it produces or what the officer fixed / added,” leaving no way later to prove what the AI wrote versus what the officer changed. That preservation gap has real litigation consequences because it directly affects impeachment, discovery, and the ability to litigate suppression or credibility disputes.

Red Flags:
Detecting the Invisible

Because the algorithmic step may be omitted from reports, look for the “investigative gap,” the moment police suddenly knew something they could not plausibly know from the raw materials disclosed.

Resolution Mismatch: The video is low-­resolution; the face is indistinct; the plate is unreadable; but a suspect is “identified” rapidly with no described method.

Narrative Mismatch: The report asserts confident “observation” of details (plate characters, precise facial features, identifying marks) that are not visible in the native media.

Temporal Gaps: Police recover video on Monday and “identify” a suspect on Tuesday, but no witnesses were interviewed in between. Ask: what happened during the gap?

Metadata Traces: Check filenames, export chains, and file properties in discovery (e.g., “enhanced,” “upscale,” “AI,” “remaster,” “stabilized,” “export,” unusual codecs, or unusual timestamps).

Too-­Perfect Tips: A “confidential tip” arrives that aligns perfectly with what an algorithm could have produced (a name from a blurry face, a plate from indecipherable footage). Treat the “tip” as a label, not a self-­proving fact, and demand the underlying basis.
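For counsel (or a retained expert) comfortable with a short script, the metadata triage described above can be partially automated. The sketch below is a minimal starting point, not a forensic tool: it flags files in a discovery folder whose names contain processing-related keywords. The keyword list mirrors the red flags above, and substring matching is deliberately over-inclusive; every hit still requires human review.

```python
from pathlib import Path

# Minimal triage sketch (not a forensic tool): flag discovery files whose
# names suggest undisclosed processing. Substring matching is deliberately
# over-inclusive; every hit requires human follow-up.
KEYWORDS = ("enhanced", "upscale", "remaster", "stabilized", "export")

def flag_suspicious(discovery_dir: str) -> list[str]:
    """Return names of files containing any processing-related keyword."""
    hits = []
    for path in Path(discovery_dir).rglob("*"):
        if path.is_file() and any(k in path.name.lower() for k in KEYWORDS):
            hits.append(path.name)
    return sorted(hits)
```

A companion pass over file properties (creation vs. modification timestamps, container formats, codec strings) can surface derivatives the filenames do not reveal.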

Discovery Strategy:
Piercing the Veil

To defeat digital parallel construction, draft discovery to expose the missing step. Ordinary requests for “all reports” and “all videos” fail when the AI step is never documented in a report and the intermediate artifacts are not preserved by default. Discovery must be process-­focused, not just product-­focused.

Send Preservation
Demands Early

Before litigating, preserve. Send a written preservation demand covering:

original source media (native files, original exports, and any original cloud evidence links);

all derivative files created from the source media (enhanced versions, stabilized versions, upscaled versions, cropped versions, still frames, screenshots, and composites);

all metadata associated with the source and derivatives (hash values, timestamps, device identifiers, export logs);

the full chain of custody for the source media and all derivatives;

the identity of every person who handled, exported, processed, or modified the media;

the name of any software, tool, platform, or vendor used to process or analyze the media (including version numbers);

all settings, parameters, presets, filters, and workflows used (including “auto” settings and default presets);

all intermediate outputs (including failed or discarded outputs) and any notes reflecting why an output was selected;

any communications reflecting AI-­assisted identification (emails, texts, messages, case notes, taskings, vendor portal logs).
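The hash values demanded above are what make the rest of the list enforceable. A cryptographic hash is a fingerprint of a file’s exact bytes: if the hash of a file produced in discovery does not match the hash of the claimed original, the produced file is a derivative (a re-export, re-encode, or enhancement), even if it looks identical on screen. A minimal verification sketch:

```python
import hashlib

# Minimal sketch: verify whether a produced file is bit-for-bit identical
# to the claimed original. Any mismatch means the file is a derivative,
# even if the two play back identically.
def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_unaltered(original_path: str, produced_path: str) -> bool:
    """True only if the two files are byte-identical."""
    return sha256_of(original_path) == sha256_of(produced_path)
```

If the agency recorded hashes at the time of collection, counsel can compare them against the production without ever needing the agency’s cooperation at the verification step.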

Use Targeted Interrogatories
and Requests

Do not ask: “Did you use AI?” Instead, ask:

“What tools or software were used to process, enhance, interpret, or analyze the source media, and by whom?”

“Describe each processing step in sequence, including the inputs and outputs.”

“Identify the tool’s version and the precise settings or presets used.”

“Identify what artifacts, logs, drafts, outputs, and audit trails the tool creates by default.”

“Identify what the agency did in this case to preserve those artifacts, logs, drafts, outputs, and audit trails.”

“If the system does not retain those materials, identify what is not retained, how it is ordinarily stored, and what steps were taken in this case to preserve it.”

That last point is critical. Some systems are designed so the “draft” disappears unless an agency affirmatively saves it. If you do not ask about retention and preservation explicitly, the most important evidence about how the narrative was created can be lost before you ever litigate it.

For face recognition, requests should include:

the complete candidate list returned (not just the “top hit”);

all similarity / confidence scores, thresholds, and ranking criteria;

the reference / gallery images used by the system;

the probe image(s) submitted and any preprocessing applied (cropping, sharpening, enhancement, normalization);

the date / time of each search;

any human review steps and the identities of reviewers;

communications conveying the “hit” and any instructions that followed;

policies, training materials, and limitations guidance provided to users;

validation materials, error-­rate discussions, and known limitations disclosed to the agency.

Demand the Audit Trail –
But Draft Realistically

If you suspect Draft One or a similar tool was used, request prompts, transcripts, draft outputs, edit histories, and logs, but draft with the real preservation problem in mind:

what inputs were provided to the system (audio files, transcripts, officer notes);

the generated draft(s) and timestamps;

any revisions, edits, or “regenerations”;

the final exported report and its creation history;

the tool’s retention policy and the agency’s actual retention in this case;

any audit logs reflecting who generated and edited the draft and when.

Request Policies,
Contracts, and NDAs

Hidden AI often persists because of vendor contracts, nondisclosure terms, and “trade secret” objections. Demand:

policies and training materials on AI, enhancement, face recognition, and report drafting;

procurement documents and contracts;

nondisclosure provisions restricting disclosure to courts / defense;

validation studies, error-­rate materials, and internal guidance describing limitations.

Even where the government resists disclosure, these documents support motions for in camera review, protective orders, compelled disclosure under court supervision, or remedies tied to nondisclosure.

Legal Challenges:
Suppressing the Fruit
(and Compelling Disclosure)

Once you expose hidden AI use, frame the litigation precisely. The strongest arguments focus on the government’s representation of the basis for the intrusion and the reliability and transparency of the identification step.

Franks (False Statements
and Material Omissions)

Franks v. Delaware allows a defendant to challenge a warrant affidavit that contains deliberate or reckless falsehoods (and, in many jurisdictions, deliberate or reckless material omissions) that are essential to the issuing judge’s probable-­cause determination. The usual framework matters. The defense must make a substantial preliminary showing to obtain a hearing, and ultimately must prove the falsity (or omission), the requisite culpable state of mind, and materiality. If the affidavit, corrected by excising the false statements and/or adding the material omissions, fails to establish probable cause, the warrant may be voided and suppression may follow for the warrant’s fruits (subject to jurisdiction-­specific doctrines and any asserted exceptions).

If an officer swore, “I observed the plate ABC-­123,” but in fact the plate emerged only after an enhancement process that interpolated or generated detail, the affidavit may misrepresent algorithmic output as direct human perception. Likewise, “I recognized the suspect” can be misleading if the true process was a face recognition candidate list followed by confirmatory police work that was then written up as the origin of the identification.

Material omissions may include the use of face recognition; the existence of a candidate list rather than a confirmed match; the thresholds used; known limitations or error rates; the fact that the probe image was low quality; or the fact that enhancement preceded the “observation.” The defense should focus on what an issuing judge would need to evaluate probable cause and reliability as well as what the affiant concealed.

The key analytic step is orthodox. Correct the affidavit and test what remains. Excise the claimed human “observation” if it was actually algorithmic output. Add the omitted algorithmic facts that bear on reliability. Then ask whether probable cause survives. If it does not, suppression generally follows for the warrant’s fruits, subject to jurisdiction-specific doctrines and recognized exceptions (including attenuation, independent source, and inevitable discovery).

Fourth Amendment Reasonableness: Treat the Algorithm Like a Tipster

Whether the case involves a warrantless search or seizure or a warrant, the core Fourth Amendment question is reasonableness (probable cause or reasonable suspicion), and that inquiry turns on reliability. As a litigation strategy, treat algorithmic output like an informant tip. This framing leverages a framework courts already use for evaluating unverified leads (basis, reliability, corroboration) without requiring the court to adopt novel doctrine. Ask:

What is the basis?

What is the reliability?

What corroboration exists?

Was the corroboration independent, or did it simply confirm what the algorithm already suggested?

Did officers treat a probabilistic output as a fact?

Did the “clean” investigative steps exist only because the algorithm pointed to the defendant first?

An uncorroborated algorithmic “best guess,” especially when generated from degraded inputs, should not automatically become reasonable suspicion or probable cause. And even when corroboration exists, defense counsel should probe for confirmation bias. Once a ranked list exists, investigators may unconsciously interpret ambiguous later evidence as confirmation, because the suspect has already been selected by a machine.

Brady / Giglio and Trial Testimony: When the Clean Narrative
Becomes Misleading

The nondisclosure of AI use can enter Brady / Giglio territory when it produces misleading testimony or conceals impeachment evidence that is material and favorable. The most effective framing is a targeted argument that the prosecution cannot present a clean narrative that implies direct human perception when the true basis was algorithmic inference.

If a witness testifies in a way that implies direct human observation – “I saw,” “I recognized,” “I read the plate” – when the truth is “the system output a candidate list,” “the tool generated a clearer-­looking plate,” or “the draft narrative was generated from a model,” that discrepancy can be powerful impeachment material. It can also satisfy Brady materiality when the hidden step is essential to probable cause or when credibility is central to guilt.

This is an area where careful, case-specific pleading is required. Some courts frame Brady primarily as a trial right tied to guilt / punishment; others are more receptive to pretrial disclosure arguments when credibility and the integrity of probable cause are at stake. In practice, defense counsel should plead disclosure duties under multiple grounds as applicable: constitutional due process (Brady / Giglio where material), Rule 16 / state discovery rules, public records statutes where available, and the court’s inherent authority to ensure a fair process. Do not overstate Brady. Instead, tie the nondisclosure to concrete prejudice: the inability to test the basis for the intrusion, to cross-examine the identification process, to challenge reliability, and to impeach misleading testimony.

Conclusion

The threat facing criminal defendants extends beyond the overt introduction of AI-­enhanced evidence. The deeper danger is that AI will quietly perform the decisive identification step and then disappear. The algorithm never appears in any report, never appears in the affidavit, and never appears at a hearing. The defense is left to litigate a story in which identification was purportedly “obvious,” “immediate,” and “human.”

We cannot allow “magical identification” to become the new probable cause. When the raw materials do not support the claimed certainty, the defense must insist on the missing step. What happened during the gap? What tools were used? What was generated, and what was preserved? Often, the answer is a hidden algorithm.

And unlike a human witness, an algorithm cannot take an oath, cannot be meaningfully cross-­examined, and cannot be shamed into candor. If the government’s narrative relies on invisible machine inference, defense counsel must expose that inference through preservation demands, process-­focused discovery, and litigation that insists on reliability and truthfulness. Probable cause must be based on facts that can be tested, not on concealed black boxes.  

Sources: Axon Enterprise, Inc., Draft One (2024); Brady v. Maryland, 373 U.S. 83 (1963); Franks v. Delaware, 438 U.S. 154 (1978); Giglio v. United States, 405 U.S. 150 (1972); King Cnty. Prosecuting Att’y’s Off., Email to Law Enforcement Partners re AI-­Generated Police Report Narratives (Sept. 25, 2024); Parks v. McCormac, No. 2:21-­cv-­04021-­JKS-­LDW (D.N.J.) (ACLU-­NJ amicus brief filed Jan. 29, 2024); State v. Puloka, No. 21-­1-­04851-­2 KNT (King Cnty. Super. Ct. Mar. 29, 2024); Topaz Labs, Using Topaz AI for Legal Video Work, Topaz Cmty. F. (Mar. 5, 2024).
