
When AI Invents the Pixels: Challenging AI-Enhanced Video Evidence in Criminal Cases

by Richard Resch

When prosecutors offer “enhanced” surveillance footage or body-camera video, defense counsel must understand what enhancement actually means. Traditional forensic methods such as adjusting brightness, applying contrast filters, or using established interpolation algorithms like nearest-neighbor or bi-cubic scaling operate directly on the captured data. Even when these methods upsample an image – increasing the video’s resolution by algorithmically adding pixels – the new pixel values are determined by transparent mathematical formulas applied to the original pixels; they do not import external visual priors or invent new semantic content.
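
To make the distinction concrete, the sketch below (in Python, using the Pillow imaging library; the input filename and sizes are hypothetical) performs the kind of transparent, deterministic upscaling the traditional methods use. Every output pixel is a fixed mathematical function of the input pixels, so any independent examiner who runs the same operation on the same file will obtain the identical result.

```python
# A minimal sketch of traditional, deterministic upscaling with Pillow.
# "frame.png" is a hypothetical input file; the resampling filters shown
# are the transparent, formula-driven methods discussed above.
from PIL import Image

frame = Image.open("frame.png")                # e.g., a low-resolution still
target = (frame.width * 4, frame.height * 4)   # 4x per axis

# Nearest-neighbor: each new pixel copies the value of the closest original pixel.
nearest = frame.resize(target, Image.Resampling.NEAREST)

# Bicubic: each new pixel is a fixed weighted average of a 4x4 neighborhood of
# original pixels, a published formula that yields the same output on every run.
bicubic = frame.resize(target, Image.Resampling.BICUBIC)

nearest.save("frame_nearest_4x.png")
bicubic.save("frame_bicubic_4x.png")
```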

In contrast, modern AI “enhancement” tools are built on generative architectures – often generative adversarial networks (“GANs”), diffusion models, or related deep generative networks. These systems are trained on massive datasets and predict what missing or blurred details should look like based on patterns learned from other images. Technically, this is generative image restoration, not simple rescaling. The model is not recovering details that the sensor lost; it is hallucinating plausible detail that fits the learned prior.

A 2024 study published in the proceedings of the Conference on Neural Information Processing Systems (“NeurIPS”) formalized this risk. Researchers analyzed generative restoration models and proved a fundamental tradeoff between perceptual quality and uncertainty: the minimal achievable uncertainty of such models grows as perceptual quality improves. In fact, attaining “perfect” perceptual quality requires at least twice the inherent uncertainty of the underlying restoration problem. In other words, as generative models make images look more photo-realistic, they necessarily become more uncertain about the true underlying scene, and that uncertainty manifests as hallucinations, i.e., realistic-looking details that never existed in the ground-truth image.
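
Stated schematically (the notation below is illustrative rather than taken from the paper itself), the result can be written as follows, where U(P) denotes the minimal achievable uncertainty of any restoration method whose perceptual index is at most P, with P = 0 representing “perfect” perceptual quality, and U_min denotes the inherent uncertainty of the restoration problem:

```latex
% Schematic form of the perception-uncertainty tradeoff described above.
% The notation is illustrative; see Cohen et al. (2024) for the formal statement.
\[
  U(P)\ \text{is non-increasing in}\ P,
  \qquad
  U(0) \;\ge\; 2\,U_{\min}.
\]
```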

This is not image sharpening. It is statistical prediction masquerading as enhancement. And when that prediction enters a courtroom, it is presented to jurors as if it were a clearer window onto reality, even though the clearest details may be algorithmically imagined. That is precisely why courts and forensic standards bodies have begun to treat generative “enhancement” as a fundamentally different category of evidence than traditional, transparent image processing.

The Admissibility Challenge

The fundamental problem with AI-enhanced evidence is opacity. Unlike traditional forensic tools whose operations can be replicated and verified, these systems function as black boxes. Defense counsel cannot examine the training data, cannot reproduce the enhancement process, and cannot meaningfully challenge the output. As Professor Brandon Garrett of Duke Law School and Professor Cynthia Rudin argue in their article in the Proceedings of the National Academy of Sciences, when a person’s liberty is at stake, only “glass box” systems that can be examined, interpreted, and validated should be employed. Black box AI poses heightened risks to both public safety and fundamental constitutional rights, particularly in criminal justice settings where data is often “noisy, highly selected and incomplete, and full of errors.”

Three distinct problems emerge. First, the training data problem: AI models learn from whatever images they were fed during development. If the training set contained demographic biases, the model will reproduce those biases in its predictions. The U.S. Department of Justice’s (“DOJ”) December 2024 report on artificial intelligence in criminal justice explicitly acknowledges this concern, recommending that AI-based forensic tools undergo demographic bias assessment and document their training data sources.

Second, the non-deterministic output problem: running the same video through the same AI tool can produce different results depending on initialization parameters and random number generation. This violates a core principle of forensic science, i.e., reproducibility. The Scientific Working Group on Digital Evidence (“SWGDE”), which sets standards for forensic video analysis, warns that machine learning techniques make it “challenging to identify what processes were applied to the imagery and replicate those steps with accuracy.”
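
The reproducibility problem can be demonstrated in miniature. The Python sketch below uses a toy stand-in for a generative upscaler – a function whose output depends on random draws as well as on the input pixels – to show why two runs of the same “tool” on the same file need not match. Real tools are vastly more complex, but the structural point is the same.

```python
# A toy illustration (not any real tool's API) of why unseeded generative
# sampling is not reproducible: the output depends on random draws, not
# only on the input pixels.
import hashlib
import numpy as np

def toy_generative_upscale(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Stand-in for a generative model: mixes random "latent" values into the output.
    latent = rng.standard_normal(frame.shape)
    return frame + 0.1 * latent

frame = np.zeros((8, 8))  # hypothetical low-resolution input

run1 = toy_generative_upscale(frame, np.random.default_rng())  # no fixed seed
run2 = toy_generative_upscale(frame, np.random.default_rng())

print(hashlib.sha256(run1.tobytes()).hexdigest()[:16])
print(hashlib.sha256(run2.tobytes()).hexdigest()[:16])
# The digests differ: the same input run through the same process yields
# different outputs, so an independent examiner cannot replicate the exhibit.
```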

Third, the validation problem: has the AI tool been tested against ground truth data? Can it distinguish between actual recorded details and hallucinated predictions? Most consumer-grade AI enhancement tools have never been validated for forensic use. The Federal Judicial Center’s 2023 guide for federal judges emphasizes that AI systems should be “tested and validated continuously” and that “accuracy depends on the quality and volume of data.” The guide instructs judges serving as evidentiary gatekeepers to probe whether AI applications are “authentic, relevant, reliable, and material” and whether their use is “consistent with the Constitution, statutes, and the Rules of Evidence.”

These problems are not theoretical. They formed the basis for the first major judicial ruling on AI-enhanced video evidence.

State v. Puloka: The Landmark Ruling

In State v. Puloka, No. 21-1-04851-2 (King Cnty. Super. Ct., Mar. 29, 2024), a Washington trial court confronted precisely these issues in a triple-homicide prosecution. A bystander captured about 10 seconds of the shooting on an iPhone – blurry, low-resolution footage with motion blur. The defense hired an expert to “enhance” it using Topaz Labs’ Topaz Video AI and Adobe Premiere Pro.

The prosecution objected, and the court held an evidentiary hearing. The defense expert, a videographer and filmmaker with no forensic training, testified that Topaz Video AI used machine learning to “intelligently scale up the video” and add “sharpness, definition, and smoother edges.” Under cross-examination, he admitted he did not know what videos the AI was trained on, did not know whether the tool used generative AI, and could not explain the algorithm’s operation beyond describing it as opaque and proprietary.

The State’s expert, a certified forensic video analyst, testified that the AI enhancement increased the number of pixels in the original video by a factor of roughly 16 so that the vast majority of displayed pixels were generated by the algorithm rather than directly recorded by the camera sensor. Additionally, the tool “added information that was not in the original images” while “removing information that was in the original images.” He contrasted this with long-established forensic resizing methods that rely on transparent, reproducible algorithms such as nearest-neighbor, bi-cubic, and bi-linear interpolation – techniques described in SWGDE’s Fundamentals of Resizing Imagery and Considerations for Legal Proceedings as standard, mathematically defined approaches that forensic analysts can document, replicate, and explain to courts. He explained that machine-learning-based upscalers operate through complex, data-driven processes that even their developers may not fully understand and that are not readily reproducible by independent forensic examiners.
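
The arithmetic behind that testimony is straightforward. The dimensions below are illustrative rather than taken from the Puloka record, but any 4× per-axis upscale yields the same ratio: roughly 94 percent of the displayed pixels are algorithmically generated rather than recorded.

```python
# Back-of-the-envelope arithmetic for a 16x pixel-count increase.
# The 480x270 source dimensions are illustrative, not from the record.
orig_w, orig_h = 480, 270
up_w, up_h = orig_w * 4, orig_h * 4    # 4x per axis = 16x total pixels

orig_pixels = orig_w * orig_h          # 129,600
up_pixels = up_w * up_h                # 2,073,600
synthesized = up_pixels - orig_pixels  # 1,944,000

print(up_pixels // orig_pixels)          # 16
print(f"{synthesized / up_pixels:.1%}")  # 93.8% of displayed pixels are new
```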

The forensic expert testified that SWGDE – whose members include federal, state, and local law-enforcement agencies engaged in forensic video examination – has cautioned against the use of AI enhancement tools like Topaz Video AI in forensic casework and has not endorsed them for courtroom use. He stressed that the relevant scientific community is not the broader “video production community” that uses such tools to create commercial or entertainment content but the forensic video analysis community, which must meet evidentiary standards of reliability, transparency, and reproducibility.

The court ruled that AI video enhancement of the kind at issue is a novel technique, triggering analysis under Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), which requires that the proponent demonstrate general acceptance in the relevant scientific community. It held that the relevant community is the forensic video analysis community, not commercial video producers. On the record before it, the court concluded that Topaz Video AI “has not been peer-reviewed by the forensic video analysis community, is not reproducible by that community, and is not accepted generally in that community.” The defense offered no appellate decisions from any jurisdiction approving generative-AI-enhanced video at trial, no scientific publications supporting the reliability of such enhancements in a forensic context, and no evidence of validation testing.

The court further concluded that the AI-enhanced video did not satisfy Washington Evidence Rule 702’s requirement of reliable, helpful expert testimony because the process did not yield a transparent, validated representation of what the camera captured. In addressing Washington Evidence Rules 401 and 403, the court wrote that the AI output “does not show with integrity what actually happened but uses opaque methods to represent what the AI model ‘thinks’ should be shown,” and ruled that any marginal relevance was substantially outweighed by the risk of unfair prejudice, jury confusion, and a “trial within a trial” over the workings of the model. As forensic video expert and legal educator Jonathan Hak has observed in his analysis of Puloka, the case underscores a critical distinction often missed: production-oriented video experts seek to create visually pleasing content, while forensic experts must generate court-ready evidence that meets rigorous standards of reliability and reproducibility.

The case proceeded to trial using only the original, unenhanced video, along with other evidence. The jury convicted the defendant in May 2024, and he was sentenced to life without parole on June 21, 2024. No appellate court has reversed the trial court’s evidentiary ruling. As of this writing, there are no published U.S. decisions approving generative-AI-enhanced video as trial evidence. Puloka remains the leading, and so far uncontradicted, trial-court decision on this issue.

Discovery and Motion Practice

When prosecutors disclose “enhanced” video, defense counsel must first determine whether AI enhancement was used, then systematically demand the information necessary to challenge it.

Identifying AI Enhancement: Visual Indicators

Because defense counsel rarely has access to the proprietary source code driving these tools, they must learn to identify the visible artifacts that generative AI often leaves on footage. The most common indicator is the “oil painting effect,” where the model aggressively removes noise, rendering skin and fabric with an unnaturally smooth, waxy texture that lacks natural grain. Counsel should also scrutinize the video for “temporal inconsistencies” – tattoos, logos, text, or jewelry that appear to morph or shift shape from one frame to the next as the AI independently re-predicts the object’s appearance for each frame. Other red flags include “impossible geometry,” such as eyeglasses melting into a face, fingers that split or fuse, or background lines that disappear behind a subject but fail to re-emerge on the other side. Perhaps most telling is the insertion of hallucinated details: high-definition white teeth appearing in what was originally a blurry, closed mouth, or sharp facial features emerging from footage too degraded to have captured them.
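
Where counsel has access to the enhanced file, even a crude computational screen can help flag frames worth showing to an expert. The Python sketch below (using OpenCV; the filename and region coordinates are hypothetical) measures how much a fixed region of interest – say, the area around a tattoo or logo – changes from frame to frame. Static detail in genuine footage tends to change smoothly; generative re-prediction can make it jump. This is a triage aid only, not a forensic method.

```python
# A rough screening sketch for temporal inconsistency in a fixed region.
# "enhanced.mp4" and the region coordinates are hypothetical placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("enhanced.mp4")
x, y, w, h = 100, 50, 40, 40  # hypothetical region of interest (e.g., a tattoo)

prev = None
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        scores.append(float(np.abs(roi - prev).mean()))  # mean change in the region
    prev = roi
cap.release()

# Spikes far above the median suggest the detail was re-imagined between
# frames and merits expert review; a quiet trace proves nothing by itself.
if scores:
    print(f"median change: {np.median(scores):.2f}, max change: {max(scores):.2f}")
```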

It is important to note that the absence of visible artifacts does not establish that video is unenhanced. Subtle processing or lower-intensity enhancement may leave no obvious traces. These indicators should prompt further investigation and, where resources permit, consultation with a qualified forensic video analyst, but they are not a substitute for expert analysis.

Discovery Demands

Once AI enhancement is suspected or disclosed, do not accept only playable export files. Insist on native files in their original format, complete with metadata, and file discovery motions that seek:

The Complete Processing Chain: What software was used? What specific version? Which settings and parameters were selected? Were multiple attempts made with different settings? If so, why was this particular output chosen over others?

Activity Logs and Project Documentation: Request any available activity logs, project files, presets, or export settings showing exactly what operations were performed. If the prosecution cannot produce documentation of the enhancement process, argue that they have not laid a proper foundation for authentication under Federal Rule of Evidence 901 (or state equivalents) and that the methodology cannot be meaningfully evaluated for reliability under Rule 702.

Training Data Information: What dataset was the AI model trained on? Were demographic groups represented proportionally? Were faces from the defendant’s demographic background included in the training data? The DOJ’s December 2024 report on AI in criminal justice explicitly calls for demographic bias assessment and documentation of training data and model design for AI tools used in forensic analysis. Demand documentation that any AI enhancement tool was subjected to such bias assessment and that its training data and development process have been disclosed to the extent possible.

Validation Testing: Has this specific tool been validated for forensic use? What error rates were established? Were validation tests performed on video similar to the case footage (lighting conditions, resolution, motion blur)? National Institute of Standards and Technology / Organization of Scientific Area Committees for Forensic Science and SWGDE guidance consistently call for validation and tool testing before use in casework. If comparable validation has not been performed for an AI enhancement tool, defense counsel should argue that the proponent cannot satisfy Rules 702 and 901 and that the court should exclude the enhanced video or, at a minimum, give it little weight.

Alternative Processing: SWGDE standards identify accepted, transparent interpolation methods. If the prosecution used generative AI instead of established forensic techniques, demand to know why those accepted methods were not used and whether conventional enhancement or resizing produced materially different, and potentially less inculpatory, results.

Navigating the Best Evidence Rule

Defense counsel must navigate the Best Evidence Rule with precision when challenging AI-enhanced video. An imprecisely framed objection invites summary dismissal; a precisely framed challenge compels the prosecution to litigate on grounds far less favorable to admissibility.

The “Duplicate” Trap: Under Federal Rule of Evidence 1003 (and most state analogues), a “duplicate” is admissible to the same extent as an original unless a genuine question is raised regarding the original’s authenticity or the circumstances render it unfair to admit the duplicate in lieu of the original. FRE 1001(e) defines a duplicate as a counterpart produced by a mechanical, photographic, or electronic process that “accurately reproduces the original.”

If defense counsel merely argues that “the AI enhancement violates the Best Evidence Rule,” the prosecution will likely counter: “Your Honor, the State has produced the original raw file. The rule is satisfied. The enhancement is merely a duplicate or a visual aid.” Courts frequently accept that Article X is satisfied once the raw file is available, shifting the adjudicative focus to reliability rather than the Best Evidence Rule itself.

The Stronger Argument – Challenging the “Duplicate” Classification: The more robust legal challenge posits not that every AI enhancement automatically violates the Best Evidence Rule but that certain generative enhancements fail the definition of “duplicate” entirely and are therefore unfair to admit under Rule 1003. A duplicate must “accurately reproduce” the original. While traditional digital clarification, e.g., adjusting brightness or contrast, often satisfies that standard by preserving the captured data, modern generative upscaling tools, such as those at issue in Puloka, exceed the boundaries of clarification.

These tools upsample low-resolution footage by predicting and synthesizing detail that was never recorded, often altering or suppressing information present in the source video. In Puloka, the expert testified that the Topaz Video AI process increased the pixel count by roughly a factor of 16, indicating that the majority of the visual information displayed was algorithmically synthesized rather than directly captured. Such an output is not a counterpart that “accurately reproduces the original.” It is more accurately characterized as an expert-generated reconstruction or simulation derived from the original.

Counsel Should Object as Follows: “Your Honor, this AI-generated output is not a ‘duplicate’ under FRE 1001(e). It fails to faithfully reproduce the original frame-by-frame; rather, it generates new pixel-level detail through statistical prediction while discarding data from the source file. It is not a copy of the video; it is a computer-generated reconstruction that must be treated, and scrutinized, as such.”

Why This Matters: Absent classification as a true duplicate, Rule 1003 provides no automatic presumption of admissibility. The proponent must instead justify the AI output under an alternative theory, most naturally as expert-generated evidence under FRE 702 and 901(b)(9), in some cases as a summary of voluminous evidence under FRE 1006, or as a demonstrative exhibit under Rule 611(a). Each of these procedural routes requires the court to evaluate whether the underlying method is reliable and whether the exhibit fairly and accurately represents what it purports to show, rather than simply assuming admissibility as a “duplicate.”

This procedural posture favors the defense. If the prosecution offers the enhancement as expert evidence, Daubert / Frye analysis applies: What is the underlying methodology? Is it generally accepted in the relevant forensic community? Has it been validated on similar tasks? What are the known error rates? If the prosecution instead characterizes the output as a “demonstrative” or “visual aid,” the governing standard remains that demonstrative evidence must fairly and accurately represent what it purports to show. A video that inserts invented detail and removes recorded information fails to fairly represent the original capture and risks misleading the trier of fact. In either scenario, the inquiry shifts from “Is this a copy?” to “Is this a reliable and non-misleading representation?”

Importantly, counsel must explicitly state that the defense does not seek to suppress the underlying recording. Counsel should embrace the raw video as the “best evidence” of the camera’s actual capture, establishing a clear record that the objection applies solely to the AI-generated overlay being presented as a faithful reproduction rather than a speculative reconstruction.

The Strategic Imperative: Litigating the issue as a “copies” dispute invites a quick denial once the prosecution produces the original file. Litigating on “reliability” grounds – invoking Daubert / Frye, FRE 702, FRE 403, and the fairness limitation inherent in Rule 1003 – compels the prosecution to defend the AI system itself.

This is precisely what occurred in Puloka. The court, applying Frye and its counterparts to Rules 702 and 403, excluded a Topaz-enhanced video because the process added “false image detail,” relied on opaque, non-peer-reviewed algorithms, and lacked general acceptance in the forensic community. The unaltered source video remained the best evidence of the event. The AI-generated version failed to satisfy the evidentiary threshold.

The goal is to shift this evidentiary burden when the prosecution offers AI enhancements. Counsel must not concede that the output is a “duplicate.” Instead, the defense must force the prosecution to prove, via competent expert testimony and validation data, that the system does not introduce false structure or fabricated detail and that the specific enhancement is a reliable, transparent, and forensically accepted representation of the underlying capture. Only upon such a showing should the evidence be presented to the jury.

Additional Evidentiary Arguments

The “Silent Witness” Challenge: Enhanced video is not merely illustrative of witness testimony; it purports to be the event itself. But if an AI upscaling process has increased the pixel count 16-fold so that the vast majority of the visible detail consists of algorithmically generated pixels rather than directly recorded ones, can it truly function as a “silent witness” to what occurred? Argue that AI enhancement transforms authentic evidence into something else entirely: a probabilistic reconstruction that lacks the trustworthiness and process reliability required for silent-witness treatment, because what the jury sees is largely what the model predicts should be there, not what the camera actually captured.

Digital Spoliation: If the original video was overwritten or discarded after “enhancement,” argue that the prosecution destroyed potentially exculpatory or materially important evidence. The original recording is the best evidence of what the camera actually captured. Without it, the defense cannot test alternative enhancement methods, evaluate whether the AI introduced hallucinated detail, or have its own expert conduct an independent analysis. That loss prejudices the defense’s ability to challenge the AI-generated content and supports sanctions or exclusion under spoliation and due process principles.

Cross-Examination Strategy

When prosecutors call an expert who enhanced video using AI, the cross-examination should expose the fundamental disconnect between production and forensic standards. Here is a strategic approach, incorporating lessons from Puloka.

Establish Credentials Carefully: “You identified yourself as a videographer and filmmaker, correct? And your work focuses on creating content for commercial or entertainment purposes? You have no certification as a forensic video analyst, do you? You’re not a member of the Scientific Working Group on Digital Evidence?”

Expose the Pixel Generation: “You stated this software ‘enhanced’ the video. How many pixels did it add to the original?” [In Puloka, approximately a 16× increase in the number of pixels per frame.] “Those additional pixels – did they exist in the camera’s original capture?” [No.] “So if you increase the number of pixels by a factor of 16, that means that for every pixel location the camera actually recorded, the software is creating roughly 15 new pixel locations the jury will see? And would you agree that when software adds pixel detail that wasn’t captured by the sensor, it’s making predictions about what it thinks should be there?”

Attack the Training Data: “What dataset was this AI trained on to make these predictions?” [Expert in Puloka didn’t know.] “Can you tell the jury whether this AI was trained on faces from my client’s demographic background?” [Expose potential bias.] “Were you provided with any documentation showing this tool was tested for racial or demographic bias, as the DOJ now recommends for AI-based forensic tools?”

Challenge Opacity: “Can you provide any activity logs, project files, presets, or export settings showing exactly which algorithms and parameters you used for this enhancement?” [Puloka expert could not.] “Can you explain to the jury how the algorithm decided which details to add and which to smooth out in this specific frame?” [Expose that the expert cannot explain the process.] “You described the algorithm as ‘opaque and proprietary,’ correct? So neither you, nor the prosecution, nor defense counsel, nor this jury can examine how it actually made those decisions?”

Establish Lack of Forensic Acceptance: “Is the Scientific Working Group on Digital Evidence, which includes forensic video experts from federal, state, and local agencies, aware of this tool?” [SWGDE has issued cautions about AI enhancement, not endorsements.] “Has Topaz Video AI been peer-reviewed by the forensic video analysis community?” [No in Puloka.] “Has it been accepted by that community for use in legal proceedings?” [No.] “Are you aware that SWGDE has published guidance warning that with novel, machine-learning-based interpolation, it can be ‘challenging to identify what processes were applied to the imagery and replicate those steps with accuracy’?”

Contrast With Accepted Methods: “You’re familiar with standard forensic interpolation methods like nearest-neighbor, bi-cubic, and bi-linear scaling?” “Did you compare your AI enhancement against those established techniques?” [If no: why use an unvalidated method over accepted ones? If yes: why choose the less-accepted method anyway?] “The Federal Judicial Center’s AI guide instructs judges to ask whether AI applications have been tested, validated, and shown to be reliable before admitting them – are you aware of any published validation testing for this tool in forensic video applications?”

Establish Hallucination Risk: “Is it possible this software ‘hallucinated’ details – that is, created content that looks plausible but wasn’t actually there in the original recording?” [If the expert denies or minimizes:] “You’re aware that computer scientists have shown, including in a 2024 NeurIPS paper on generative restoration models, that as these systems are tuned to produce more photo-realistic images, the minimal uncertainty about the true underlying scene necessarily increases, correct? So making an image look better with generative AI doesn’t necessarily make it more faithful to what the camera actually recorded, does it?”

Close With Reasonable Doubt: “So when this jury looks at this ‘enhanced’ video, they’re not just seeing what the camera actually recorded; they’re seeing what an algorithm predicted should be there based on patterns it learned from other images. Fair to say?”

The Frye / Daubert Hearing

In jurisdictions following Frye, the issue is straightforward: has AI video enhancement achieved general acceptance in the relevant scientific community? In Puloka, on the record before it, the court answered “no.” The proponent bears the burden to establish acceptance through expert testimony, published literature, widespread use in the forensic community, and legal authority from other jurisdictions. At present, there is no published forensic literature, standards-body guidance, or case law establishing general acceptance of consumer generative upscaling tools, such as those used in Puloka, for forensic video enhancement.

The threshold question is identifying the relevant community. Defense should argue that the community is forensic video analysts – not video producers, not software developers, not the general “AI research community.” The Federal Judicial Center’s AI guide instructs judges to probe the specific scientific field relevant to the evidence at issue. Forensic video analysis is a distinct discipline with its own standards, training requirements, and quality controls. SWGDE, which speaks for this community, has cautioned against using opaque, machine-learning-based enhancement tools in forensic casework and emphasizes reproducibility, documentation, and transparency.

The absence of peer review weighs heavily against admissibility, particularly for novel techniques like generative video enhancement. SWGDE standards undergo extensive public comment and consensus development. Topaz Video AI and similar consumer upscaling tools have not been submitted to that process. There is no meaningful body of peer-reviewed forensic literature validating their use in casework, and no recognized standards organization has approved them for forensic enhancement. Testimony in Puloka acknowledged that these tools could not be reproduced within the forensic community, a basic requirement for scientific evidence that can be independently verified and challenged.

In jurisdictions applying Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the analysis is similar but more flexible. Courts consider: (1) whether the technique can be tested, (2) whether it has been subjected to peer review, (3) the known or potential error rate, (4) the existence of standards controlling the technique’s operation, and (5) general acceptance. In its current form, generative video upscaling of the type used in Puloka struggles under each factor.

Testing and validation are essentially absent from the forensic record. No publicly available forensic validation studies establish error rates for these consumer tools when applied to evidentiary video. How often do they hallucinate faces? How frequently do they misrepresent clothing details, hand positions, or weapon presence? The answers are unknown. Peer review in the forensic community is minimal to nonexistent. Computer science papers may discuss related algorithms in research settings, but forensic application is different. The Federal Rules of Evidence and SWGDE guidance require validation for the specific use case – legal proceedings where liberty is at stake – not just proof-of-concept demonstrations.

Error rates, in practice, are opaque to the courts. Because the algorithms are proprietary and continually updated, and because settings and versions may change over time, even nominally “similar” runs may not yield identical results. That lack of transparency and reproducibility makes it impossible for judges or jurors to meaningfully evaluate how often these systems are wrong or in what direction. How can jurors assess reasonable doubt when the “evidence” itself is probabilistic and its error profile is undocumented?

Standards are likewise lacking. SWGDE has not yet adopted standards for generative AI “enhancement” and instead urges caution, stressing the use of well-understood, reproducible methods such as nearest-neighbor, bi-cubic, and bi-linear interpolation. However, SWGDE has published detailed standards and best practices for acceptable digital video enhancement and authentication, and generative upscaling is not among the endorsed techniques.

General acceptance is absent. Puloka concluded, on its record, that generative AI video enhancement had not achieved general acceptance in the forensic video analysis community, and as of this writing, no published U.S. decision has held that such tools satisfy Frye or Daubert. The National Association of Criminal Defense Lawyers has created a task force studying AI’s impact on criminal justice, reflecting the defense bar’s recognition that AI poses serious risks and challenges to fair trials. The American Bar Association has likewise published analyses questioning the reliability and authenticity of AI-generated and AI-altered evidence. No professional forensic organization endorses generative AI as an accepted method for evidentiary video enhancement.

Even if such evidence is somehow admitted, Federal Rule of Evidence 403, or state equivalents, provides an independent basis for exclusion. The danger of unfair prejudice substantially outweighs probative value. Jurors naturally believe their own eyes, but here their eyes are being asked to trust a synthetic image. What appears to be photographic evidence is algorithmic prediction. The “CSI Effect” – jurors’ inflated expectations about forensic evidence based on television – becomes weaponized when AI makes poor-quality video look professionally enhanced. The risk that jurors will overvalue this evidence is extreme.

Conclusion

The evidentiary problem posed by generative AI enhancement is ultimately a challenge to how courts apply the proof-beyond-a-reasonable-doubt standard. Prosecutors bear the burden of proving every element beyond a reasonable doubt. That burden should not be deemed satisfied with evidence whose very composition is uncertain – evidence that may contain algorithmically fabricated details indistinguishable from recorded reality.

This implicates foundational constitutional guarantees. The Confrontation Clause presupposes that the accused can meaningfully challenge the evidence against him. But how does one cross-examine an opaque algorithm? The Due Process Clause requires fundamental fairness. There is nothing fair about presenting jurors with synthetic imagery as photographic proof while concealing that the sharpest details may be statistical inventions.

Defense counsel must challenge AI-enhanced digital evidence with precision. The arguments are available: the enhancement is not a “duplicate” under Article X because it does not accurately reproduce the original; it constitutes expert-generated evidence requiring Daubert or Frye analysis; it fails Rule 702 because the methodology is opaque and unvalidated; it violates Rule 403 because jurors will trust synthetic clarity as photographic truth.

For pro se litigants and appointed counsel with limited resources, the core objection is simple. The prosecution is asking the jury to convict based on what a computer program predicts should have been recorded, not what the camera actually recorded. Enhancement that fabricates detail is not clarification; it is confabulation. And confabulation, however photorealistic, cannot be treated as proof beyond a reasonable doubt.  

 

Sources: State v. Puloka, No. 21-1-04851-2 (King Cnty. Super. Ct. Mar. 29, 2024); U.S. Dep’t of Just., Artificial Intelligence and Criminal Justice: Final Report (Dec. 3, 2024); Fed. Jud. Ctr., An Introduction to Artificial Intelligence for Federal Judges (2023); Nat’l Ass’n of Crim. Def. Law., Resource List: AI, Due Process, and Scientific Evidence (2025); Nat’l Ctr. for State Cts., AI-Generated Evidence: A Guide for Judges (2024); Sci. Working Grp. on Digit. Evid., SWGDE Best Practices for Digital Video Authentication, Doc. No. 23-V-001-1.2 (Mar. 7, 2024); Sci. Working Grp. on Digit. Evid., SWGDE Fundamentals of Resizing Imagery and Considerations for Legal Proceedings, Doc. No. 22-V-001-1.1 (Sept. 22, 2022); Sci. Working Grp. on Digit. Evid., SWGDE Overview: Artificial Intelligence Trends in Video Analysis, Doc. No. 20-V-001-1.0 (Jan. 14, 2021); Daniel J. Capra, Deepfakes Reach the Advisory Committee on Evidence Rules, 92 Fordham L. Rev. 2491 (2024); Regev Cohen et al., Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models, 38 Advances in Neural Info. Processing Sys. (2024); Rebecca Delfino, Deepfakes on Trial 2.0: A Revised Proposal for a New Federal Rule of Evidence to Mitigate Deepfake Deceptions in Court (Loyola L. Sch. L.A. Legal Stud. Rsch. Paper No. 2025-10, 2025); Brandon L. Garrett & Cynthia Rudin, Interpretable Algorithmic Forensics, 120 Proc. Nat’l Acad. Scis. e2301842120 (2023); Brandon L. Garrett & Cynthia Rudin, The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice, 109 Cornell L. Rev. 561 (2024); Jonathan W. Hak, AI Enhanced Video Ruled Inadmissible in US Court, Jonathan Hak KC PhD (Apr. 17, 2024).
