Handling AI-Generated Data as Evidence under Utah Criminal Procedure
How Utah courts treat AI-generated evidence, deepfakes, and algorithmic tools under Rules 901 and 702.
Artificial intelligence is now woven into everyday digital life: predictive policing tools, automated narrative reports, deepfake videos, and AI-enhanced imagery. As these technologies show up in investigations and trials, Utah judges and attorneys face a new question: how should AI-generated data be handled as evidence in criminal cases? This guide explains how synthetic or algorithmic evidence fits under the Utah Rules of Evidence, what courts look for when deciding admissibility, and the key risks around deepfakes, bias, and due process.
Why AI-Generated Evidence Is Different in Utah Courts
Utah courts traditionally evaluate evidence that is created, observed, or recorded by people: officers, witnesses, experts, and technicians. AI changes that. Synthetic or algorithmic outputs may be partially or fully machine-generated, difficult to verify, and shaped by proprietary code that no one in the courtroom actually wrote.
That creates problems for reliability and fairness. A judge deciding whether to admit AI-generated evidence will often focus on three questions: what the AI tool did, how it did it, and whether the parties can meaningfully question or challenge its output. These questions drive how Utah Rules of Evidence 901 (authentication) and 702 (expert testimony) get applied to AI data.
Authenticating AI Files under Rule 901
Rule 901 requires that the proponent of evidence show it “is what the proponent claims it is.” With AI-generated material, that can be much harder than with a standard photo or written report. The court will want clarity about who controlled the underlying data and which software produced the output.
In practice, authentication of AI evidence often turns on details such as:
- Source of the original data: where the raw video, audio, text, or images came from, and who had access to them.
- Software and settings used: the specific AI tool, its configuration, and the steps applied to the data.
- Chain of custody: how files were stored, transferred, and protected from tampering or loss of metadata.
Scenario example: a defendant claims a viral clip is a deepfake. Prosecutors must show how the video was obtained, who handled it, and that it has not been synthetically altered. Without a clear, well-documented chain of custody and technical explanation, a Utah judge may decide that Rule 901 authentication has not been met.
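In practice, one common way forensic examiners document that a file has not been altered in custody is by recording a cryptographic hash of the file at each transfer and confirming the hashes still agree. The following is a minimal sketch of that idea in Python; the file names, handler names, and log structure are hypothetical illustrations, not any agency's actual procedure:

```python
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def log_custody_event(path: str, handler: str, log: list) -> None:
    """Append a timestamped hash record each time the file changes hands."""
    log.append({
        "file": path,
        "handler": handler,  # hypothetical handler label
        "sha256": sha256_of_file(path),
        "time": datetime.now(timezone.utc).isoformat(),
    })

def hashes_match(log: list) -> bool:
    """True only if every recorded digest for the file is identical,
    i.e. the bytes were unchanged across all custody transfers."""
    return len({entry["sha256"] for entry in log}) == 1
```

If every handler's recorded digest matches, the proponent has at least some documentary support that the bytes were not altered between transfers; a single mismatched digest flags exactly where in the chain the file changed.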
Expert Testimony & AI Reliability under Rule 702
Utah’s Rule 702 governs expert testimony. When AI tools are used to analyze data, such as identifying a person in footage or ranking suspects, courts typically treat the AI output as a form of expert evidence. The party offering that evidence has to show that the method is reliable and grounded in specialized knowledge.
Judges may consider questions like:
- Has the AI system been validated with real-world testing?
- Are known error rates or limitations disclosed?
- Can the defense independently test or review the underlying data?
- Is the system a “black box” where even the expert cannot explain how it reached its conclusion?
Scenario example: an agency uses an AI model to flag faces from surveillance footage. The state calls a digital-forensics expert to testify about the match. If that expert cannot explain how the model works, what data it was trained on, or how often it is wrong, a Utah judge may find the testimony too unreliable under Rule 702 and exclude it.
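The "known error rates" question above is often answered with a validation study: the tool is run against labeled data, and the outcomes are tallied so the court can see how often it is wrong in each direction. The sketch below shows that arithmetic in Python; the counts are invented for illustration and do not describe any real system:

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    """Outcome counts from a hypothetical validation study
    of a face-matching tool run on labeled test footage."""
    true_matches: int       # tool said "match," and it was the same person
    false_matches: int      # tool said "match," but it was a different person
    true_non_matches: int   # tool correctly said "no match"
    false_non_matches: int  # tool said "no match," but it was the same person

def false_match_rate(r: ValidationResult) -> float:
    """Share of genuinely different people the tool wrongly matched:
    the error most relevant to misidentifying a defendant."""
    return r.false_matches / (r.false_matches + r.true_non_matches)

def false_non_match_rate(r: ValidationResult) -> float:
    """Share of genuine matches the tool missed."""
    return r.false_non_matches / (r.false_non_matches + r.true_matches)
```

An expert who can produce numbers like these, and explain the test data behind them, is in a far stronger position under Rule 702 than one who can only describe the tool as accurate in general terms.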
Due Process, Privacy & Deepfake Risks
AI-generated evidence also raises constitutional concerns. Defendants have a right to confront the evidence used against them and to challenge how it was created. When algorithms are secret or proprietary, that right can be difficult to exercise.
- Confrontation and disclosure: if the defense cannot access enough information about the AI system, they may argue that using its output violates due process.
- Surveillance and data collection: AI-enhanced tracking or facial recognition can magnify the scope of a search, raising Fourth Amendment questions about whether a warrant or additional safeguards were required.
- Prejudice versus probative value: realistic but misleading AI imagery, such as deepfake videos or staged crime-scene renderings, can unfairly sway a jury, even if the underlying data is weak.
Scenario example: a victim receives threatening audio messages in a voice that sounds like the defendant. Without expert analysis to show the clips are genuine and not AI voice clones, a Utah court may be wary of allowing the jury to hear them, given the risk of unfair prejudice.
Key Factors Utah Judges Watch When AI Evidence Is Offered
Because there is not yet a single “AI evidence statute,” Utah judges tend to apply existing rules with extra scrutiny. In many hearings, the discussion centers on a handful of recurring factors:
- Authentication: whether the proponent can trace the data’s source, the software used, and the chain of custody under Rule 901.
- Reliability: whether the tool has been validated and its error rates and limitations disclosed, as Rule 702 requires for expert methods.
- Disclosure: whether the defense can meaningfully examine, test, and challenge the system and its output.
- Prejudice: whether realistic but potentially misleading AI output would sway the jury beyond what its probative value warrants.
Scenario Breakdowns: How AI Evidence Issues Show Up
To see how these rules play out, it helps to look at common patterns that are emerging in criminal cases involving AI-generated or AI-assisted evidence:
- AI-enhanced surveillance video: police run blurry footage through enhancement software. The defense argues that the tool “invented” details and changed the image. The judge must decide whether the enhancement is a fair clarification of the original or an unreliable reconstruction.
- Predictive policing or risk scores: an algorithm suggests a suspect or labels someone as “high risk.” Without clear validation and disclosure, a court may treat such scores as too speculative to go before a jury.
- AI-drafted police narratives: an officer uses generative AI to summarize an incident report. If the officer cannot personally verify each statement or explain where certain language came from, the defense may attack the report’s credibility.
- Deepfake accusations: a party claims incriminating content is AI-generated, such as a fabricated video, image, or chat log. Digital-forensics experts may be needed on both sides to help the court decide what is genuine and what is synthetic.
Across these scenarios, the core theme is the same: Utah courts expect attorneys to understand the technology well enough to explain it, question it, and propose reasonable limits on how it is used.
Need Help Understanding AI Evidence in Your Case?
AI-generated evidence is already reshaping Utah’s criminal courts, and the standards around authentication, reliability, and due process are still evolving. If AI tools, deepfakes, or algorithmic reports play a role in your case, talking with a Utah criminal defense attorney can help you understand how to explain that technology clearly and challenge it when it is unreliable.
Talk to a Utah Attorney
For more plain-English legal guidance, stay updated with Utah Law Explained, explore our mission on the About Us page, or connect with trusted counsel like Gibb Law Firm.