
Where the algorithm meets the gavel: Appropriate uses of AI in courts

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 7 minute read


The collision between algorithm and gavel has fractured the traditional binaries of admissible versus inadmissible evidence, competent versus incompetent practice, and appropriate versus inappropriate technology use in courts.

Key insights:

      • AI use falls on a spectrum — Appropriate AI use hinges on which trial function it touches upon and how much it influences outcomes.

      • AI uses must align with duties — Administrative and preparatory uses fall within lawyers’ duty of competence, provided their outputs are checked and used within existing ethical rules.

      • Context and timing control admissibility — Courts should assess tools on a case‑by‑case basis, weighing procedural stage, validation and error rates, expertise, and safeguards.


The integration of AI into the legal system is a complex and multifaceted issue, defying simplistic categorizations of right or wrong. Indeed, the application of AI in court is not a binary concept but rather one that exists on a spectrum. The appropriateness of AI use depends on two critical variables: i) which portion of the trial process is affected; and ii) how much the AI usage influences the outcome.

What matters is not whether AI appears in a case, but which aspect of the trial proceeding the AI in question touches — research, drafting, evidence review, jury selection — and how deeply it may influence outcomes. A document-review algorithm that flags potentially relevant discovery operates at a vastly different point on this spectrum than an AI system that drafts legal arguments or predicts case outcomes.

Low‑impact assistance on routine tasks may be not only permissible but prudent, while high‑impact automation in fact‑finding or credibility assessments can quickly cross ethical or legal lines. Understanding this spectrum — and where a specific use case falls along it — is essential for maintaining ethical standards, preserving the integrity of our judicial system, and serving clients competently in an era in which technology is reshaping every corner of legal practice. For professionals navigating this terrain, it is important to consider where, how much, and with what guardrails AI is utilized.

Administrative applications and professional competence

Administrative applications of AI have gained widespread acceptance within the legal community. The Honorable Erica Yew of the Santa Clara County Superior Court observes that many preliminary research platforms now incorporate AI-enhanced features as standard functionality. These features have become so seamlessly integrated into legal practice that their use is not only appropriate but often expected, requiring little deliberation or justification from practitioners.

Dr. Maura R. Grossman, JD, PhD, a Research Professor in the School of Computer Science at the University of Waterloo, dives deeper into this conversation by discussing the use of AI to provide summaries and chronologies as part of case preparation. She contends that, so long as the output is checked by human lawyers, this is an appropriate use of AI.

Further, the deployment of AI tools in administrative contexts aligns directly with attorneys’ fundamental duty of competence. Judge Yew articulates this connection with clarity, noting that AI should be viewed through the same lens as previous technological innovations. “When looking at rules for appropriate AI, it is akin to the rules for social media or even stationery at their inception — they are all tools,” explains Judge Yew. “We need to make sure we know how to use them and use them within the rules already set for lawyers and judges.”

This perspective underscores a critical principle: AI represents an evolution in legal tool use rather than a departure from established professional standards. Just as attorneys were expected to master word processors and legal databases in previous decades, today’s competent practitioners must understand how to leverage AI effectively while adhering to existing ethical frameworks. The emphasis, naturally, remains on validity, reliability, efficiency, fairness, and compliance with professional responsibilities — all objectives that AI, when properly employed, can significantly advance. That is at the heart of the discussion around appropriate use of AI in legal settings.

Evaluating the impact: A spectrum of appropriateness

While AI has demonstrated clear value in streamlining administrative functions and preliminary case management — indeed, many practitioners increasingly expect its judicious application in these contexts — the deployment of AI avatars in judicial proceedings demands closer scrutiny. In fact, the appropriateness of such technology exists along a spectrum, contingent upon both the intended application and the procedural stage at which it is employed.

Two recent cases illuminate the boundaries of this spectrum. In Maricopa County, Arizona, a court authorized the use of an AI-generated avatar — a video likeness of a deceased victim — during the victim-impact statement portion of sentencing proceedings. Conversely, a New York State appellate court categorically rejected the use of an AI avatar for oral argument presentation, deeming it fundamentally inappropriate for that forum under the circumstances presented.

While multiple variables distinguish these cases, a critical differentiator emerges: the procedural juncture at which the avatar would function. This temporal dimension — when in the judicial process the AI intervention occurs — proves instrumental in determining whether such technology enhances or undermines the integrity of the legal proceedings.

The gray area in practice

A Florida criminal case saw a judge use AI-enabled virtual reality (VR) goggles to review evidence — an unprecedented move that highlights the challenges of integrating advanced technology into courtrooms. Supporters say immersive tools such as VR can clarify crime scenes and improve fact-finding; critics counter that AI reconstructions may be inaccurate or biased and may unduly shape memory.

Again, the core issue is context. Admissibility and weight cannot be resolved by blanket rules. Courts must assess the specific technology, its validation and error rates, the expertise behind the reconstruction, and its safeguards against manipulation. Only rigorous, case-by-case scrutiny can balance innovation with the justice system’s bedrock commitment to fairness.

Indeed, this case-by-case framework becomes all the more essential when we consider how profoundly AI has transformed the nature of evidence itself. The Florida VR case exemplifies a broader epistemological challenge facing modern courts: technology no longer simply captures reality; rather, it reconstructs, interprets, and in some instances generates it. Where traditional evidentiary rules presumed a clear distinction between genuine documentation and fabrication, AI-enabled tools occupy an ambiguous middle ground that resists categorical treatment.

It is precisely this collapse of binary certainty that scholars like Dr. Grossman have identified as the defining evidentiary dilemma of our era, one that demands not merely procedural adjustments but a fundamental reconceptualization of how courts evaluate truth.

Dr. Grossman notes that this represents a critical shift in evidentiary standards for the digital age. Traditionally, photographic and video evidence was evaluated through a binary lens — either authentic or inauthentic. Today, however, AI-generated content has fundamentally altered this calculus because content can now be altered in ways that range from simple noise removal to substantive changes.

Truth now exists on a spectrum, Dr. Grossman observes, requiring courts to navigate unprecedented gradations of authenticity when determining admissibility.

Into the future of courts

As AI continues its inexorable integration into legal practice, the profession must resist the temptation of categorical acceptance or rejection, instead embracing a nuanced, context-sensitive approach that evaluates each application against two metrics: where in the proceeding the AI is used and how much it influences the finder of fact’s decision.

The future of justice depends not on whether we permit AI in our courtrooms, but on our collective wisdom in distinguishing between AI-driven tools that enhance human judgment and those that threaten to supplant it. This critical distinction demands ongoing vigilance, rigorous validation, and an unwavering commitment to the foundational principles of fairness and accuracy that have long anchored our legal system.


You can find out more about the appropriate use of AI in legal proceedings in the Thomson Reuters Institute’s AI in Courts Resource Center.
