AI for Justice

Reimagining justice: How judges are using AI thoughtfully and responsibly

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 6 minute read


Judges are turning to AI not to automate decisions but to enhance clarity, efficiency, and access to justice. With deliberate guardrails, peer-led learning, and a commitment to transparency, judicial leaders are shaping AI's role in ways that uphold integrity while meeting the demands of modern courts.

Key insights:

      • AI augments judicial judgment without replacing it — Used thoughtfully, it clarifies reasoning and improves access.

      • Strict guardrails are needed — These can include structured prompts, anonymized data, and rule-based outputs, which help interrupt bias and maintain integrity.

      • Judges should lead — They can do this through peer learning and education, which fosters responsible use while preserving public trust.

The integration of AI in the judiciary is gaining momentum, offering a promising solution to the growing caseloads, access-to-justice gaps, and public trust challenges faced by courts across the United States. And as the judiciary explores the potential of AI, a crucial conversation is emerging — one that highlights the importance of responsible and thoughtful adoption.

A recent webinar, How judges are using GenAI, presented by the AI Policy Consortium — a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI) — shed light on the experiences of early adopters of generative AI (GenAI) in the judiciary. In the webinar, Prof. Amy Cyphert of West Virginia University and U.S. Magistrate Judge Maritza Dominguez Braswell of the District of Colorado shared their insights from their own use of AI, emphasizing the need for a deliberate and informed approach.

The role of AI in judicial decision-making

A common fear is that AI will somehow take over the position of final arbiter in court proceedings. However, judges are not interested in having AI displace their judgment; rather, they see AI as a tool that augments their decision-making and helps advance justice, not one that replaces human judgment.

Judges also are not rushing into AI use. Instead, they are approaching it with a deep commitment to responsible use and a desire to increase, not decrease, public trust. “Everybody on that spectrum — from ‘I’m just learning’ to ‘I want to be a power user’ — says, ‘But I want to do it right,’” says Judge Braswell.

AI can also help judges close communication gaps. By taking decisions that judges have already reasoned through and converting them into accessible explanations, AI can help all litigants clearly understand the relevant legal framework, rule, or process behind the decision. This is even more impactful in cases involving self-represented litigants.

Leveraging AI to enhance judicial communication

Judge Braswell understands this well. In every case with at least one self-represented litigant, she offers a plain-language summary of her written decisions. Although she does not use AI to draft those summaries, she does use AI to translate complex legal reasoning when delivering information from the bench.

“If I have 15 minutes for a hearing and want to explain to a self-represented litigant something complex, I use AI to help me translate legal jargon into plain and simple language,” she explains. “I want the self-represented litigant to understand what I’m doing and why I’m doing it — and AI helps me translate lawyer-speak into plain-speak, quickly.”


You can explore the white paper Judicial Use of Generative AI: Lessons Learned here


This capability is particularly valuable for judges who often struggle to find the time to connect with litigants. By leveraging AI, they can provide more personalized and informative interactions, ultimately enhancing litigants’ judicial experiences. In addition, some judges are using AI to create engaging content, such as avatars and videos on YouTube, to make themselves more relatable and accessible to the public, while others are using AI to help litigants navigate court processes, demystifying the system and reducing anxiety.

Guardrails for responsible AI use

Of course, Judge Braswell doesn’t use AI casually. She has strict policies and protocols in place, including segregation of work and personal accounts, prompt anonymization, and prohibiting her clerks from uploading sensitive information or delegating core functions and judgment to any AI tool. She also trains her chambers on high-risk and low-risk cases and emphasizes the importance of proper AI use through structured prompts, appropriate settings, standing instructions, and deliberate guardrails.

For example, Judge Braswell describes a dedicated project in which she uploaded her district’s local rules, the Federal Rules of Civil Procedure, and standing orders. She queries that project any time she needs to refresh on an applicable rule or procedure. She gave the AI tool clear instructions, such as: Don’t answer unless grounded in a rule. Cite the rule with every response. If you don’t know, say so.

While these types of practices do not make the tools risk-free, Judge Braswell notes, they do offer guardrails to help support, rather than undermine, judicial integrity.
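To make the shape of these standing instructions concrete, here is a minimal sketch of how they might be composed into a reusable prompt. The article does not name a specific tool or exact wording, so the function name, source list, and instruction text below are illustrative assumptions, not Judge Braswell's actual configuration.

```python
# Hypothetical sketch: composing grounding sources and standing instructions
# into a single system prompt, in the style the article describes.

RULE_SOURCES = [
    "District local rules",
    "Federal Rules of Civil Procedure",
    "Standing orders",
]

STANDING_INSTRUCTIONS = [
    "Do not answer unless the answer is grounded in one of the uploaded rules.",
    "Cite the specific rule with every response.",
    "If you do not know, say so.",
]

def build_rules_prompt(question: str) -> str:
    """Combine the uploaded sources and standing instructions with a query."""
    sources = "\n".join(f"- {s}" for s in RULE_SOURCES)
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(STANDING_INSTRUCTIONS, 1))
    return (
        "You may rely only on these uploaded sources:\n"
        f"{sources}\n\n"
        "Standing instructions:\n"
        f"{rules}\n\n"
        f"Question: {question}"
    )
```

The point of the pattern is that the constraints travel with every query, rather than depending on the user remembering to restate them each time.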

Addressing risks and challenges

While hallucinations dominate headlines, the deeper risks in AI use in the courts are bias, cognitive deskilling, and erosion of public trust. Judge Braswell warns that bias is harder to detect than any made-up case citation. “If you ask for a legal framework in an employment discrimination case, the system may pull more from defense-side articles because larger firms publish more content,” she explains. “The result is a subtle tilt in perspective.”

To counter this, she prompts her AI tools deliberately: asking for diverse perspectives, asking the tool to gather contrary views, or telling the tool to answer only after asking follow-up questions that could identify user bias. Without this intentionality, bias can go undetected.
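The bias-countering pattern above can be sketched as a small wrapper that attaches the checks to any query. This is an illustrative assumption about how such prompting might be automated; the function name and instruction wording are hypothetical, not from the webinar.

```python
# Hypothetical sketch: wrapping a query with the bias-interrupting
# instructions the article describes.

BIAS_CHECKS = [
    "Present diverse perspectives, including plaintiff-side and defense-side views.",
    "Gather and summarize contrary views and authorities.",
    "Before answering, ask follow-up questions that could identify bias in my framing.",
]

def with_bias_checks(query: str) -> str:
    """Append standing bias-countering instructions to a user query."""
    checks = "\n".join(f"- {c}" for c in BIAS_CHECKS)
    return f"{query}\n\nWhen responding:\n{checks}"
```

Encoding the checks once, rather than retyping them, is one way to keep the intentionality the article calls for from eroding under time pressure.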


For judges ready to engage, visit AIForJudges.com to join the conversation


In the webinar, Prof. Cyphert echoed concerns about the next generation of lawyers. “I worry that younger lawyers may skip critical learning processes if they rely too heavily on AI for drafting or research,” Prof. Cyphert says. “Is there a cognitive benefit to writing that we’re losing?”

The path forward through education, experimentation & transparency

During the webinar, both speakers rejected mandatory disclosure rules as counterproductive.

“It creates a chilling effect,” Judge Braswell says. “And we need people to engage for learning purposes.” Instead, she notes that she advocates for voluntary transparency — judges explaining their use of AI in ways that build public understanding and confidence.

Prof. Cyphert agrees. “You can’t assess risks and benefits if you don’t understand the technology,” she says, adding that she encourages judges to attend webinars, read research, and talk to peers. Similarly, Judge Braswell co-founded the Judicial AI Consortium, a judge-only, peer-led forum for candid discussion that exists as a safe space to share challenges, test ideas, and learn together.

As the webinar notes, the future of justice isn’t just about whether courts and judges are using advanced AI technology; it’s about how that technology should be used — with care, purpose, and always with people at the center.


For more on the impact of AI in courts, visit the TRI/NCSC AI Policy Consortium for Law & Courts
