
AI literacy: The courtroom’s next essential skillset

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 6 minute read

Courts are embracing AI literacy as essential infrastructure, not optional training — to that end, judicial systems should develop role-specific education frameworks that balance policy guardrails with hands-on experimentation

Key insights:

      • AI literacy is role-specific and essential — Courts need to move beyond general AI conversations and focus on concrete, role-based strategies that support AI readiness.

      • Balanced AI adoption is crucial — The goal for courts is not to automate blindly but to adopt a balanced, AI-forward mindset.

      • Ongoing education and adaptability are vital — AI literacy requires continuous learning and upskilling that focus on building managers’ comfort and capability to lead their teams.


For today’s court system, AI literacy is quickly becoming a core professional skill, not just a technical curiosity. In the recent webinar AI Literacy for Courts: A New Framework for Role-Specific Education, panelists emphasized that courts need to move from holding abstract conversations around AI to enacting concrete, role-based strategies that support judicial officers and court professionals throughout their AI journey.

The webinar is part of a series from the AI Policy Consortium for Law & Courts, a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for AI literacy is great

Courts are being urged to treat AI literacy as a foundational pillar of AI readiness, not as an optional add-on training. AI literacy is “the knowledge, attitudes, and skills needed to effectively interact with, critically evaluate, and responsibly use AI systems,” said the NCSC’s Dr. Andrea Miller, adding that it cannot be one-size-fits-all. “The important thing to know about the definition of AI literacy is it’s going to be different for every single personnel role.”

Building a serious AI literacy strategy therefore begins with defining what success looks like for each role, and then aligning recruitment, training, and evaluation practices around those expectations.

To support this, policy and security considerations must be addressed before, and alongside, AI use. Webinar panelist Nancy Griffin, Chief Human Resources Officer at Los Angeles County Superior Court, described how the court started by defining the boundaries for safe AI use. First, the court’s generative AI (GenAI) policy set parameters, such as prohibiting staff from using court usernames or passwords to create accounts on external AI tools. Only after those guardrails were in place did the training lean into the technical how-to of writing prompts and experimenting with tools. Policy development and skills development happened in tandem, Griffin explained.

To make space for learning in an already overloaded environment, her team began by sparking creativity among managers, she said, giving them concrete use cases such as drafting performance evaluations, coaching documents, and job aids. These managers, in turn, felt motivated to create room for their teams to experiment.

This, Griffin added, is all anchored in a clear, people-centered message from leadership: “We have a lot of work to do, and not enough people to do our work — and so AI is going to help us serve the court users and help us provide access to justice.”


You can register for upcoming webinars from the TRI/NCSC AI Policy Consortium here


How to make AI “work”

During the webinar, the conversation repeatedly returned to what lawyers and court professionals are actually doing with AI tools today and where they’re getting stuck. Jennifer Leonard, Founder of Creative Lawyers, noted that despite AI’s rapid advance, many professionals are still at a surprisingly basic stage in how they use it. For example, users tend to treat AI as a one-way question-and-answer box rather than as an expertise extractor that asks them targeted questions. To counter this, Leonard suggested that users flip the exchange and ask the AI to interview them, drawing out their own expertise.

When thinking about how to interact with AI generally, users should treat it like a smart colleague and ask themselves (and implicitly the AI) these questions:

      • What information would this colleague need from me to do the assignment well?

      • What questions would I want them to ask me?

      • What specific task do I actually want them to execute?

      • What feedback would I give them to make the work product better?
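
Putting those questions into practice, a hypothetical opening prompt for one of the manager use cases Griffin mentioned, drafting a job aid, might read: “I’m a court operations manager creating a job aid for new courtroom clerks. Before you draft anything, interview me, one question at a time, about our procedures, the audience, and the preferred format, so the final document reflects my expertise rather than your assumptions.”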

As the webinar examined, leadership messaging needs to be explicit: AI is being adopted to augment human work, reduce burnout, and expand access to justice, not to eliminate jobs, particularly in courts that are already understaffed. For example, LA Superior Court has been meeting with unions around its GenAI policy, repeatedly affirming that it is not using AI to replace court staff, Griffin said. Instead, the court demonstrates use cases in which AI offloads repetitive tasks, making the remaining work more meaningful.

At the same time, managers themselves often feel unprepared to talk about AI, which is why building their comfort and capability — especially around explaining where the court is going — is becoming a critical managerial competency, panelists noted.

Supporting the journey

To support all of this, the TRI/NCSC AI Policy Consortium has built practical training resources that courts can plug into their own strategies. For example, the role-based learning resource platform offers curated materials mapped to specific roles such as judges, court administrators, court reporters, clerks, and interpreters. Courts can use these resources as targeted supplements when rolling out AI projects to better prepare staff members who are just starting their AI journey.

Complementing this is the AI Sandbox, an environment in which staff can safely experiment with GenAI tools without sending data back to the open internet. This gives judges and staff a place to practice prompt-writing, ask follow-up questions, and give feedback, all while staying inside a controlled environment and within the bounds of most court AI policies.

Looking ahead, the panelists argued that the most durable “future skills” may not be specific technical proficiencies but human capabilities, such as adaptability, creativity, critical thinking, and change leadership. In fact, HR leaders across industries largely agree they cannot predict exactly which tools or skill sets will dominate in a few years, Griffin said, and instead, courts should focus on helping managers to craft better prompts, interpret outputs critically, and lead their teams through repeated waves of technological change.

Leonard similarly urged legal organizations to move beyond basic adoption use cases, such as document summarization and email refinement, and start exploring more creative, transformative uses that could redesign legal services and court systems to be more responsive to the public.

Finally, the webinar stressed that AI literacy cannot be a one-and-done initiative. The AI Readiness Guide, published by NCSC, encourages courts to treat AI projects as catalysts for revisiting their overall literacy strategy and HR practices.


You can find out more about the work that NCSC is doing to improve courts here
