AI and digital tools are transforming access to justice for self-represented litigants, shifting the challenge from lack of legal help to questions of accuracy, governance, and effective support.
Key takeaways:
- From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.
- The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.
- One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.
Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. Underlying this framing was the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.
This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.
Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”
And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”
Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.
Accuracy, error & tradeoffs
The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For the needs self-represented litigants most commonly bring — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.
These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that what counts as “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.
Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.
Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. These errors are still serious, but they may be easier to detect and correct with review than outright fabrications.
Still, caution is essential, not least because AI systems tend to tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.
Knowledge bases are infrastructure, but more is needed
Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.
Of course, this infrastructure does not exist in a vacuum; these systems exist to address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.
A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction through pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment, rather than replace, human effort.
Abundance and the limits of technology
Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.
Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.
Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.
Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.
You can find more insights about how courts are managing the impact of advanced technology in our Scaling Justice series here.