Scaling Justice: Unauthorized practice of law and the risk of AI over-regulation

Maya Markovich  Executive Director / Justice Technology Association

Sonja Ebron  Chief Executive Officer / Courtroom5

· 7 minute read

AI is reshaping legal help, yet uncertainty around unauthorized practice rules is chilling the very tools that could close the justice gap and expand access to justice through responsible AI use

Key insights:

      • Are regulations choking innovation? — Current regulatory efforts may be stifling innovation in AI-driven legal solutions, exacerbating the access to justice crisis and prioritizing lawyer business model protection over consumer needs.

      • Some safeguards already in place — Existing consumer protection laws and product liability laws already provide robust safeguards against potential AI-related harm, making it unnecessary to impose additional restrictive policies on AI-driven legal services.

      • A balanced regulatory approach is best — An approach that encourages responsible innovation, prioritizes consumer protection, and fosters a data-driven mindset can best unlock the transformative potential of AI in addressing critical gaps in access to justice.


As AI-driven legal solutions gain traction, calls for regulation have grown apace. Some are thoughtful, others ill-informed or protectionist, and many focus on the issue of unauthorized practice of law (UPL). While protecting the public is crucial, shielding the legal profession from competition is not. A large majority (92%) of low-income people currently receive no or insufficient legal assistance, and the ongoing uncertainty in the legal AI and UPL regulatory landscape is chilling innovation that could support them.

The legal profession has always struggled to provide affordable, accessible services, even as its members simultaneously attempt to block those working ethically to bridge the gap with technology. Done right, legal industry regulation balances protection with progress, neither stifling innovation nor exacerbating the access to justice crisis.

Consumer protection laws already provide robust safeguards against potential AI-related harms. Existing product liability laws and enforcement actions by state attorneys general ensure that consumers have recourse if AI legal tools cause harm. Despite these safeguards, concerns about unregulated AI filling the gaps in legal services persist.

It is time to upend the calculus of consumer harm and examine the motives behind regulation. Rather than forcing tech-based legal services to prove they cause no harm in order to avoid charges of UPL, regulators should be required to justify, with data, the claim that legal technology companies cause harm, and to weigh whether any ruling will constrain supply in the face of a catastrophic lack of access to justice.

Uneven regulatory efforts raise questions

Current regulatory efforts tend to focus on companies that directly serve legal consumers, while leaving broader AI models largely unchecked. This raises uncomfortable questions: Are we truly protecting the public, or merely constraining competition and reinforcing barriers to innovation in the process?

“If UPL’s purpose is protecting the public from nonlawyers giving legal advice — and if regulators define legal advice as applying law to facts — how many legal questions are asked of these Big Tech tools every day?” asks Damien Riehl, a veteran lawyer and innovator. “And if we won’t go after Big Tech, will regulators prosecute Small Legal Tech, which in turn utilizes Big Tech tools? If Big Tech isn’t violating UPL, then neither is Small Tech [by using Big Tech’s tools].”

Efforts to regulate the use of AI-based legal services are, de facto, another path to market constraint. Any attempt to regulate AI should be rooted in actual consumer experience. Justice tech companies, by definition, pursue mission-driven work to benefit consumers, but if an AI-driven tool causes harm, it should certainly be investigated and regulated. State bar associations are not waiting for harm to occur before considering regulating AI-driven legal help — and we must wonder why.

The risks of premature regulation

We must enable, not obstruct, AI-driven legal solutions and ensure that innovation remains a driving force in modernizing legal services. If restrictive policies make it difficult to develop cost-effective legal solutions, fewer consumers, particularly those with limited resources, will have access to legal assistance.

AI is developing far too quickly for a slower regulatory trajectory to keep up; any contemplated regulation would be evaluating last year's technology, which is at best half as good as the latest iterations. Regulating AI-driven legal services now is akin to prior restraint, in which published or broadcast material anticipated to cause future problems is suppressed before it can be released. That approach has no place with new technology; product liability law already lets us look for evidence of actual harm.

If regulators prioritized consumers rather than the protection of lawyers' business models, AI-enabled legal support would be monitored for potential harm, with data collected and analyzed to bring any issues to light. Regulations could then be built around that defined, data-backed harm. For instance, we might require certification protocols for privacy or security if those issues prove problematic.

Forward-thinking states are going further

In July, the Institute for the Advancement of the American Legal System (IAALS) released a new report, Regulating AI in the Delivery of Consumer Services: Unlocking Legal Regulation, which advocated for a phased approach to regulation, beginning with experimentation, education, and consumer protection, while gathering and evaluating data. Later phases could involve potential regulation based on what is learned. In this way, innovation is encouraged while consumer needs and public trust remain paramount.

Also this year, Colorado cut the proverbial Gordian knot by releasing a public non-prosecution policy, consistent with existing analysis of UPL complaints in the state, for AI tools focused on improving access to justice. Guiding principles include ensuring consumers have clarity about the services they receive and their limits, educating consumers on the risks inherent in relying on advice from non-lawyer sources, and including a lawyer in the loop. Utah, Washington, and Minnesota all have considered similar policies. And IAALS is now collaborating with Duke University's Center on Law & Tech to create a toolkit and templates that make it easier for other states to adopt UPL non-prosecution or similar policies.

Yet some regulators seek the opposite, looking to define the exact types of business activity that will lead to UPL prosecution. While this framework is likely to become obsolete more quickly, it serves a similar purpose: providing clear guardrails that allow innovation to flourish while protecting consumers by clearly indicating the software's limitations. The Texas legislature, for example, amended its UPL statute to specifically exclude tech products from UPL enforcement, provided they are accompanied by adequate disclosures that they are not a substitute for the advice of a licensed lawyer. Such policies are essential, and they can encourage entrepreneurs aiming to ameliorate the justice gap.

What’s next?

The legal and justice tech industries should aim for a regulatory framework that encourages responsible, iterative innovation, and participants should take proactive steps: i) justice tech companies should join the discussion and share their business- and mission-focused perspectives to help shape any new regulations; and ii) regulators with internal non-prosecution policies should consider making them public to encourage entrepreneurs in their states.

These approaches would enable positive change for state residents, support overburdened legal aid organizations and courts, and foster a flourishing tech ecosystem aimed at serving unrepresented and under-represented parties.

The legal profession has been unable to ensure justice for all, and low-income and unrepresented parties find it ever harder to get the help they need. Now, AI-driven legal service providers are stepping in to address critical gaps in access to justice.

With a measured and equitable approach to regulation that neither ignores AI's risks nor overlooks its transformative potential, the legal industry and regulators can keep pace with today's technology. Such efforts should not obstruct the legal providers who can bring the law closer to that ideal and help close the justice gap.


You can learn more about the challenges faced by justice tech providers here.
