The US legal system, by design, is careful and deliberative, which means there can be a natural lag between emerging AI issues and the guardrails that contain them.
Key insights:
- The rapid pace of AI development is testing the limits of a legal system built for process and deliberation, creating uncertainty and challenges for both lawmakers and innovators.
- Specialized tribunals might offer a solution: they could provide a forum for faster, more precise guidance and directives, while ensuring decisions are grounded in expertise and applied to concrete facts.
- A balanced approach to AI regulation is needed: constraining AI too much risks stifling innovation and reducing access to justice, while constraining it too little invites serious risks.
AI is moving fast — faster than our legal system is built to move. Our courts and legislatures are designed to be contemplative, cautious, and process-driven. When technology moves at lightning speed, this deliberative process creates a gap between the questions being raised and the answers we have available. Lawmakers are left to catch up and patch up, and innovators are discouraged by the legal uncertainty.
To bridge that gap, we have to think outside the box. If we don’t, AI-related disputes could pile up. Indeed, some are already in court. And claims of algorithmic bias, AI-generated harm to vulnerable individuals, and discriminatory outcomes from automated systems are growing. Meanwhile, courts and legislatures are working at their necessary pace—and it’s not fast. This means AI systems will continue to be developed and deployed without clear rules or remedies.
This isn’t the first time technology has outpaced the law. The industrial revolution forced lawmakers to confront unprecedented questions of workplace safety and labor rights. Then-existing laws, systems, and structures were ill-suited to address those issues, and we were forced to adapt.
The same can be said about the AI revolution. Our laws, systems, and structures will have to adapt. If they don’t, we may face serious risks that could one day become existential. On the other hand, if we do too much to constrain AI, we risk slowing biomedical advances, missing educational opportunities, reducing access to justice, compromising national security, and limiting the prosperity that might flow from these technologies. The balance is delicate, and the consequences are profound.
Learning from established models
In the wake of the industrial revolution, Congress created the Occupational Safety and Health Review Commission (OSHRC) as part of the Occupational Safety and Health Act of 1970. The OSHRC, an independent federal agency that operates as an Article I administrative court, adjudicates disputes between the U.S. Secretary of Labor and employers when the Occupational Safety and Health Administration (OSHA) issues citations on behalf of the Secretary for violations of the Act.
Under this structure, federal administrative law judges (ALJs) issue decisions within a structured system designed to ensure fairness and due process. The ALJs do not have regulatory or enforcement authority — that rests with the Secretary of Labor and OSHA — but their decisions have significant impact. They interpret the law, resolve disputes, and guide employers, employees, and OSHA in their understanding and application of the law.
There may be lessons to draw from this and other models, such as the U.S. Tax Court and the Court of Appeals for Veterans Claims: specialized tribunals built to respond to emerging needs have proven effective.
The need for specialization & speed
What sets AI apart from other challenges is the combination of speed and reach.
It was less than three years ago, in November 2022, that ChatGPT captured public attention. In just two months, ChatGPT reached more than 100 million monthly active users, becoming the fastest-growing consumer application in history. Within a year, companies began harnessing AI for scientific and medical breakthroughs, and today over a billion people use AI chatbots on a regular basis. The conversation has also shifted from basic text-based models to fully agentic AI systems. Some even warn that seemingly conscious AI is on the horizon, if we’re not careful.
This trajectory underscores a simple truth: even if foundational models never achieve the kind of artificial general intelligence that some proponents predict, the systems we have today are already very powerful, adaptable, and certain to be leveraged in ways their developers may never have intended. Moreover, the speed of AI development means we only have a short window within which to build the right guardrails.
Yet, our current systems make it nearly impossible to put these guardrails in place at the pace of innovation. There is no comprehensive federal legislation addressing AI, and efforts in that direction have met resistance out of concern that broad rules could stifle innovation. In the absence of a unified framework, states have begun to act on their own, creating a patchwork of laws that address some issues while leaving significant gaps in others. At the same time, individual disputes that could shape the legal landscape move slowly through our backlogged courts, where judges — generalists by design — must divide their attention among a wide range of cases and cannot realistically conduct detailed inquiries into every emerging technology that comes before them.
Beyond our current structures
A tribunal with the right expertise, designed for efficiency, might be a useful tool for putting the right guardrails in place.
Certainly not every AI-related dispute requires AI expertise. However, when a dispute concerns the guardrails on AI development, deployment, and use, adjudicators would benefit from learning how these systems work, where they fail, and the societal risks they pose. Training on ethical frameworks, human-centered design, the evolving legal and regulatory landscape, and the dynamics of AI innovation could help adjudicators appreciate both the benefits and risks of imposing certain limitations. Additionally, with some fluency, adjudicators would be better positioned to ask the right questions, recognize when expert testimony is needed, and issue decisions that are not only legally sound, but technologically informed.
To be sure, designing jurisdiction for a specialized tribunal would require great care. The sheer breadth of the AI revolution may make a federal agency adjudication structure — like the OSHRC — inadequate. A specialized tribunal could, however, be built on a consent model, allowing it to handle private disputes when both parties agree to submit them for expert and speedy resolution. Or perhaps a specialized tribunal could serve as a resource for existing courts, which could certify technical questions for non-binding resolution, tapping into the tribunal’s expertise without surrendering their authority to decide cases.
There is certainly much to consider, including the potential drawbacks of a specialized tribunal. But this is not a call for a set system or framework. Instead, it is an invitation to think beyond the current limits of our system and ask, “What will it take for our legal system and institutions to keep up with AI advances? And how can we mitigate major risks, while continuing to promote and support innovation?”
If we do not explore solutions beyond the limits of our current system, we risk: i) delays that allow unsafe AI practices to advance unchecked; ii) fragmentation in the absence of comprehensive legislation; and iii) overbroad, one-size-fits-all regulations that could inhibit critical innovation.
Looking for balance
A specialized AI tribunal might offer something for everyone. For those who worry that regulation is too sparse or too slow, it would provide a forum for faster, more precise guardrails and guidance. For those concerned that sweeping regulations would limit innovation or miss the mark, a specialized tribunal could deliver narrowly tailored decisions, rather than broad, over-inclusive rules.
A specialized tribunal might also spare us the impossible task of trying to legislate for every hypothetical future problem. Instead, issues could be resolved as they come — with decisions grounded in expertise and applied to concrete facts.
No framework is perfect, but when the pace of change is unprecedented, the competing interests critical, and the consequences profound, we need fresh ideas that bring all concerns to the table.
Judge Braswell wishes to thank Judge Patrick Augustine for his OSHRC insights.