
Between hype and fear: Why I have not issued a standing order on AI

The Hon. Maritza Dominguez Braswell, U.S. Magistrate Judge / District of Colorado

· 7 minute read

The author argues that restrictive or mandatory AI orders in the legal system could have unintended consequences. She has taken a different approach to AI in legal filings, asking lawyers to engage with AI, balance caution and innovation, and use it to support rather than replace human judgment in law.

Key insights:

      • The legal system should avoid both overhyping and over-fearing AI — instead, it should adopt a balanced approach that emphasizes careful, deliberate engagement and responsible experimentation.

      • Mandatory AI disclosure or certification orders do not necessarily improve the reliability of legal filings — and they risk creating confusion, false assurance, and additional hurdles, especially for smaller law firms and self-represented litigants.

      • Rather than imposing a restrictive order, the author issued guidance — guidance designed to promote responsible AI use, focusing on verification and accountability while leaving space for lawyers to engage with AI as a tool for augmentation rather than automation.


The legal system is being pulled in two directions when it comes to AI: On one side is overconfidence, the idea that AI will quickly solve legal work by automating it; and on the other side, fear — the feeling that AI is so risky that the safest response is to restrict it, discourage its use, or fence it off with new rules.

Both reactions are understandable, but neither is getting us where we need to go.

In a recent interview, Erik Brynjolfsson, the Director of the Stanford Digital Economy Lab and a leading voice at the Stanford Institute for Human-Centered AI, makes two simple but important points that explain why both hype and excessive skepticism miss the mark.

First, those caught up in the hype are moving too quickly toward automation. Tools work best when they support people, not when they try to stand in for them. Second, skeptics are overreacting to early stumbles. Early failures do not mean AI is a dead end. More often, they mean institutions are still learning how to use it well.

There is a middle ground. It’s not about rushing ahead, and it’s not about slamming the brakes. It’s about careful, deliberate use: testing tools, learning their limits, and moving forward with intention.

That perspective informs my approach.

Standing orders on AI

After well-publicized AI mistakes, it makes sense to look for something concrete that signals seriousness, and disclosure and certification orders do that. They tell the public and the bar that courts are paying attention. However, I don’t think disclosure does the work people hope it does, and I worry it pulls attention away from things that matter much more. I’ll explain.

Disclosure does not make filings more reliable — Knowing whether a lawyer used AI to help draft a filing does not tell me whether that filing is accurate, complete, or well supported. Long before modern AI entered the picture, courts had to guard against overstated arguments, bad citations, and unsupported claims. Knowing which tools were used to prepare a filing did not make those filings or the tools more reliable then, and it does not make them more reliable now.

Certifications and disclosures may offer false assurance — The spotlight is on hallucinations (AI-generated fake cases or citations), but courts already have ways to identify and address those problems. The more concerning risks are quieter: bias, over-reliance on AI, or subtle framing that influences how an argument is presented. I’m also extremely concerned about deepfakes, which are much more difficult to detect. Disclosure about AI use in briefs does not address any of those risks, and it may divert attention from them. It also creates a false sense that a filing is more careful or reliable than it actually is.

Additional orders can add confusion — AI standing orders are growing in number, and they take very different approaches. Some require disclosure, some require certifications, some impose limits, and some ban AI outright. Definitions vary or are missing altogether. Lawyers can comply, but doing so takes time and careful reading, and, as noted above, it doesn’t necessarily improve the quality of what reaches the court.

Early in my time as a United States Magistrate Judge, I made it a point to seek feedback from the legal community about what made legal practice more difficult than it needed to be. One theme came up repeatedly — keeping track of multiple, overlapping judicial practice standards was tough. In response, I worked with my colleagues to consolidate standards into a single, uniform set. I see a similar risk emerging with AI standing orders. Well-intentioned but divergent approaches can splinter practices and create new hurdles, particularly for smaller law firms and self-represented litigants. I don’t want to issue a standing order that adds another layer of complexity without meaningfully improving the quality of what comes before me.

The rules already cover the landscape — I have tools to deal with inaccurate or misleading filings. Lawyers are responsible for the work they submit, and Rule 11 doesn’t stop working because AI was involved. If something is wrong or misleading, I can address it.

Certification or disclosure could be misinterpreted as discouraging AI use, and I worry about who gets left out — When new tools are treated as suspect or off-limits, those with the most resources find ways to keep moving forward, while smaller firms and individual litigants fall further behind. A system that chills responsible experimentation risks widening access gaps instead of narrowing them. In my view, everyone should be exploring ways to “augment” themselves, as Brynjolfsson puts it. So long as we remain accountable for the result, augmentation is how lawyers, judges, and other professionals will retain their value in a legal system that is becoming more AI-integrated every day.

Rather than issue a standing order that limits AI use or requires certification or disclosure, I offer guidance focused on basics: Check your work, protect confidential information, and take responsibility for what you submit. I published this guidance for those interested in my perspective, but it is deliberately not an order, so as to avoid the concerns described above.

We shouldn’t fear AI — we should shape it

Some warn that AI is coming for the legal profession; however, I’m more optimistic (and perhaps more idealistic).

In my view, the justice system depends on human judgment. Empathy, discretion, humility, moral reasoning, and uncertainty are not bugs in the system; rather, they’re an essential part of the program. If we want to preserve human judgment in the age of AI, we must be involved in how AI is used. And we can’t do that from a distance. We have to engage with AI, understand its limits, and model responsible use.

Used carefully, AI can help judges:

      • organize large records,
      • identify gaps or inconsistencies,
      • spot issues that need a closer look,
      • identify and locate key information,
      • translate legal jargon to help self-represented litigants better understand what is being asked of them, and
      • reduce administrative drag so more of a judge’s time is spent on decision-making.

This kind of use does not replace us; rather, it supports us. It augments us so we do our work as well as we can, help as many people as possible, and still keep human judgment at the center of everything.

Why this moment matters

The AI conversation in law will remain noisy for a while. Some legal professionals will promise too much. Others will warn against everything. The better path is in the middle — engage, test, verify, and adjust.

As the Newsweek article suggests, this is a watershed moment. Not because AI will decide the future of our institutions, but because we will. The choices we make now will shape what AI does in the justice system, and just as importantly, what it does not do.

We should not be afraid of AI. We should help shape how it is used so it strengthens, rather than replaces, the human judgment at the heart of the legal justice system.

