
The AI Law Professor: When you need AI governance that just works

Tom Martin, Author & Professor

· 8 minute read


As AI tools become common workplace fixtures, legal teams need governance frameworks that balance innovation with responsibility, give clear guidance, and actually get used rather than collecting dust.

Key points:

      • Governance must be practical — Complex policies that lawyers ignore are worse than no policies at all. Governance leaders should focus on daily workflows, not academic perfection. Yet even the most elegant policy fails if it cannot adapt to the pace of AI tool evolution.

      • The four pillars still matter — Transparency, autonomy, reliability, and visibility provide a tested framework for AI governance that can be scaled from solo practitioners to AmLaw 100 firms.

      • Risk stratification drives adoption — Not every AI use case deserves the same scrutiny. Smart governance distinguishes between drafting a motion and scheduling a meeting.


Welcome back to The AI Law Professor column. Last month, I unpacked GPT-5’s rollout and argued for maintaining human control even as AI systems become more autonomous. This month, I am delivering on my promise to outline governance that actually works, governance that lawyers will use rather than circumvent.

Good governance feels invisible until something goes wrong. In legal practice, we already have this — we use conflict check systems, document retention schedules, and billing protocols that capture time. AI governance should work the same way: structured enough to prevent problems, yet flexible enough to evolve with the technology, and practical enough that busy lawyers actually follow it.

Why most AI policies fail

The AI governance documents I see in practice fall into two categories: the overwrought and the undercooked. The overwrought policies read like academic treatises on algorithmic fairness — they’re impressive in scope, but impossible to implement. The undercooked policies amount to “don’t put client data in ChatGPT” and a prayer that nothing bad happens, or, worse still, an absolute ban on generative AI (GenAI).

However, both approaches miss the mark because they treat AI as either a silver bullet or a loaded gun, when the reality is somewhere in between and much more mundane. AI tools are productivity enhancers with specific strengths, specific blind spots, and the same change management challenges as any other technology adoption.

The practical problem is that lawyers need guidance on Tuesday afternoon when the brief is due Wednesday morning. Abstract principles about algorithmic bias do not help in that moment; detailed workflows that account for real deadlines and actual tool capabilities do.

Building on the four pillars

In previous columns, I have argued for four deployment principles: transparency, autonomy, reliability, and visibility. These are not just theoretical constructs; rather, they are the foundation of any governance framework that legal teams can actually implement successfully.

In the context of these four pillars, the most practical governance frameworks start with risk classification. Indeed, a three-tier system works well for most legal teams and includes:

High-risk uses — These include client-facing documents, substantive legal analysis, court filings, and anything involving privileged communications. You don’t want to get sanctioned for hallucinations! These tasks require mandatory human review, detailed documentation, and, oftentimes, client disclosure.

Medium-risk uses — These usually cover internal research, document review, draft preparation, and administrative analysis. These tasks benefit from AI assistance but need quality checkpoints and clear limitations on autonomy.

Low-risk uses — These more mundane uses encompass scheduling, formatting, basic summarization, and routine administrative tasks. These can run with minimal oversight, although they still require basic security controls.

This framework lets legal teams deploy AI tools confidently in low-risk contexts while maintaining appropriate caution for high-stakes work. It also creates a clear path for expanding AI use as tools improve and teams gain experience. Team leaders can also choose what roles have access to each tier.
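
To make this concrete, here is a minimal sketch of how a team might encode the three tiers in a simple lookup. The task categories, control names, and defaults are hypothetical illustrations, not a prescribed taxonomy; the actual mapping should come from your own policy.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # client-facing documents, legal analysis, court filings
    MEDIUM = "medium"  # internal research, document review, draft preparation
    LOW = "low"        # scheduling, formatting, basic summarization

# Hypothetical mapping of task categories to tiers; populate from your own policy.
TASK_TIERS = {
    "court_filing": RiskTier.HIGH,
    "privileged_communication": RiskTier.HIGH,
    "internal_research": RiskTier.MEDIUM,
    "draft_preparation": RiskTier.MEDIUM,
    "scheduling": RiskTier.LOW,
    "formatting": RiskTier.LOW,
}

# Controls required before AI output can be used, per tier.
TIER_CONTROLS = {
    RiskTier.HIGH: ["mandatory human review", "detailed documentation", "client disclosure"],
    RiskTier.MEDIUM: ["quality checkpoint", "limited autonomy"],
    RiskTier.LOW: ["basic security controls"],
}

def required_controls(task_category: str) -> list[str]:
    """Look up the controls a task must satisfy; unknown tasks default to HIGH risk."""
    tier = TASK_TIERS.get(task_category, RiskTier.HIGH)
    return TIER_CONTROLS[tier]

print(required_controls("internal_research"))
# ['quality checkpoint', 'limited autonomy']
```

Note the default: anything not explicitly classified falls into the high-risk tier, which keeps the framework conservative as new use cases appear.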

Change management as governance

AI tools evolve faster than traditional legal technology, and GPT-5’s rollout demonstrated how vendor decisions can disrupt established workflows overnight. Effective governance must account for this pace of change. It’s inconvenient, but it is our new reality.

Pin specific versions of AI models (for example, when using the OpenAI API, you can specify ‘gpt-5-2025-08-07’ rather than ‘gpt-5,’ which refers to the latest version of the model) to provide stability for mission-critical work. When you rely on specific AI behavior for document review or contract analysis, lock in the model version that delivers consistent results. Do not let automatic updates become uncontrolled experiments with client work.
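
As an illustration, here is a minimal sketch of version pinning with the OpenAI Python SDK, using the model identifiers mentioned above. The prompt and settings are placeholders, not a recommended configuration, and no client data should ever appear in a call like this without the controls your policy requires.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PINNED_MODEL = "gpt-5-2025-08-07"   # dated snapshot: behavior stays stable across vendor updates
FLOATING_MODEL = "gpt-5"            # alias for the latest version: updates automatically

# Mission-critical workflows call the pinned snapshot, not the floating alias.
response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[
        {"role": "system", "content": "Summarize the clause below for internal review."},
        {"role": "user", "content": "Placeholder clause text (no client data)."},
    ],
)
print(response.choices[0].message.content)
```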


For further help getting started, see the AI Governance Policy Checklist for Legal Teams.


Testing protocols create confidence in AI upgrades. Before deploying a new model or tool, run it through the same tasks you use for daily work; treat those tasks as your AI model test set. Compare accuracy, consistency, and completeness against your current baseline. Determine and record what improves and what degrades.
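
Here is a minimal sketch of that kind of baseline comparison, assuming you keep a small file of representative tasks with previously approved outputs. The run_model function and the exact-match check are placeholders; a real protocol would plug in your actual tool and replace the check with lawyer review or a task-specific rubric.

```python
import json

def run_model(model: str, prompt: str) -> str:
    """Placeholder: call your AI tool or API here and return its output."""
    raise NotImplementedError

def compare_to_baseline(test_set_path: str, candidate_model: str) -> None:
    """Run every task in the test set against the candidate model and report matches."""
    with open(test_set_path) as f:
        # Expected format: [{"task": ..., "prompt": ..., "approved_output": ...}, ...]
        test_cases = json.load(f)

    for case in test_cases:
        output = run_model(candidate_model, case["prompt"])
        # Placeholder check: substitute human review or a scoring rubric in practice.
        matches = output.strip() == case["approved_output"].strip()
        print(f"{case['task']}: {'ok' if matches else 'review needed'}")
```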

Rollback procedures provide insurance against AI failures. When a new model produces inferior results, you need quick paths back to last-known-good configurations. This may require maintaining access to legacy models or alternative tools.
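
A minimal sketch of a rollback path, assuming you record a last-known-good model per workflow; the workflow name and configuration structure here are illustrative only.

```python
# Hypothetical per-workflow configuration: "active" is the model currently in use,
# and "last_known_good" is the configuration you verified before the last upgrade.
WORKFLOW_MODELS = {
    "contract_review": {
        "active": "gpt-5",                      # floating alias currently being trialed
        "last_known_good": "gpt-5-2025-08-07",  # pinned snapshot that passed your test set
    },
}

def rollback(workflow: str) -> str:
    """Point the workflow back at its last-known-good model and return the active model name."""
    config = WORKFLOW_MODELS[workflow]
    config["active"] = config["last_known_good"]
    return config["active"]

print(rollback("contract_review"))  # -> gpt-5-2025-08-07
```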

Making governance stick

Even the best governance framework fails if lawyers do not follow it. Implementation requires attention to three practical realities:

      1. Integration with existing habits — This means building AI governance into systems lawyers already use. If your conflicts-check system can track AI tool usage, use it. If your document management system can flag AI-assisted work, configure it. Do not create parallel processes that compete with established habits.
      2. Training that focuses on competence — Such training teaches lawyers how to use AI tools effectively, not just safely. Include prompt engineering best practices, output validation techniques, and quality assessment skills. Lawyers who understand AI capabilities are more likely to respect AI limitations.
      3. Policies that evolve — Anticipate change rather than resisting it. Build quarterly review cycles into your governance framework and establish triggers for policy updates when new tools emerge or existing tools change capabilities. Plan for the next disruption rather than just responding to the last one.

The firms that get AI governance right will not just avoid problems; they will deliver better work more efficiently. Governance frameworks that emphasize quality control, appropriate use cases, and continuous improvement create the foundation for sustained AI value.

This requires moving beyond the defensive mindset that treats AI as a compliance burden. Instead, think of governance as the infrastructure that enables confident, reliable AI adoption. Good governance lets lawyers push AI tools harder because they have systems to catch failures and processes to maintain quality.

The legal profession has managed similar technology transitions before. We survived the shift from typewriters to word processors, from law libraries to legal databases, from paper filing to electronic court systems. Each transition required new governance approaches that balanced innovation with professional responsibility.

AI is no different in principle, although it is certainly happening at an exponential pace. The firms that adapt their governance frameworks to the speed of AI evolution, while maintaining the quality standards that clients expect, will lead the profession through this transition.

Implementation starts Monday

Governance policies work best when they start small and expand with experience. Begin with a pilot program that covers specific AI tools and specific use cases. Test the framework with real work under real deadlines. Refine the processes based on what actually happens, not what you think should happen.

Focus on the intersection of high-value tasks and low-risk scenarios. Document review for routine matters, contract clause library work, and research summaries for internal use — these are the sweet spots in which AI delivers clear value with manageable risk.

Build feedback loops that capture both successes and failures. I can’t emphasize this enough: Feedback loops are how we learn and improve! When AI tools work well, document why and what worked. When they fail, analyze the failure modes. Then, use this information to refine your risk categories, improve your testing protocols, and adjust your quality controls.
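
One lightweight way to capture that feedback is a structured log. The sketch below assumes a simple JSON-lines file with illustrative field names; the fields should mirror whatever your risk categories and quality controls actually track.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "ai_feedback_log.jsonl"  # hypothetical file name

def record_feedback(task: str, tier: str, outcome: str, notes: str) -> None:
    """Append one feedback entry: what was attempted, its risk tier, and how it went."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "risk_tier": tier,
        "outcome": outcome,   # e.g. "success" or "failure"
        "notes": notes,       # what worked, or the observed failure mode
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_feedback("internal research summary", "medium", "success",
                "Citations verified; summary accepted with minor edits.")
```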

Most importantly, remember that governance is not a destination but rather a process. The AI tools available next year will differ from those available today, and your governance framework must be robust enough to handle current tools and flexible enough to evolve with future capabilities.

The legal profession has always balanced innovation with responsibility. AI governance is simply the latest chapter in that ongoing story. Those firms that write that chapter thoughtfully, with practical frameworks that evolve with the technology, will shape the future of legal practice.


In my next monthly column, we’ll explore what happens when we ask, “What if AGI?” and discover how this simple question can reshape our thinking, refocus our priorities, and position us for greater success as lawyers.
