
10 things a GC thinks about AI and the law

Jerald S. Howe, Jr.  Former Executive Vice President, General Counsel & Corporate Secretary / Leidos Holdings, Inc.

· 8 minute read

A former corporate general counsel offers thoughts and speculations about how AI and the law may evolve together

These days one can hardly open a legal publication without seeing another article about artificial intelligence (AI), so I write with hesitation. For context, I approach the subject of AI as the recently retired general counsel of a Fortune 500 technology-enabled company — spending my days and many nights at the intersection of corporate governance, law, regulation, and the advance of technology.

Along with almost all of Corporate America, we analyze the development of AI from many perspectives, ranging from the enablement of our own and our clients' work to the potential disruption of our various lines of business. Meanwhile, we are doing our best to disrupt others for competitive advantage through the application of AI, in fields as disparate as cybersecurity, healthcare management, and autonomous defense platforms.

I am far from declaring myself an expert, but last year I took a semi-deep dive into the field of AI and the Law by co-teaching a course by that name offered within our company, with almost all of the students being our advanced AI engineers. The course was not called simply AI Law because there is so little of it. Rather, we explored how AI has interacted, and may be expected to interact, with established fields of law.

A GC’s thoughts about AI and the law

In delivering the course, I learned much more than I taught. Here are my top 10 take-aways about AI and the law:

    1. The exploration of how AI will shape the law, and vice versa, is more interesting than the question of how AI can be harnessed in the day-to-day practice of law, whether in private practice or within in-house legal departments. That too is an important subject on which much work is being done, including by our company; but in my estimation, the broader implications of AI for how society regulates itself through law and legal process are more compelling.
    2. AI is not always correct, and sometimes it even goes rogue. By now, all lawyers with even a passing interest in AI will have heard of the case of the lawyer who turned over the briefing of a motion to ChatGPT — a generative AI (GenAI) model — only to find out too late that the AI had hallucinated case citations. Naturally, sanctions followed. Using GenAI can also produce far more mundane errors, however. As part of preparation for the first session of our course, we asked ChatGPT to run some five-figure additions, and it got the wrong answers (oddly, off by 100 both times). One might respond that ChatGPT is a large language model, not a math engine. That is true, but it only underscores the larger point: you have to select the right AI engine, and prompt it correctly, to get good results. There is more to this dynamic, of course, as AI can be designed to self-correct over time, but the fundamental points remain. For AI to get along well with the law, it must first work well.
    3. To work, AI depends on several factors coming together, in no particular order: i) big and fairly clean data sets; ii) advanced computing; iii) energy to power that computing; iv) meaningful missions or use cases; and v) algorithms applied to those missions or use cases, both to train the AI and to set it in motion. The history of AI has seen bottlenecks in each of those five areas, and more blockers may reasonably be expected.
    4. The rapid advance of AI is at odds with the mitigation of climate change. The volume of computing necessary to run big AI applications is astounding, and it takes a great deal of energy to power that much computing. One can estimate that ChatGPT emits about the same quantity of greenhouse gases as a small city, every day. All else being equal, that means more global warming. This trade-off will need to be managed in the context of environmental laws and regulations.
    5. In the oft-quoted words of Oliver Wendell Holmes: “The life of the law has not been logic; it has been experience.” In the Anglo-American legal tradition, the common law approach has predominated, even with the scaling up of legislation in the 20th century. As politicians clamor to get into the ring and box with AI, cooler heads might advise letting the experience percolate a while via the common law method — at least in instances in which there are no clear, consensus legal or regulatory solutions.
    6. In the common law tradition, established professional and industry standards often form the basis for legal line-drawing about the extent of liability. Medical malpractice and technology failures leading to car accidents are two obvious examples. For this reason, we should pay attention to ethical standards — well short of law or regulation — that are emerging as guardrails around the use of AI. In my world, both the defense establishment and the intelligence community have promulgated ethical AI principles. Such standards should be expected to evolve into more law-like rules over time, through a combination of case precedent and legislation.
    7. Driving many fears about AI is the undeniable fact that much of AI is controlled by the largest tech companies, around which many antitrust concerns swirl. Those legitimate antitrust issues should not be conflated or confused with legal issues about AI itself. There is some overlap between the two sets of issues, of course, but they differ fundamentally.
    8. Justifiable concerns about bias in AI lead to fears of unlawful discrimination. AI bias can be inherited from ordinary human prejudice, and sometimes it is inadvertently trained into AI algorithms. Both of these issues are important to address. I would say that deployment of AI should be deferred, or at the least very carefully supervised (if that is possible), in situations in which: i) there is a prospect of invidious discrimination, for example on the basis of race, gender, or religion; and/or ii) fundamental rights are at stake, for example in criminal trials and sentencing. While there are many use cases for AI, these special cases should be reserved for last, when society has gained more confidence in the accuracy, reliability, and consistency of AI. I say this knowing that ordinary human decision-making is rife with unconscious (and some conscious) bias, so the status quo is far from optimal, and AI may actually be able to tame human bias. Thus, the safest course of action in these special cases is probably to deploy a combination of human and AI attention.
    9. The concept of AI legal neutrality — as posited by Ryan Abbott in his 2020 book The Reasonable Robot: Artificial Intelligence and the Law — is a useful one. We should not tilt the playing field in favor of, or against, AI relative to the current ways of doing things (which, by the way, may already be quite automated, just without AI enablement). AI neutrality is not the only principle to be applied, but it is a common-sense tool for achieving balance. Abbott presents practical examples in his book, ranging from taxation to tort law and beyond.
    10. Too often, our thinking tends toward cleanly separating what is AI from what is human. Yet as a practical matter, combining humans and AI will alleviate the risks of sole reliance on one or the other. The law needs to prepare for those boundaries to become grayer and grayer over time. Because human-AI teams will likely outperform humans or AI alone on a broad range of tasks for the foreseeable future, legal regimes that assume separation are bound to fail.

Net-net, should we be pessimistic and fearful about the acceleration of AI, or optimistic and hopeful? I posed that question in a lightning round at the end of our course on AI and the law. The consensus was about 7 on a scale of 1 to 10, tending toward the optimistic/hopeful end of the spectrum, with a spread from roughly 4 to 9. Ironically, more of the underlying reasoning was about people than about AI itself. The pessimists were more concerned that bad actors would get their hands on AI and use it for malevolent purposes. The optimists were more assured that smart, public-minded people would come together and mitigate the risks of AI while seizing the opportunities.

On balance, I find myself at about that 7 spot. Yet more than on almost any other subject I can think of, I reserve the right to modify my current assessment as the interplay of AI and the law evolves.
