
Legalweek 2024: Current US AI regulation means adopting a strategic — and communicative — approach

Zach Warren  Manager for Enterprise Content for Technology & Innovation / Thomson Reuters Institute

· 6 minute read

At both the federal and state levels, the push to regulate AI has begun, and although many of these regulations are still in their early phases, Legalweek panelists say organizations should start sharpening their usage, privacy, and communication policies now

NEW YORK — At this point, it’s undeniable that generative artificial intelligence (Gen AI) will have some sort of impact on the legal profession. After all, according to the Thomson Reuters Future of Professionals report, 70% of legal industry respondents believe the introduction of AI or Gen AI into their profession will have a transformative or high impact on the industry’s future within the next five years — more than any other trend.

However, particularly in legal, there remain two major sticking points to widespread adoption: ethics and regulation. While the technology itself is rapidly emerging, Gen AI is not likely to see widespread adoption in the legal industry if it can’t be trusted, and many legal practitioners are waiting for additional guidance before using Gen AI to tackle a wider range of tasks.

According to panelists on The Ethical and Regulatory Impacts of Using Your Data to Train AI, a session at the recent Legalweek conference, that guidance is coming in the United States, albeit slowly and in pieces. Even so, the guidance available now is enough to get started on Gen AI risk-mitigation planning, panelists said.

US Gen AI regulation today

Across most US jurisdictions, Gen AI regulatory activity is intense but still in the early stages. For example, the panelists noted that many US states are beginning to form councils and task forces to look into AI. “We’re now seeing this over-arching focus on the risks… but that trickles down into the state legislatures themselves,” said panelist Garylene Javier, a Privacy & Cybersecurity Associate at Crowell & Moring.

Much of the AI focus has centered on the privacy rights of individuals, especially consumer protection and the right to opt out of AI systems. Javier explained that all states that have adopted privacy laws, except Utah, have included opt-out provisions as part of the legislation. “What we’re seeing is a shift of all of these different privacy laws with that particular focus, particularly when it comes to automated decision-making,” she explained.


On the federal level, President Biden’s 2023 executive order on AI remains the most relevant guidance, particularly as the White House announced in early February that all executive agencies had completed the 90-day actions tasked by the order. Panelist Ignatius Grande, Director at Berkeley Research Group, cited the Federal Trade Commission (FTC) as one regulatory agency to watch, explaining that it has been among the most proactive in the AI space.

“They’re coming out and saying, there are copyright issues with LLMs [large language models] and AI,” Grande said.

Of course, the upcoming 2024 presidential election could cause upheaval for AI regulation. As for exactly how, the panel did not speculate, saying only that it could have wide-reaching ramifications. Jason Winmill, Chair of legal industry organization Buying Legal Council, observed that the previous two presidential administrations viewed technology regulation very differently, and that “anyone in this space needs to be thinking through the scenarios.”

Winmill drew an analogy to a pool table, where the balls are not independent but interact with one another in seemingly endless combinations. What matters, however, is who’s lining up the shot. “We don’t know what that pool table is going to look like in 2 to 3 years,” he said.

The legal industry’s response

Given the potentially transformative nature of Gen AI, it’s reasonable to expect widespread usage in the legal industry within the next 5 to 10 years. But the panel noted that in order to do Gen AI right, corporate legal departments and law firms will need to do their due diligence.

Grande told the story of Samsung, which ran into a problem last year with employees using ChatGPT. One employee uploaded source code to ChatGPT, where it could be viewed externally, while another employee put a confidential recording into the tool to create a summary. Samsung ended up banning ChatGPT outright until it could put more controlled Gen AI systems in place.

There’s a lesson for the legal industry no matter where you sit, Grande added. “As far as what law firms can be doing, part one is understanding and giving guidance to their employees… and have in place acceptable ways of [using] generative AI. There are a lot of questions that you need to ask, and if you’re someone who’s not experienced in the area, it’s really hard to know what questions to ask.”

Winmill agreed, noting that from the in-house legal perspective, AI use needs to be ethical but also should be “driven by real business needs.” Answering the relevant AI questions starts with getting a number of different parties in the room: legal, for certain, but also procurement, IT experts, AI experts, and business advisors. Concerning the ethics risks, he added: “The space is just moving way too fast, and you’re going to have to have outside counsel or advisors who can advise on a timely basis.”


Of course, that doesn’t mean throwing as many people as possible at the AI problem. The key is to be strategic, Winmill said, and bigger teams aren’t always better. “The size of the table is growing, it’s growing faster, and that is also a concern,” he explained, noting that academic literature shows larger teams can have communication issues. “We’re adding more resources in, but we’re also going to be encountering more problems.”

Communication is also key when outside parties are involved, particularly when firms are dealing with regulators. Corporate legal departments and law firms need to demonstrate first that they have a plan, Javier said, and then demonstrate that they are doing everything they can to stick to that plan. This has been a particular point of emphasis when analyzing AI for potential bias, which has increasingly been the subject of US state legislation and lawsuits.

Organizations should be “making sure from an ethical standpoint, you’re demonstrating across the entire process [that] you did as an organization or a law firm take the time to step through and see how it affects certain demographics or customers,” Javier said, adding that as hard as legal departments or law firms may try, any Gen AI policy is “not going to [be] 100%” effective in catching every problem.

“Something will happen tomorrow where it’s going to be totally deficient,” she said. “The best thing we can do is try — try really hard.”
