
Addressing bias in AI: Surveying the current regulatory and legislative landscape

William Josten  Senior Manager, Enterprise Content - Legal, Thomson Reuters Institute

· 5 minute read


A special report by the Thomson Reuters Institute and Duke University School of Law looks at how biases may enter into AI outputs and how that best can be managed

Generative artificial intelligence (AI), including tools like ChatGPT, has become a nearly universal topic of conversation over the past several months. Research from the Thomson Reuters Institute found high awareness of generative AI technologies and tools, along with a generally high level of confidence that these tools can be applied to professional workflows.

There is less confidence, however, about whether these technologies should be applied to workflows in the tax & accounting or legal professions. While nearly half of the respondents to the survey, conducted for the reports on the impact of ChatGPT and generative AI in the legal, corporate, and tax markets, believed that AI should be applied to their work, others were vocally opposed.

Reasons for opposition ranged from a lack of confidence in how the AI is trained to understand nuance in the subject matter to outright statements that using AI to practice law was prima facie malpractice. These concerns, however, were largely allayed when respondents were asked about applying AI to areas of work requiring lesser degrees of professional judgment.

Yet these worries remain valid, and they point to broader concerns surrounding AI usage. In fact, the respondents who said they were hesitant to use AI because of how it is trained were touching on exactly those broader concerns.

For any sort of AI, the quality of the output necessarily depends on the quality of the input. In other words, if the AI has not been adequately trained and is not kept current on the matters it is asked to address, it will produce poor results.

This need for training carries its own set of risks, specifically that any errors or biases present in the training will carry through to the AI’s outputs. Indeed, it is this potential for bias in AI outputs that is the focus of the recent white paper from the Thomson Reuters Institute and Duke University School of Law, Addressing Bias in Artificial Intelligence: The Current Regulatory Landscape.

Governments cautiously eye regulation

Many who are concerned about future applications of AI in professional settings and the risks that may accompany its deployment are taking a wait-and-see approach to how governments will regulate AI. And as AI grows in importance, regulation will increasingly affect its development. Not surprisingly, some early efforts at regulation are already underway.

As discussed in the paper, the European Union, the United States federal government, and even some individual American states have begun to work toward meaningful AI regulation, with some efforts expected to be put into effect in the relatively near future.

In the U.S., for example, the Federal Trade Commission and the Equal Employment Opportunity Commission have both recently introduced initiatives aimed at establishing guardrails around AI and its potential impact on the constituencies those agencies are charged with protecting.

In perhaps one of the more widely anticipated regulatory developments, the E.U. is currently working on what has become known as the AI Act, which in its current proposed form would mandate tiered levels of administrative oversight based on a classification framework that groups AI algorithms by perceived risk. Many are looking to the AI Act to set the standard for AI regulation, given the E.U.’s demonstrated history of taking the lead on similar topics, as it did with the General Data Protection Regulation (GDPR).

In addition to potential regulation by the U.S. federal government and the E.U., the paper discusses potential sources of bias in AI development. As the paper details, such biases can be systemic, computational and statistical, or human-cognitive, and sometimes they stem from more than one of these factors simultaneously. These biases may not be readily perceptible, but their negative impacts, particularly when AI is used to make predictions or recommendations about individuals or groups, are cause for great concern.

As AI continues to proliferate, the chance of negative impact from bias will expand with it. Today’s legal and tax & accounting professionals are understandably cautious about deploying AI in certain contexts because the risks of doing so are not yet fully understood.

Understanding the nature of these risks, including those introduced by potential bias in the system, is a key component of mitigating the possibility of negative outcomes from the use of AI. And knowing what steps governments are taking to ensure accountability in AI development and utilization should give users greater confidence in the technology’s outputs and help ensure that AI-based tools still in development will be more reliable.


You can download a copy of the new white paper, “Addressing Bias in Artificial Intelligence: The Current Regulatory Landscape,” from the Thomson Reuters Institute and Duke Law.
