Government agencies need to train their employees on the latest innovative technology, especially generative AI, so that agencies can continue delivering needed services to citizens.
The fast-paced development of generative artificial intelligence (Gen AI) and its increasing presence in everyday work routines pose a challenge for government agencies. Without a proactive approach to embracing this technology, employees may adopt and use it without any guidance from agency leaders.
A report from Harvard University’s Law Center points out that Gen AI has the capacity to democratize expertise, moving us away from a world where law and government services are understood only by subject matter experts. The key question then becomes how government agencies can effectively use AI to benefit the public while also protecting the public’s best interests.
Understanding which tools are in use
At the state, county, and municipal levels, an imperative first step involves identifying the Gen AI tools and systems currently in use. For example, Connecticut state legislation enacted in the summer of 2023 mandates that all generative AI and automated decision-making tools be inventoried by the end of this year. Further, state agency AI policies and an AI Bill of Rights will follow in the state in 2024.
Similarly, municipalities including San Jose, Calif., and Seattle have implemented regulations to ensure responsible use of algorithmic tools. In San Jose, an algorithmic tool must receive approval from the city’s digital privacy office, while in Seattle it must be approved by the purchasing division. San Jose goes a step further by maintaining a public-facing algorithm register, which explains approved AI tools and their use cases in plain language.
Establishing shared values and stressing accountability around AI use
Government agencies face a delicate balance between regulating innovation and fostering progress in the delivery of government services. The State of Maine, for example, drew a hard line earlier this year with its decision to pause all Gen AI use by state agencies for six months. A less restrictive approach to Gen AI adoption involves establishing a set of common core values to guide the use of this innovative technology.
Pennsylvania Gov. Josh Shapiro issued an executive order this fall, emphasizing 10 fundamental values that should govern the application of this evolving technology in state operations. Broadly, these values seek to ensure that Gen AI use empowers employees, furthers agency mission and equity, but also protects privacy and security.
Municipalities and counties that are implementing AI use policies and guidelines place a strong emphasis on holding employees accountable for the accuracy of the content they produce, whether with or without the assistance of Gen AI. As Chief Information Officer Santiago Garces noted in reference to the City of Boston’s interim guidelines for use of Gen AI, “technology enables our work, it does not excuse our judgment nor our accountability.”
In addition, employees using Gen AI technology in cities like Boston, Seattle, Tempe, Ariz., and San Jose are required by their respective policies to fact-check the accuracy of generated content and disclose the use of AI in content creation.
Santa Cruz County, Calif., has a policy reminding staff to treat AI prompts as though they were visible to the public online and subject to public records requests. And the City of Boston’s policy stresses the importance of protecting resident and customer privacy by never sharing sensitive or personally identifiable information in prompts.
Expanding access to justice and providing mechanisms for safe innovation
A topic that has sparked debate in recent years is the use of Gen AI tools for legal interpretation. A 2022 publication from Yale University’s Journal of Law and Technology outlines the potential benefits and risks of these tools. It highlights the shortage of civil legal aid attorneys and the limitations of pro bono work, and envisions how non-lawyers could greatly reduce the cost of, and increase access to, legal services.
The risks of moving too quickly in this direction include inherent bias in the existing digital records on which AI tools rely, as well as the fact that Gen AI’s ability to recognize patterns does not necessarily extend to the nuanced judgment needed to inform legal advice. Another publication, issued earlier this year by Vanderbilt University’s Journal of Entertainment and Technology Law, suggests that more stable fields of law, such as trust law, may be better candidates for early AI application.
One major finding of that publication is that Gen AI can shift the legal industry away from hourly billing toward flat-fee service provision as document automation is streamlined. This aligns with findings from the Thomson Reuters Future of Professionals report, which indicate that, with AI assistance, less credentialed employees can now complete work that previously required credentialed employees at higher hourly rates.
The State of Utah made history this year by becoming the first state to launch a legal services innovation sandbox, called the Office of Legal Services Innovation, which oversees non-traditional legal businesses and legal service providers with the aim of ensuring that consumers have access to innovative, affordable, and competitive legal services. The agency actively supports emerging tools and platforms that offer unique and creative approaches to providing legal services, particularly to historically underserved communities. Entities within the sandbox are audited monthly to measure utilization and identify any potential consumer harm.
Of course, a major barrier to collaboration between legal service experts and technologists in advancing AI for legal services is the American Bar Association’s restrictions on non-attorneys owning or investing in law firms. While this restriction is intended to safeguard the independent judgment of attorneys, it also hampers innovation and collaboration between the legal and technology sectors.
Innovation and experimentation should, of course, be deployed first in areas where the risk of harm is lower. For example, Santa Cruz County’s AI usage policy explicitly advises employees against using AI tools in critical decisions related to hiring or other sensitive matters where bias could have a negative effect.