
Balancing innovation and regulation: AI use in government agencies

Allyson Brunette, Workplace Consultant

· 6 minute read

How government agencies come to use generative AI and other innovative technologies in their operations will largely depend upon how the regulatory scheme unfolds

Artificial intelligence (AI) is rapidly becoming an integral part of our daily lives and is expected to be the top factor impacting our professional lives over the next five years, according to Thomson Reuters’ recent Future of Professionals survey report. Of course, this raises the question: Who sets the rules in this newly established playing field?

There is public consensus that regulation is not only necessary but also highly desirable: in a global KPMG study on trust in AI, 71% of participants shared this view. In the U.S., an Axios survey found that 56% of respondents preferred federal regulation over self-regulation by the tech industry.

The slow progress of AI regulation

The federal government, led primarily by the legislative and executive branches, has opened the conversation on AI regulation but has yet to formalize any concrete policies. U.S. Senate Majority Leader Chuck Schumer (D-NY) introduced the SAFE Innovation Framework, which focuses on establishing regulatory guardrails, fostering collaboration between tech industry leaders, and creating insight forums to educate members of Congress about emerging AI technology. Decisions are also pending on whether creating a new agency is warranted or whether existing agencies can oversee AI within their respective sectors.

All this stands in sharp contrast to the European Union’s proactive regulatory approach: its AI Act pre-emptively bans high-risk applications, and several European nations have temporarily restricted public tools such as OpenAI’s ChatGPT over data privacy concerns.


The U.S. Federal Trade Commission — jointly with the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission — released a statement asserting their authority to regulate AI, to better ensure fairness, equality, and justice in AI systems.

Indeed, unregulated AI poses various risks, such as the spread of deepfake misinformation campaigns, extensive personal data collection, and employment disruption. These potential harms fall under federal jurisdiction and encompass issues like unlawful discrimination in housing and employment, improper data-collection practices, and harmful outcomes that endanger consumers.

State and local government approaches

As the federal government lags behind, nearly half of U.S. states, primarily those along the coasts, have either enacted or proposed legislation related to algorithmic decision-making as of this past summer. The most comprehensive legislation thus far is New York City’s Local Law 144, which governs automated employment decision tools. Across these states, the prevailing trend is a focus on consumer privacy, allowing individuals to opt out of profiling, request the deletion of collected data, or, at a bare minimum, be notified when algorithmic decision-making and data collection are used.

Two states in particular, California and Pennsylvania, have taken a broader approach, focusing on the overall impact of algorithmic tools in decision-making rather than the specific tools. California emphasizes the disclosure of algorithmic tool usage to enhance transparency, while Pennsylvania requires the sharing of systemic information about these tools, including impact assessments that evaluate their functionality and effects on people.

Tech industry perspectives

Predictably, technology advances more swiftly than government regulation. Meanwhile, leaders from seven major tech corporations have introduced their own set of principles to enhance AI safety. Sam Altman, CEO of OpenAI (the maker of ChatGPT), advocates for a licensing and registration model for AI tools that exceed a crucial threshold of capabilities. However, some critics argue that this approach may limit competition.

Microsoft President Brad Smith describes AI as a co-pilot with the potential to change the world but emphasizes the need for human control and regulatory safeguards to act as emergency safety measures. Industry leaders share the perspective that speed in implementation is critical. While some state leaders have imposed temporary halts on emerging AI technology, congressional leaders appear inclined to support innovation without unnecessary hindrance, a move which the tech industry understandably supports.

The explainability and transparency of AI tools remain an ongoing dilemma. A 2023 Axios survey indicates that 62% of respondents are concerned about the future impacts of artificial intelligence.


Because algorithms often function as black boxes, producing useful outputs without clear explanations of how they reach their conclusions, it is crucial that government agencies using AI adopt a core set of principles emphasizing transparency. Unexplainable algorithmic decision-making tools risk producing dangerous, unethical, or opaque outcomes. Further, a Goldman Sachs report predicts significant economic disruption from algorithmic tools, affecting more than 300 million jobs across the U.S. and Europe.

Yet, as the Thomson Reuters Future of Professionals survey report points out, two contrasting visions of the future are emerging: one of opportunity and one of dystopia. In the former, AI leaders point to the growth opportunities and cost savings that will emerge from developing new tools and services and identifying new markets. AI will likely shift more work in-house, modify pricing structures, and move tasks to less-credentialed individuals. Indeed, experts point to the need for a fundamental understanding of AI and new workplace skill sets as the technology is rapidly operationalized.

Almost 90% of professionals surveyed in the Thomson Reuters report said they expect professionals to receive at least basic AI training over the next five years. It will be critical to upskill and reskill existing professionals, both in how to use these innovative tools and in how to continually recognize and correct unconscious biases.

Widespread implementation of the technology will rely heavily on low-code and no-code templates, and it is vital that these templates be designed with agency missions, ethical principles, and safeguards against discrimination in mind. Third-party evaluations and performance standards will also be needed to prevent unintended harm from these tools.
