
Responsible use of AI in government fraud prevention

Allyson Brunette, Workplace Consultant

· 6 minute read

How can government agencies best use generative artificial intelligence applications to fight fraud that is growing more sophisticated with each passing month?

As the private sector invests heavily in generative artificial intelligence (AI) applications, the public sector is playing catch-up. To combat sophisticated fraud attempts, government agencies are increasingly turning to generative AI, while carefully considering the philosophical, functional, and ethical challenges of deploying AI technology.

The wide range of new government benefit programs released during the COVID-19 pandemic set the stage for an uptick in government fraud activity. Three years later, fraudulent activity has not slowed, and experts predict that these behaviors will persist.

Large-dollar government programs remain the biggest targets of fraudsters, with illicit billing schemes and forged documents used to apply for legitimate benefits remaining the top forms of government fraud. State agencies are more vulnerable to such fraud than local governments because of the breadth of their programs and their dependence on online application portals. The situation is worsened by limited budgets, the loss of institutional knowledge to large-scale retirements, and growing workloads in understaffed government offices.

The Wild West of AI experimentation

Federal agencies appear to be embracing AI more rapidly than state agencies. A 2020 report by the Administrative Conference of the United States found that 45% of the 142 federal departments, agencies, and subagencies studied were already experimenting with AI tools.

While the federal government is following the lead of private sector companies in the financial and healthcare industries in adopting AI, its tools still have catching up to do. The report showed that government AI tools lag in sophistication, are mostly developed in-house, and lack clear reporting on how effectively they meet agency needs.

Indeed, AI is particularly strong at identifying activity that does not conform to regular patterns across large data sets, offering organizations the opportunity to reduce fraud while lowering the cost of detection. As individual actors and organized crime rings continue to target government benefit programs with high-tech schemes, government agencies must keep pace with AI technology to detect and prevent fraud attempts effectively.
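To make that pattern concrete, here is a minimal sketch of the kind of anomaly detection described above, written in Python with scikit-learn. The file name, column names, and threshold are hypothetical placeholders, not a description of any agency's actual system.

```python
# Minimal sketch: flag benefit claims that deviate from typical patterns.
# The input file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

claims = pd.read_csv("claims.csv")  # hypothetical claims extract
features = claims[["claim_amount", "claims_last_90_days", "account_age_days"]]

# Isolation Forest separates records with random splits; unusual records are
# isolated quickly and receive an outlier label of -1.
model = IsolationForest(contamination=0.01, random_state=42)
claims["anomaly"] = model.fit_predict(features)

# Route flagged claims to human reviewers rather than denying them automatically.
review_queue = claims[claims["anomaly"] == -1]
print(f"{len(review_queue)} of {len(claims)} claims flagged for manual review")
```

The last step is the point of a sketch like this: the model surfaces leads, and trained staff make the actual determination.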

Challenges to AI implementation

The AI Training Act, enacted in October 2022, signals the federal government's approach to AI going forward. The law instructs federal agency employees to explore AI's potential benefits for the federal government while weighing the risks and ensuring privacy, safety, and reliability in its use.

Philosophically, the use of AI in government decision-making is challenging because many AI tools lack transparency. Government decisions, especially in administrative matters, require an explanation when services or benefits are denied to the public. Government organizations, which are held to high standards of transparency and accountability, may need to restrict some elements of AI's deep learning so that the decisions reached can be explained.

However, agencies also want to avoid disclosing the inner workings of their AI tools, lest bad actors reverse-engineer them to perpetrate further fraud, a tension that sits uneasily with expectations of government transparency.

Functionally, implementing AI across federal agencies may run up against the Privacy Act of 1974, which restricts federal agencies from sharing personal data across organizations, even though generative AI relies on increasingly large samples of standardized data.

Mitigating risks of model bias

As of 2020, federal agencies using AI lacked protocols to assess potential model bias, which poses risks to both program beneficiaries and the agencies themselves. Model bias is a known weakness of AI: the data used to train a model can lead to systematic errors that favor one outcome over others. Machine learning may further encode this bias, and the risk of discrimination grows with larger data sets, since models may learn to associate outcomes with protected characteristics. The result can be unfair impacts on protected groups that damage public trust.
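As an illustration of what such a protocol might start with, the sketch below compares how often a model flags applications across demographic groups. The data file, column names, and the 1.25 disparity threshold are hypothetical; a real bias audit would use multiple metrics and legal review.

```python
# Minimal sketch of one bias check: compare flag rates across groups.
# The file, column names, and threshold are hypothetical placeholders.
import pandas as pd

audited = pd.read_csv("flagged_claims_with_demographics.csv")

# Share of applications the model flagged as suspicious, by protected group.
flag_rates = audited.groupby("demographic_group")["flagged"].mean()
print(flag_rates)

# Disparity ratio relative to the least-flagged group; large ratios warrant review.
disparity = flag_rates / flag_rates.min()
print(disparity[disparity > 1.25])
```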

Users within organizations must ensure that their AI systems are trustworthy, meaning fair, transparent, accountable, safe, secure, and private. Examples of this problem continue to flare up: the U.K. Department for Work and Pensions, for instance, faced criticism for its lack of transparency in deploying AI against welfare fraud during the COVID-19 pandemic. The department's algorithms were not fully explained, and its ability to test for unfair impacts was limited.

Further, a 2023 IBM study found that more than half of the 3,000-plus CEOs interviewed expressed concern about cybersecurity risks related to the use of generative AI, a threat exacerbated by the continued growth of the Internet of Things.

In response to these concerns, Maine's Chief Information Security Officer Nathan Willigar took the newsworthy step of declaring a six-month pause on the use of generative AI in state government. The purpose of the pause is to assess risk and develop policies and procedures across state agencies; one of the major reasons Willigar offered is concern about adversarial uses that could threaten cybersecurity.

For these and other reasons, it could be argued from an ethical standpoint that generative AI is not an appropriate tool for every government system, the justice system being a prime example. While AI can reduce barriers to access to justice for vulnerable populations and cut costs, some argue that the human-guided nature of justice cannot be replicated by a machine.

Striking the right balance between using AI and maintaining human-guided systems is the fine line government agencies must walk. If the public suspects that AI use leads to unfair or inequitable outcomes, they will be less likely to support government innovation. Before fully embracing AI, therefore, government organizations must prepare their teams for its responsible use.

How governments can best prepare for AI implementation

Enhancing baseline user education on how algorithms work, avoiding and correcting model bias, and standardizing and compiling data in a way that respects basic human rights, including privacy, are critical steps for government agencies as they move to implement AI-backed tools and solutions.

The private tech sector often embraces the idea of failing fast so that it can learn quickly from mistakes. Public agencies might want to take a page from that book by creating sandboxes where AI can be internally tested, evaluated, and refined before public deployment, improving product delivery while maintaining public trust.

Government agencies need to carefully consider the ethical implications of using generative AI in different contexts. They should also collaborate with subject matter experts, including lawyers, ethicists, and social scientists, to identify the risks of AI and mitigate them.