Compliance & Risk

2025 Predictions: How will the interplay of AI and fraud play out?

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 5 minute read

The rise of sophisticated AI tools, such as GenAI and LLMs, has introduced new challenges in the fight against fraud; however, these same technologies also offer powerful solutions for detecting and preventing these crimes

As we move forward to 2025, it is evident that fraud persists and that those intent on exploiting the nation’s corporate, government, and financial systems will continue their efforts. With every technological advancement, new scams and fraudulent enterprises emerge.

In 2023, reported fraud losses surpassed $10 billion, representing a 14% increase from 2022. And in the first quarter of 2024 alone, consumers reported losing $20 million to government impersonation scams involving cash payments. This figure pertains to just one type of scam over a single quarter. As the year advances, it is expected that the total number of reported scams will continue to increase annually.

Types of fraud on the rise

Due to the sophistication of artificial intelligence (AI), generative AI (GenAI), retrieval-augmented generation (RAG), and other large language model (LLM) tools, the complexity in fraud schemes is growing. There are several areas to keep an eye on, including:

Deep fakes of documents

GenAI now has the capability to create high-quality deepfakes of identification documents. These deepfakes are so convincing that they include shadows and other markers of authenticity. In November 2024, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued an alert specifically encouraging the review of identification documents. The agency reported “an increase in suspicious activity reporting by financial institutions, describing the suspected use of deepfake media, particularly the use of fraudulent identity documents to circumvent identity verification and authentication methods.”

FINRA provided further context, noting that fraudsters can use GenAI to create convincing fake ID documents — such as driver’s licenses or professional credentials — that might also incorporate AI-generated images. Illicit actors then can use these documents to verify identity to fraudulently open a new account or to take over an existing account. Ping Identity’s 2024 survey, Fighting the Next Major Digital Threat: AI and Identity Fraud Protection Take Priority, found that 97% of organizations are having difficulty verifying identity.

Deep fakes of videos

AI algorithms have the capability to alter or substitute faces in video footage. Such manipulated videos can be misused as business records, disseminated as false news stories, or, in certain instances, presented as evidence in judicial proceedings. These videos also can be used to further convince victims of a scam’s legitimacy. This scam is already being carried out by foreign-based groups, such as the so-called Yahoo Boys in Africa.

GenAI-enhanced scams

GenAI also can be utilized in ways that enhance the credibility of fraudulent schemes. It can improve the grammar or address other issues in emails and websites, making them more convincing. Further, LLMs are capable of creating sophisticated chatbots, which further enhance the plausibility of these scams.

Using AI to prevent fraud

There is some good news, too. In the coming year, individuals and organizations can combat fraud more effectively through increased vigilance and awareness of AI.

Thinking logically and acting deliberately

It’s critical that all parties look for signs of AI use, paying close attention to the subtle cues and discrepancies that can tip off a user to possible AI trickery. It’s also important to think rationally rather than emotionally. If you receive a call requesting immediate action, respond logically and verify the information provided. Never simply give out your personal information over the phone or by email.

Indeed, always act deliberately. Before transferring any funds, ensure that the transaction is traceable and conducted through reputable sources, such as banks. And report all suspected and actual fraud incidents promptly to ensure thorough tracking and documentation.

Making AI part of the team

In Thomson Reuters’ recent Future of Professionals report, 78% of surveyed professional service workers said they believe AI is a force for good in their profession, indicating that people are thinking positively about AI. Indeed, it is important to see AI as a part of your organization’s fraud-fighting team, rather than as a tool of the enemy.

Organizations can also combat fraud using AI-based tools, and LLMs and RAG assistants could be valuable assets in fraud detection. LLMs excel at natural language processing and can analyze text within transactions, emails, or messages to identify suspicious language, unusual phrasing, or patterns that may indicate fraudulent intent. For instance, LLMs can track anomalies in speech or writing that would link the same party attempting to open multiple accounts. LLMs also can summarize complex fraud cases by extracting key information from various documents, which could save investigators time by providing a concise overview of the documents.
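
To make the idea of linking accounts by writing style concrete, here is a minimal, hypothetical sketch in Python. A production system would use an LLM or embedding model; this toy version uses character n-gram fingerprints and cosine similarity instead, and all the application text is invented for illustration.

```python
from collections import Counter
import math

def style_fingerprint(text, n=3):
    """Character n-gram counts as a crude writing-style fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a, b):
    """Cosine similarity between two fingerprints (0 = unrelated, 1 = identical)."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two account applications with suspiciously similar phrasing,
# plus one unrelated application (all text invented for the example).
app_a = "Kindly expedite my account opening, I am needing urgent access to funds."
app_b = "Kindly expedite the account opening, am needing urgent access to my funds."
app_c = "I'd like to open a joint savings account for my family, no rush at all."

fp_a, fp_b, fp_c = (style_fingerprint(t) for t in (app_a, app_b, app_c))
print(similarity(fp_a, fp_b))  # high score suggests the same author
print(similarity(fp_a, fp_c))  # lower score suggests an unrelated applicant
```

The same comparison generalizes: fingerprints for every new application can be checked against fingerprints from recently opened accounts, and pairs above a tuned threshold routed to a human investigator.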

On the other hand, RAG assistants, which can access and process vast knowledge bases, can cross-reference transactions with external data sources to flag suspicious transactions and identify discrepancies or anomalies that might signal fraud.
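
The cross-referencing step can be sketched as a simple lookup against a knowledge base. In a real RAG assistant the knowledge base would be retrieved from external data sources; here it is a pair of hard-coded tables, and every merchant name, field, and transaction is invented for the example.

```python
# Hypothetical external records a RAG assistant might retrieve.
KNOWN_MERCHANT_COUNTRY = {"acme-supplies": "US", "globex-parts": "DE"}
WATCHLIST = {"shell-intl-9931"}

def flag_transaction(txn):
    """Return a list of reasons a transaction looks suspicious (empty if clean)."""
    reasons = []
    if txn["counterparty"] in WATCHLIST:
        reasons.append("counterparty on watchlist")
    expected = KNOWN_MERCHANT_COUNTRY.get(txn["counterparty"])
    if expected and expected != txn["country"]:
        reasons.append(f"country {txn['country']} conflicts with records ({expected})")
    return reasons

transactions = [
    {"id": 1, "counterparty": "acme-supplies", "country": "US", "amount": 120.0},
    {"id": 2, "counterparty": "acme-supplies", "country": "RU", "amount": 9800.0},
    {"id": 3, "counterparty": "shell-intl-9931", "country": "US", "amount": 50.0},
]

for t in transactions:
    reasons = flag_transaction(t)
    if reasons:
        print(t["id"], reasons)  # transactions 2 and 3 are flagged
```

Transaction 1 matches the external records and passes; transaction 2 is flagged because its country conflicts with the merchant's known location; transaction 3 is flagged by the watchlist check.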

Taken together, these tools can analyze historical fraud data and help build predictive models to identify potential future fraud attempts based on similar patterns. Essentially, these AI tools can act as intelligent assistants, augmenting the capabilities of human investigators and enhancing the speed and accuracy of fraud detection.

Enhancing ID verification

As fraud cases continue to increase and gain in complexity due to the advancements in GenAI and other advanced tech, it is imperative for companies, financial institutions, and government agencies to enhance their identity verification processes. In the forthcoming year, these verification tools will progressively address some of the security gaps that have emerged because of the rapid evolution of GenAI.

As we look ahead to 2025, it is clear that the landscape of fraud is evolving rapidly, driven by advancements in AI. The rise of sophisticated AI tools, such as GenAI and LLMs, has introduced new challenges in the fight against fraud. However, these same technologies also offer powerful solutions for detecting and preventing fraudulent activities.

By leveraging AI to analyze text for anomalies, flag suspicious transactions, and automate case summarization, organizations can enhance their fraud detection capabilities and stay one step ahead of fraudsters.


You can find more about detecting and fighting fraud here.
