Compliance & Risk

Fraud risk must be managed as pandemic fuels AI use for customer onboarding, experts say

Brett Wolf  Regulatory Intelligence

· 5 minute read


As financial institutions increasingly rely on artificial intelligence and other emerging technologies to speed customer onboarding, fraud risk must still be addressed

Wall Street firms have turned to artificial intelligence, machine learning, and other emerging technologies to increase the efficiency of customer onboarding and verification during the COVID-19 pandemic, making it vital that firms monitor and test these tools, an official with the brokerage industry’s self-regulator said.

“You want to avoid the ‘set it and forget it’ mentality where you have this vendor doing what it’s doing and I’m relying on it entirely. You want to be reasonable and just make sure you’re kind of testing these things,” said Jason Foye, senior director of the anti-money laundering investigative unit at the Financial Industry Regulatory Authority (FINRA). Foye’s comments came during a remote event held this past week by the Florida International Bankers Association (FIBA).

The new technology has helped brokerage firms and other financial services companies meet their compliance obligations as the pandemic fueled a rise in online account openings, said Sergio Alvarez-Mena, a partner with the law firm Jones Day. Alvarez-Mena served as moderator for FIBA’s webinar. “A lot of folks have been very creative, very innovative and used artificial intelligence and similar technologies to make up for the inability to physically kick the tires.”

From both a cybersecurity perspective and a reliability perspective, firms need to ensure that any technological tool is “doing what it’s supposed to be doing,” Foye explained. “Is the machine learning what it’s supposed to be learning? Is it working properly and secure from a cybersecurity standpoint?”

FINRA is “seeing more firms utilize vendors that fall into robotics or machine learning or AI to focus on client onboarding and verification,” Foye added. “You’ll see firms that offer tools for onboarding in three minutes, verification in 90 seconds — whatever the case may be.”

He noted, however, that “fraudsters and bad actors are always going to try to exploit whatever the new environments are.”

Risk of synthetic ID fraud

A key risk during onboarding is attacks by synthetic-identity fraudsters, Foye said. Rather than stealing an existing identity, synthetic-identity criminals create new ones, combining real and fake personal identifiers such as names, addresses, dates of birth, and Social Security numbers to generate identities that provide a veil for illicit transactions.
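Foye did not describe any detection mechanics, but one widely cited back-end signal of synthetic identities is the same Social Security number surfacing under conflicting names or dates of birth. The Python sketch below is purely illustrative: the record fields and the `flag_shared_ssns` helper are assumptions for this example, not any vendor's actual interface.

```python
# Illustrative check: flag any SSN tied to conflicting identities, a
# classic back-end signal of synthetic-identity activity.
from collections import defaultdict

def flag_shared_ssns(applicants):
    """Group onboarding records by SSN and return SSNs that appear
    under more than one name or date of birth."""
    by_ssn = defaultdict(list)
    for record in applicants:
        by_ssn[record["ssn"]].append(record)

    flagged = []
    for ssn, records in by_ssn.items():
        names = {r["name"] for r in records}
        dobs = {r["dob"] for r in records}
        if len(names) > 1 or len(dobs) > 1:
            flagged.append((ssn, records))
    return flagged

# Toy records; real fields would come from the firm's onboarding data.
applicants = [
    {"ssn": "123-45-6789", "name": "Jane Roe", "dob": "1980-02-01"},
    {"ssn": "123-45-6789", "name": "John Doe", "dob": "1975-07-19"},  # conflict
]
for ssn, records in flag_shared_ssns(applicants):
    print(f"SSN {ssn} appears under {len(records)} conflicting identities")
```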

“You’ll see situations where fraudsters and other bad actors create fake documents in an attempt to trick (AI and other technology-based systems) into thinking this is the person who they say it is, and verifying it’s the right person,” Foye said. “They’re very creative and clever about how they go about tricking these systems. The documents they use look real, and at times, they can… confuse or trick systems that these tools are relying on. It’s a complicated risk.”
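Neither Foye nor FINRA describes how verification vendors detect manipulated documents. As a loose illustration only, one weak heuristic a firm could apply on its own is checking an uploaded image's EXIF metadata for traces of editing software. The filename and the use of the Pillow library here are assumptions, and metadata is trivially stripped, so this is at best a supplementary signal.

```python
# Hedged sketch: inspect an uploaded document image's EXIF "Software" field
# for traces of editing tools. Metadata is easy to strip, so treat any hit
# as a prompt for human review, not proof of manipulation.
from PIL import Image  # assumes the Pillow library is installed

EXIF_SOFTWARE_TAG = 0x0131  # standard EXIF tag for the "Software" field

def editing_trace(path):
    """Return the EXIF Software value if present (e.g. an editor name)."""
    exif = Image.open(path).getexif()
    return exif.get(EXIF_SOFTWARE_TAG)

software = editing_trace("uploaded_id.jpg")  # hypothetical filename
if software and any(s in software for s in ("Photoshop", "GIMP")):
    print(f"Review: document image shows editing-software trace ({software})")
```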

FINRA, the U.S. Securities and Exchange Commission, and the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) have all publicly warned of synthetic identity fraud.

Remote working dangers

The overall risk situation deteriorated when the COVID-19 pandemic drove many compliance teams to work remotely, Foye said. “Everyone went remote, and bad actors evolved to try and attack that new environment. These synthetic identification frauds are one (approach),” he noted, adding that FINRA will, in its oversight of member firms, look for them to have “some type of reasonable monitoring and testing of the product itself.”

“Is it doing the things it’s supposed to be doing? Is it flagging the things I expect it to be flagging? And (is there) some reasonable governance, whatever that may be for the firm?” he asked.
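FINRA does not prescribe a specific testing method, and Foye spoke only in general terms. As a minimal sketch of the idea, a firm might periodically seed cases with known expected outcomes through its vendor tool and compare the results. Everything here (`run_vendor_tests`, the test cases, the dummy verifier) is hypothetical, standing in for whatever interface a real vendor exposes.

```python
# Minimal sketch of periodic vendor testing: run seeded cases with known
# expected outcomes and report where the tool's output diverges.

TEST_CASES = [
    {"id": "clean-applicant", "expect_flag": False},
    {"id": "known-synthetic-profile", "expect_flag": True},
    {"id": "manipulated-document-sample", "expect_flag": True},
]

def run_vendor_tests(verify_fn, cases):
    """Return every seeded case where the vendor tool disagrees with the
    firm's expectation -- the guard against 'set it and forget it'."""
    failures = []
    for case in cases:
        if verify_fn(case) != case["expect_flag"]:
            failures.append(case)
            print(f"MISMATCH: {case['id']} (expected flag={case['expect_flag']})")
    return failures

# Dummy verifier so the sketch runs; a real test would call the vendor's API.
def dummy_verify(case):
    return "synthetic" in case["id"]

failures = run_vendor_tests(dummy_verify, TEST_CASES)
# The dummy misses 'manipulated-document-sample' -- exactly the kind of
# gap this sort of testing is meant to surface before regulators do.
```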

Foye shared some “red flags” that firms should be alert for in their vendor management:

      • Name spellings that differ from government-issued identification.
      • Pictures or other elements of documentation that are blurry or appear to have been manipulated.
      • A document that does not pass the smell test: the corners do not look right, or there is shadowing or a double face.
      • Signs of digital manipulation of a picture or other information.
      • IP addresses associated with a customer’s log-ins that do not match the address of the customer being onboarded, or customer log-ins from a single IP address across multiple, seemingly unrelated accounts (a code sketch of these IP-based checks appears below).

“These are just examples of red flags firms can kind of look for on the back end to say, ‘Is anything getting through the system that we need to be aware of and work with the vendor on to try to eliminate over time?’” Foye said.
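The two IP-based red flags on Foye's list lend themselves to a straightforward back-end check. The sketch below is a hypothetical illustration rather than a production control: `ip_red_flags`, the region codes, the threshold, and the toy geo-IP lookup are all assumptions; a real firm would plug in a commercial geolocation service.

```python
# Hypothetical sketch of the two IP-based red flags: a login geolocation
# that conflicts with the customer's stated address, and a single IP
# shared across multiple, seemingly unrelated accounts.
from collections import defaultdict

def ip_red_flags(logins, customer_regions, geolocate, shared_ip_threshold=3):
    """logins: iterable of (account_id, ip) pairs.
    customer_regions: account_id -> region stated at onboarding.
    geolocate: ip -> region; stands in for a real geo-IP service."""
    flags = []
    accounts_by_ip = defaultdict(set)

    for account_id, ip in logins:
        accounts_by_ip[ip].add(account_id)
        # Red flag 1: login location does not match the onboarding address.
        if geolocate(ip) != customer_regions.get(account_id):
            flags.append(("geo_mismatch", account_id, ip))

    # Red flag 2: one IP used across multiple, seemingly unrelated accounts.
    for ip, accounts in accounts_by_ip.items():
        if len(accounts) >= shared_ip_threshold:
            flags.append(("shared_ip", ip, sorted(accounts)))
    return flags

# Toy data: documentation-range IPs and a dict standing in for geo-IP.
regions = {"acct-1": "US-FL", "acct-2": "US-NY", "acct-3": "US-TX"}
geo = {"203.0.113.7": "RO", "198.51.100.2": "US-NY"}.get
logins = [("acct-1", "203.0.113.7"), ("acct-2", "203.0.113.7"),
          ("acct-3", "203.0.113.7"), ("acct-2", "198.51.100.2")]
print(ip_red_flags(logins, regions, geo, shared_ip_threshold=3))
```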

Jones Day’s Alvarez-Mena agreed, saying that synthetic-identity fraud has reduced the labor costs for criminals who previously had to hire “mules” to walk into the bank and try to open accounts. “Now it’s all smash-and-grab from… lord knows where,” he said.

Whenever a firm uses a third-party vendor to assist with client onboarding, suspicious activity monitoring, or another compliance-oriented task that AI and machine learning can support, “vendor management is always going to be critical,” Foye added.
