During a recent online ACAMS panel, experts discussed what anti-money laundering professionals need to do to protect their institutions and clients from cybercrime
Deepfake technology, a type of artificial intelligence that allows a user to create a highly convincing replica of a person or image, has gained traction in the cybercriminal world. You may even have seen some of the more innocuous examples, such as Queen Elizabeth dancing on a tabletop in an altered video or the Rev. Martin Luther King Jr. reanimated.
The bigger question raised by these tactics is how criminal actors will use them in the future. Fraudsters are already leveraging the technology to trick banks, scam unsuspecting users, and simply wreak havoc in an already chaotic and confusing online world. In one prominent example, a company CEO was scammed out of $243,000 through an AI-generated voice. The common thread among these scams is that financial institutions sit at the center, unwittingly transferring funds to the criminals.
During a recent online panel discussion at the Association of Certified Anti-Money Laundering Specialists (ACAMS) Hollywood event, "Fraud in the 21st Century: How to Fight Bad Bots, Synthetic Identities and Deepfake AI," experts discussed what anti-money laundering (AML) professionals need to do to protect their institutions and clients from sophisticated cybercrime.
Financially motivated cybercriminals
When we think of cybercrime, we have to remember that the criminals are operating like a sophisticated business, says panelist Elizabeth Roper, Bureau Chief of the Cybercrime and Identity Theft Bureau at the New York County District Attorney’s Office. There is a high level of organization behind these operations, “whether we are talking about a transnational cybercrime ring operating a botnet monetizing personal identifying information (PII) or 20-something-year-old hackers trying to steal cryptocurrency.”
Roper urged the audience not to underestimate the end users of PII either, meaning the street-level criminals who purchase PII and later use it to commit large-scale fraud. “There is a tendency to think of those groups as less sophisticated and less organized because they are not as technical,” she says. “I think that is a mistake.”
How sophisticated are these criminals? They advertise and recruit on social media, use secure communications and VPNs, and optimize the user experience on their websites and forums.
A trending type of AI-enabled crime is the use of bots in account takeovers, in which someone other than the account holder accesses the account without permission. Account takeover combines identity theft and fraud. It isn’t new, but fraudsters’ use of bots is a novel technique, and it has been yielding lucrative results. Hackers use stolen credentials purchased on the Darknet, then attempt mass log-ins to gain access to the accounts. But it isn’t humans doing the work; it’s automated programs known as internet robots, or bots for short.
“Bots are being relied upon more and more for this criminal activity,” says panelist Rebecca Schauer Robertson, Senior Vice President and Director of the Financial Crimes Investigation Unit at Atlantic Union Bank. “They mimic human behavior and are easy to deploy.” These bots can also act more efficiently than humans, initiating multiple attacks across multiple applications within seconds.
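The panelists did not prescribe specific controls, but the pattern they describe, many log-in attempts fired off within seconds, is one that velocity checks can surface. The sketch below is a minimal, hypothetical illustration (the event fields, window, and threshold are assumptions, not anything recommended at the panel) of flagging source IPs that attempt logins faster than a human plausibly could:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical login events: (timestamp, source_ip, account_id, success)
EVENTS = [
    (datetime(2021, 3, 1, 9, 0, 0), "203.0.113.7", "acct-001", False),
    (datetime(2021, 3, 1, 9, 0, 1), "203.0.113.7", "acct-002", False),
    (datetime(2021, 3, 1, 9, 0, 2), "203.0.113.7", "acct-003", False),
    (datetime(2021, 3, 1, 9, 0, 3), "203.0.113.7", "acct-004", True),
    (datetime(2021, 3, 1, 9, 5, 0), "198.51.100.9", "acct-010", True),
]

WINDOW = timedelta(seconds=30)   # sliding look-back window
MAX_ATTEMPTS = 3                 # attempts per IP tolerated within the window

def flag_bursts(events, window=WINDOW, max_attempts=MAX_ATTEMPTS):
    """Return source IPs whose login velocity exceeds a human-plausible rate."""
    recent = defaultdict(deque)  # ip -> timestamps of recent attempts
    flagged = set()
    for ts, ip, _account, _success in sorted(events):
        attempts = recent[ip]
        attempts.append(ts)
        # Drop attempts that have fallen out of the sliding window.
        while attempts and ts - attempts[0] > window:
            attempts.popleft()
        if len(attempts) > max_attempts:
            flagged.add(ip)
    return flagged

if __name__ == "__main__":
    print("Suspicious source IPs:", flag_bursts(EVENTS))
```

A real deployment would tune the window and threshold against the institution’s own traffic and combine the signal with device and behavioral data, since sophisticated bots deliberately pace themselves to look human.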
Panelists noted that another scam relies on social engineering: fraudsters harvest your account login information or PII through social media, or simply use a deepfake or mimicked voice to trick you into giving up your information.
Pandemic makes detection more difficult
Panelists uniformly agreed that the COVID-19 pandemic amplified cybercrime. “We are in uncharted waters, even though we have been preparing for this type of event for a decade,” says James Candelmo, CAMS, Senior Vice President and Chief AML Officer at Capital One Bank. Panelists described how, after billions of dollars were pumped into the economy as financial stimulus or assistance, changes in consumer spending behavior, fed by a move to e-commerce shopping, helped fuel the fastest growth of electronic banking ever seen. But the group that moved to online banking most quickly, those 55 and older, is also the most vulnerable to fraud.
Panelists also discussed why bot-related crime is harder to stop. Simply put, bots make it easier for criminals to stay anonymous, hiding who is behind a malicious attack and where the IP address is located. “Those are the things we are losing visibility into,” explains Atlantic Union’s Schauer Robertson. Panelists further agreed that while technology is being used against financial institutions, it should also be used by the banks themselves to become more proactive. “We need more training in the banks for how these virtual opportunities are occurring,” she adds. “If we don’t understand the components of the fraud that is happening from inception, we won’t know how to provide the correct resources from a staffing or technology standpoint.”
A good place to start, for example, is for fraud teams to review their current processes and analyze failed login attempts and consumer complaints around common account takeover patterns, as illustrated in the sketch below.
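As a concrete illustration of that starting point, this hypothetical pandas-based sketch (the column names, threshold, and sample data are invented for illustration, not drawn from the panel) counts failed logins per account per day and cross-references accounts that also appear in unauthorized-access complaints, producing a small review queue:

```python
import pandas as pd

# Hypothetical extracts from a bank's own systems.
failed_logins = pd.DataFrame({
    "account_id": ["A1", "A1", "A1", "A2", "A3", "A3"],
    "timestamp": pd.to_datetime([
        "2021-03-01 09:00", "2021-03-01 09:01", "2021-03-01 09:02",
        "2021-03-02 14:10", "2021-03-03 08:30", "2021-03-03 08:31",
    ]),
})
complaints = pd.DataFrame({
    "account_id": ["A1", "A4"],
    "category": ["unauthorized access", "billing dispute"],
})

# Count failed login attempts per account per day.
daily_failures = (
    failed_logins
    .assign(day=failed_logins["timestamp"].dt.date)
    .groupby(["account_id", "day"])
    .size()
    .reset_index(name="failed_attempts")
)

# Flag accounts with an unusual spike and a related complaint on file.
THRESHOLD = 3  # assumed cutoff; a real program would tune this to its own baseline
spikes = daily_failures[daily_failures["failed_attempts"] >= THRESHOLD]
review_queue = spikes.merge(
    complaints[complaints["category"] == "unauthorized access"],
    on="account_id",
    how="inner",
)
print(review_queue)
```

Even a simple join like this can help a fraud team see whether its staffing and technology match the volume of takeover attempts it is actually facing.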
How to be proactive against cyber-enabled fraud
Another panelist, Marcy Forman, Managing Director for Global Financial Crimes Investigations and Intelligence at Citigroup, says that one way financial institutions can be more proactive against cybercrime is to use their own data. Banks should consider building a data analytics team that can take large amounts of data and identify trends and typologies in fraud, sanctions, and anti-money laundering to get ahead of account takeovers and business email compromise, Forman says. She adds that institutions need the right people and the right technology to create a “fusion” of resources to fight the bad actors.
Another way to fight cybercrime is to use machine learning against criminals. There is a tremendous opportunity to implement machine learning not just in alert generation but also in account security, Capital One’s Candelmo explains. That would entail the technology “learning” to identify consumers’ spending patterns and recognize any deviations. “It would create another layer of security, but it would also create friction,” he says. “But I think the consumer would welcome it in this situation.”
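Candelmo did not describe a specific model, but one common way to “learn” a customer’s spending pattern and flag deviations is an unsupervised anomaly detector trained on that customer’s own transaction history. The sketch below is a minimal illustration of the idea, using scikit-learn’s IsolationForest on invented sample data with illustrative features; it is not a description of any bank’s actual system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical history for one customer: [amount_usd, hour_of_day, merchant_category_code]
history = np.column_stack([
    rng.normal(45, 15, size=500).clip(min=1),   # typical purchase amounts
    rng.normal(13, 3, size=500).clip(0, 23),    # purchases mostly made midday
    rng.choice([5411, 5812, 5541], size=500),   # grocery, restaurants, fuel
])

# Learn the customer's normal spending pattern.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# Score new transactions: -1 marks a transaction that deviates from the learned pattern.
new_transactions = np.array([
    [52.0, 12.0, 5411],     # in line with normal behavior
    [2400.0, 3.0, 6051],    # large cash-like transfer at 3 a.m.
])
print(model.predict(new_transactions))  # e.g. [ 1 -1 ]
```

The friction Candelmo mentions shows up in how such a flag is handled: a step-up authentication prompt or a hold on the transaction adds a layer of security, at the cost of occasionally interrupting a legitimate purchase.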