As the regulatory picture around the use of AI continues to evolve, many financial services firms are already finding use cases for the new technology and implementing it into their operations.
Despite the widespread enthusiasm surrounding artificial intelligence (AI), its generative AI (Gen AI) component, and the enormous potential benefits of both, the laws and regulations governing the new technology remain sparse. But with nearly a dozen US states enacting AI-related legislation, international bodies developing practice standards, the recent White House Executive Order on AI, and the European Union's agreement on its EU AI Act, a roadmap to regulation is taking shape.
Pieces of the global regulatory puzzle are now on the table, and all signs point to a complex patchwork of laws and regulations much like the one that governs cybersecurity and data privacy. Unsurprisingly, developing a comprehensive and cohesive AI regulatory framework will be a lengthy process. Below is a review of potential AI benefits for financial services firms, a look at the likely regulatory path ahead, and some preliminary, high-level compliance suggestions for firms building AI into their operations.
Use cases for AI
AI is already in use at most firms in various forms. Algorithmic trading, risk modeling and surveillance programs are obvious basic examples, and many firms have been using chatbots to assist customers with routine questions and account requests. Such customer engagement tools save time and boost productivity for customer-facing personnel. These tools will spread to other areas, and their complexity and capabilities will only improve as the underlying AI advances.
The use of AI will surely increase operational capacity and productivity within firms. AI models, sometimes called digital workers, are productive from day one, never get sick or take time off, and are often faster and more accurate than their human counterparts. These 24/7 workers will help financial services firms gain efficiency and reduce manual reviews of automated alerts, as AI-augmented tools have been shown to significantly reduce the false positives that require such reviews.
AI may also help unify distinct data silos, surfacing information and correlations that were previously hidden or impossible to detect. Intelligent document processing helps uncover relevant adverse media on subjects of interest in anti-money laundering (AML) and know-your-customer (KYC) investigations, and automated adverse media and sanctions reviews can improve firms' ability to discover hidden risks among current and prospective customers, vendors and third parties.
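As a simplified illustration of the kind of automated screening described above, the sketch below fuzzy-matches names against a small watch list. The names, threshold and matching approach are invented for illustration; production AML/KYC systems use far richer techniques (aliases, transliteration, dates of birth, entity resolution).

```python
# Minimal sketch of automated watch-list screening (illustrative only).
# The watch-list entries and the 0.85 similarity threshold are assumptions.
from difflib import SequenceMatcher

WATCH_LIST = ["Ivan Petrov", "Acme Shell Holdings", "Maria Gonzalez"]  # hypothetical

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watch-list entries whose similarity to `name` meets the threshold."""
    hits = []
    for entry in WATCH_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrov"))   # exact match is returned
print(screen("Iván Petrov"))   # a close misspelling still triggers a hit for review
print(screen("John Smith"))    # no match
```

The point of the design is that near misses are escalated for human review rather than silently dropped, which is where the efficiency gain over purely manual screening comes from.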
AI regulatory roadmap
Like cybersecurity and data privacy, AI is rapidly becoming the next critical compliance obligation for firms, and it will become a permanent pillar of every financial services firm's risk, legal and compliance framework. Also like cybersecurity and data privacy measures, AI regulation will demand simultaneous attention at the global, federal, state and industry-specific levels, as there will likely be multiple overlapping layers, a proverbial patchwork.
Europe has taken the global lead on AI governance with the Dec. 8 agreement on the EU AI Act. The European Commission also recently announced an agreement by G7 leaders on a set of international guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI process. The Principles and the Code of Conduct will complement, at the international level, the legally binding rules that the EU intends to codify in the Act.
Further, nearly a dozen US states have enacted legislation on AI, and legislation is pending in almost a dozen additional states. Many of the measures are included in consumer privacy or industry-specific areas, such as healthcare, government, or insurance, according to the non-profit Electronic Privacy Information Center.
At the federal level in the US, the proposed American Data Privacy and Protection Act outlines rules for AI, including risk assessment obligations that would directly affect companies developing and using AI technologies. The bill, proposed more than a year ago, remains stalled in Congress. The US government has also issued guidance through the National Institute of Standards and Technology (NIST), such as the AI Risk Management Framework and the Secure Software Development Framework.
Several courts have also spoken on the use of AI. The US Court of Appeals for the Fifth Circuit, for example, has proposed requiring lawyers to certify their use of AI in briefs and court filings, and the US Court of International Trade has issued similar warnings to lawyers.
The White House laid out some principles and priorities in the Blueprint for an AI Bill of Rights, published in October 2022. On Oct. 30, 2023, the White House published an Executive Order directing US government departments and agencies to evaluate AI technology and implement processes and procedures to govern its adoption and use. The Executive Order was accompanied by a fact sheet that summarized the 20,000-word order into a more manageable and reader-friendly 1,900 words.
As financial services firms create, establish and adopt new compliance, risk and legal policies and procedures surrounding AI, they must view AI as any other compliance obligation. Although the regulatory picture is uncertain, essential compliance obligations can and should be applied. Core compliance principles such as training, testing, monitoring and auditing are all essential in developing AI policies.
Firms should also involve legal counsel, whether in-house or external, with expertise in the relevant areas, because certain existing contracts with data sources and vendors may prohibit the use of some information by AI models. Copyrighted material is also a concern, so financial services firms should carefully review all existing contracts with their customers and vendors.
Firms must also perform a cost-benefit analysis for their AI projects: however eager they are to innovate with AI, they may find that some legacy solutions are cheaper and more effective. Firms must likewise prioritize data quality and security, because the data used to train AI models determines the accuracy and fairness of those models.
All AI endeavors should be run in parallel with existing programs and thoroughly checked for accuracy, and the processes must be documented and audited. The final output of AI models must include a report that can be saved for audit purposes.
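The parallel-run-and-document approach above can be sketched as a simple "shadow" review: both the legacy process and the AI model score each case, and every decision, including whether the two agree, is written to a savable audit record. The rules, model stand-in and transactions below are invented assumptions, not any firm's actual method.

```python
# Sketch of a parallel ("shadow") run with an auditable output record.
# legacy_flag and ai_flag are hypothetical stand-ins for the two systems.
import json
from datetime import datetime, timezone

def legacy_flag(amount: float) -> bool:
    """Hypothetical legacy rule: flag transactions over a fixed threshold."""
    return amount > 10_000

def ai_flag(amount: float) -> bool:
    """Stand-in for an AI model's decision (here, a slightly different rule)."""
    return amount > 9_500

def shadow_review(transactions: list[dict]) -> list[dict]:
    """Run both systems side by side and log agreement for audit purposes."""
    records = []
    for tx in transactions:
        legacy, ai = legacy_flag(tx["amount"]), ai_flag(tx["amount"])
        records.append({
            "tx_id": tx["id"],
            "legacy_flag": legacy,
            "ai_flag": ai,
            "agree": legacy == ai,
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
        })
    return records

audit_log = shadow_review([
    {"id": "T1", "amount": 12_000},  # both systems flag
    {"id": "T2", "amount": 9_800},   # AI flags, legacy does not: needs human review
    {"id": "T3", "amount": 500},     # neither flags
])
print(json.dumps(audit_log, indent=2))  # a savable report for auditors
```

Disagreements between the two columns are exactly the cases a firm would route to human reviewers while the AI model is being validated against the legacy program.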