
Wall Street research & ChatGPT: Firms face legal risks over transparency, client relations

Henry Engler, Thomson Reuters Regulatory Intelligence

· 5 minute read

How will ChatGPT and AI change the way Wall Street investment houses generate market analysis and other reports? Regulators may be interested to know.

The one area of Wall Street that is ripe for artificial intelligence (AI) disruption is investment research — the reams of reports churned out daily by legions of analysts. When considering applying ChatGPT or other AI applications to research content, however, Wall Street investment banks and other financial services firms might be well advised to pause and think through some of the unclear and thorny legal risks — an area in which technology appears to be running ahead of the law.

There seems little question that AI will lead to an upheaval among U.S. investment banks and brokerage firms. In a recent report, Goldman Sachs estimated that 35% of employment in business and financial operations is exposed to so-called generative artificial intelligence, which can generate novel, human-like output rather than merely describing or interpreting existing information. Indeed, ChatGPT is a generative AI product from research laboratory OpenAI.

While the Goldman Sachs analysis did not drill down to AI’s specific impact on investment research, Joseph Briggs, one of the report’s authors, said that “equity research is a bit more highly exposed, at least on an employment-weighted basis.”

ChatGPT & Fedspeak

There are many questions over how far AI applications can go in replacing human input and analysis, but new academic research suggests that ChatGPT can perform certain Wall Street tasks just as well as experienced analysts — even those tasks that may appear more nuanced in nature.

A new study from the Federal Reserve Bank of Richmond used Generative Pre-trained Transformer (GPT) models to analyze the technical language used by the Federal Reserve to communicate its monetary policy decisions. Experts on Wall Street whose job it is to predict future monetary policy decisions — also known as Fed watchers — apply a blend of technical and interpretive skills in reading through the often opaque and obscure language that Fed officials use in their communications with the public.




GPT models “demonstrate a strong performance in classifying Fedspeak sentences, especially when fine-tuned,” the analysis said, cautioning, however, that “despite its impressive performance, GPT-3 is not infallible. It may still misclassify sentences or fail to capture nuances that a human evaluator with domain expertise might capture.”
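To make the study's task concrete, the sketch below shows the kind of zero-shot sentence classification it describes, using the OpenAI Python client. The label set, prompt wording, and model name here are illustrative assumptions, not the study's actual setup (the paper worked with GPT-3, including fine-tuned variants).

```python
# Minimal sketch: zero-shot classification of a "Fedspeak" sentence into a
# monetary-policy stance label, in the spirit of the Richmond Fed study.
# The labels and prompt are illustrative assumptions, not the study's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["hawkish", "dovish", "neutral"]

def classify_fedspeak(sentence: str) -> str:
    """Ask the model to assign one policy-stance label to a sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model; the study used GPT-3
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify Federal Reserve statements by monetary-policy "
                    f"stance. Reply with exactly one word: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": sentence},
        ],
        temperature=0,  # deterministic output suits a classification task
    )
    return response.choices[0].message.content.strip().lower()

print(classify_fedspeak(
    "The Committee anticipates that ongoing increases in the target range "
    "will be appropriate."
))
```

As the study's caveat suggests, output from a sketch like this still needs a human check: the model can misclassify sentences or miss nuances a domain expert would catch.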

Fed watchers are also known to make errors in judging future monetary policy decisions, which raises questions about how ChatGPT and similar technology could be applied to less-nuanced Wall Street tasks, such as company earnings projections or more fundamental industry research.

Laws regarding AI usage lag innovation

Just how should investment banks and other investment firms approach the use of ChatGPT in their research efforts and communications with clients? The short answer from legal experts: cautiously.

“The state of AI regulation in the U.S. is still in its early stages,” said Mary Jane Wilson-Bilik, a partner at the law firm Eversheds Sutherland in Washington, D.C. “Many regulatory agencies have issued guidelines, principles, statements, and recommendations on AI… but laws specific to AI and ChatGPT are relatively few.”

That is not to say regulations will not be forthcoming. In late April, four U.S. federal agencies issued a joint statement warning of the “escalating threat” from fast-growing artificial intelligence applications, citing a range of potential abuses. The agencies called on firms to actively oversee the use of AI technology, including ChatGPT and other “rapidly evolving automated systems.”




The Securities and Exchange Commission has indicated it plans to issue a rule proposal on decentralized finance tools this year, but it is unclear whether the proposal will require specific disclosures on whether AI/ChatGPT was used when providing advice or reports to customers.

Given the regulatory vacuum on specific rules for Wall Street research, Wilson-Bilik cautioned firms on how they use and disclose AI and ChatGPT in their research products. “While there are no legal requirements just yet to tell clients that AI was used in the writing of a report or analysis, it would be best practice,” she said. “Some firms, out of an abundance of caution, are adding language about the possible use of AI into their online privacy policies.”

While clients do not currently have a legal “right to know” whether AI was used in generating a research report, “risks would arise if the client was misled or deceived on how AI was used,” Wilson-Bilik explained. “If firms use AI in a misleading or deceptive way — for example, by implying or stating that results are human-generated when the results are a hybrid or mostly AI-generated — that would be a problem under the anti-fraud statutes.”

Legal experts also warn that AI tools should be checked for accuracy and for bias. Without robust guardrails, there could well be cause for regulatory action or litigation.
