
Artificial intelligence use poses an ESG headache for global financial industry

Lindsey Rogerson  Senior Editor / Thomson Reuters

· 6 minute read

Artificial intelligence (AI) is often touted as the cure-all that will enable financial services firms to cope with the looming data onslaught stemming from environmental, social & governance (ESG) regulation. Yet ESG also poses an existential threat to the financial services industry’s use of AI.

The European Union’s Sustainable Finance Disclosure Regulation has required asset management firms to begin collecting millions of data points from the companies in which they invest, and the forthcoming Corporate Sustainability Reporting Directive will only add to the volume. Further, there is the data being collected under the Task Force on Climate-Related Financial Disclosures (TCFD) initiative, as well as the International Sustainability Standards Board’s plans to create a baseline for ESG reporting.

Taken together, it becomes clear that AI-enabled systems will be essential to firms’ efforts to make sense of — and profit from — all these requirements.

Potential problems for financial services firms using AI lurk beneath all three pillars of E, S, and G, however. The carbon footprint from storing and processing data is enormous and growing; algorithms have already been shown to discriminate against certain groups in the population; and a lack of technology skills, in both the senior management ranks and the general workforce, leaves firms vulnerable to mistakes.

Environmental: Carbon footprint of energy use

According to the International Energy Agency, electricity consumption from cooling data centers could account for as much as 15% to 30% of a country’s entire usage by 2030. Running the algorithms that process the data consumes still more energy.

Training AI for firms’ use has a big environmental impact, according to Tanya Goodin, a tech ethicist and fellow of the Royal Society of Arts in London. “Training artificial intelligence is a highly energy-intensive process,” Goodin says. “AI [models] are trained via deep learning, which involves processing vast amounts of data.”

Recent estimates from academics suggest that the carbon footprint from training a single AI model is 284 tons, equivalent to five times the lifetime emissions of the average car. Separate calculations put the energy usage of one supercomputer on a par with that of 10,000 households. Yet this huge electricity use is often hidden from carbon accounting. Where an organization owns its data centers, the carbon emissions will be captured and reported in its TCFD scope 1 and 2 emissions. If, however — as happens at an increasing number of financial firms — data centers are outsourced to a cloud provider, the emissions drop down to scope 3 in TCFD reporting, which tends to take place only on a voluntary basis.
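
To see where estimates of this magnitude come from, consider the underlying arithmetic: energy consumed multiplied by the carbon intensity of the electricity grid supplying it. The sketch below illustrates that calculation with entirely hypothetical inputs; the cluster power draw, training time, data-center overhead (PUE), and grid intensity are assumptions for illustration, not figures from the studies cited above.

```python
# Illustrative only: back-of-the-envelope CO2 estimate for an AI training run.
# Every input here is an assumption, not a measured value.

def training_emissions_tonnes(power_draw_kw: float,
                              training_hours: float,
                              pue: float,
                              grid_kg_co2_per_kwh: float) -> float:
    """Energy used (kWh, including cooling overhead via PUE) times
    grid carbon intensity (kg CO2 per kWh), converted to tonnes."""
    energy_kwh = power_draw_kw * training_hours * pue
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0

# Hypothetical scenario: a 500 kW cluster training for six weeks in a
# data center with a PUE of 1.5, on a grid emitting 0.4 kg CO2 per kWh.
tonnes = training_emissions_tonnes(
    power_draw_kw=500,
    training_hours=6 * 7 * 24,
    pue=1.5,
    grid_kg_co2_per_kwh=0.4,
)
print(f"Estimated training footprint: {tonnes:.0f} tonnes of CO2")
# Prints roughly 302 tonnes -- the same order of magnitude as the
# 284-ton academic estimate quoted above.
```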

“I think it’s a classic misdirection — almost like a magician’s misdirection trick,” Goodin explains. “AI is being sold as a solution to climate change, and if you talk to any of the tech companies, they will say there’s huge potential for AI to be used to solve climate problems, but actually it’s a big part of the problem.”

Social: Discriminatory algorithms & data labeling

Algorithms are only as good as the people designing them and the data on which they are trained, a point acknowledged by the Bank for International Settlements (BIS) earlier this year. “AI/ML [machine learning] models (as with traditional models) can reflect biases and inaccuracies in the data they are trained on, and potentially result in unethical outcomes if not properly managed,” BIS stated.
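
One simple way to see what “properly managed” might involve is a screening check on model outcomes. The sketch below applies the so-called four-fifths rule, a common disparate-impact heuristic (not a BIS requirement, and greatly simplified here), to made-up lending decisions; the group labels and data are invented for illustration.

```python
# Minimal sketch of a disparate-impact check on credit-model decisions.
# The data and the 0.8 threshold (the "four-fifths rule" heuristic)
# are illustrative; real fairness reviews are far more involved.

from collections import defaultdict

# Hypothetical (group, approved) outcomes from a lending model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"Approval rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential adverse impact -- flag the model for human review.")
```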

Kate Crawford, co-founder of the AI Now Institute at New York University, has gone further in warning of the ethical and social risks embedded in many AI systems in her book Atlas of AI. “[The] separation of ethical questions away from the technical reflects a wider problem in the field [of AI], where responsibility for harm is either not recognized or seen as beyond the scope,” Crawford says.

It is perhaps unsurprising, therefore, that mortgage, loan, and insurance firms have already found themselves on the wrong side of regulators when the AI they used to make lending and insurance pricing decisions turned out to have absorbed and perpetuated certain biases.

In 2018, for example, researchers at the University of California, Berkeley found that AI used in lending decisions was perpetuating racial bias. On average, Latino and African American borrowers were paying 5.3 basis points more in interest on their mortgages than white borrowers. In the UK, research by the Institute and Faculty of Actuaries and the charity Fair By Design found that individuals in lower-income neighborhoods were being charged £300 more a year for car insurance than people with identical vehicles living in more affluent areas.
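
To put 5.3 basis points in context, the sketch below works through what that margin costs a borrower, using the standard fixed-rate amortization formula. The loan size, term, and base rate are hypothetical, chosen for illustration rather than taken from the Berkeley study.

```python
# What an extra 5.3 basis points costs a borrower, using standard
# amortization arithmetic. The loan figures below are hypothetical.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 300_000, 30      # assumed loan size and term
base_rate = 0.045                   # assumed base rate: 4.50%
biased_rate = base_rate + 0.00053   # plus the 5.3 basis points from the study

extra = (monthly_payment(principal, biased_rate, years)
         - monthly_payment(principal, base_rate, years))
print(f"Extra cost: ${extra:.2f} per month, "
      f"${extra * years * 12:,.0f} over the life of the loan")
# On these assumptions: roughly $9 a month, or about $3,400 over 30 years.
```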

The UK Financial Conduct Authority (FCA) has repeatedly warned firms that it is watching the way they treat their customers. In 2021, the FCA revised pricing rules for insurers after research showed that pricing algorithms were generating lower rates for new customers than those given to existing customers. Likewise, the EU’s AI legislative package looks set to label algorithms used in credit scoring as high-risk and impose strict obligations on firms’ use of them.

Financial firms also need to be mindful of how data has been labeled, Goodin says. “When you build an AI, one of the elements that [is] still quite manual is that data has to be labeled. Data labeling is being outsourced by all these big tech companies, largely to Third World countries paying [poorly],” she notes, adding that these situations are akin to “the disposable fashion industry and their sweatshops.”

Governance: Management does not understand the technology

Turning to governance, the biggest issue for financial services firms is a lack of technologically skilled staff, and that includes those at the senior management level.

“There is a fundamental lack of expertise and experience in the investment industry about data,” says Dr. Rory Sullivan, co-founder and director of Chronos Sustainability and a visiting professor at the Grantham Research Institute on Climate Change at the London School of Economics.

Investment firms are blindly taking data and using it to create products without understanding any of the uncertainties or limitations that might be in the data, Sullivan says. “So, we have a problem of capacity and expertise, and it’s a very technical capacity issue around data and data interpretation,” he adds.

Goodin agrees, noting that all boards at financial firms should be employing ethicists to advise on the use of AI. “Quite a big area in the future is going to be around AI ethicists working with corporations to determine the ethical stance of the AI that they’re using,” she says.

“So, I think bank boards need to think about how they’ll access that.”
