
New data reveals AI governance gap between policy and practice, creating ESG risks

Katie Fowler  Director of Responsible Business / The Thomson Reuters Foundation

· 6 minute read


The rapid adoption of AI without robust governance frameworks is creating ESG risks, and companies must prioritize transparent and ethical AI practices to meet evolving investor expectations

Key highlights:

      • The governance-implementation gap is alarming — While nearly half of companies have AI strategies and 71% include ethical principles, a massive disconnect in execution persists.

      • AI governance is now a material investor risk — AI disclosure among S&P 500 companies jumped to 72% in 2025 from 12% in 2023, and investors are treating AI governance as a critical factor in overall corporate governance.

      • Regional disparities signal competitive risks — European, Middle Eastern, and African companies are leading in AI governance (driven by regulatory pressure), while only 38% of US companies have published AI policies despite being innovation leaders.


Groundbreaking analysis of 1,000 companies indicates a widening chasm between the speed at which businesses are embracing AI and their preparedness to govern it effectively. These findings from the Thomson Reuters Foundation’s AI Corporate Data Initiative (AICDI), which offers a panoramic view across 13 sectors, are a wake-up call for every CEO, board member, and investor.

Indeed, nearly half (48%) of the companies sampled disclosed that they had AI strategies or guidelines in place, yet significant transparency gaps related to the environmental, social and governance (ESG) impacts of AI adoption remain.

When “ethical” principles lack substance

It is encouraging that 71% of companies with an AI strategy articulate principles such as ethical, safe, or trustworthy AI, because this signals an awareness of the critical conversations happening around responsible AI. However, the AICDI data reveals a significant gap between stated principles and actual practice, more specifically:

      • Environmental blind spots — A staggering 97% of companies failed to consider the environmental impact of their AI systems, such as energy consumption and carbon footprint, when making deployment decisions. As AI models grow in complexity and scale, their energy demands will only increase, and investors are likely to treat green AI as non-negotiable in the future.
      • Narrow social lens could open up reputational issues — More than two-thirds (68%) of companies with AI strategies did not adequately assess the broader societal implications of their AI technologies. Failure to understand and mitigate potential negative impacts on communities, vulnerable populations, or democratic processes is a recipe for reputational damage and legal challenges across the human dimensions of AI. Indeed, investors are growing more sophisticated in their understanding of these systemic risks.
      • Governance on paper and not in practice — While 76% of companies with an AI strategy reported management-level oversight, only 41% made their AI policies accessible to employees or required their acknowledgement. That means these policies are just words on paper if they are not understood, embraced, and actively practiced by those on the front lines of AI development and deployment. This gap in governance can lead to inconsistencies, unforeseen risks, and a fundamental breakdown in trust, both internally and externally.

Gaps in AI governance exist across regions and sectors

The AICDI data reveals fascinating regional and sectoral differences as well. For instance, companies in Europe, the Middle East, and Africa are generally ahead in publishing AI policies and establishing dedicated AI governance teams — action that is likely driven by the European Union’s looming AI Act. This highlights the proactive stance some regions are taking and offers a glimpse into what might become a global standard.

Despite the United States being a hub for AI innovation, only 38% of companies in the Americas published an AI policy. This discrepancy suggests a potential future competitive disadvantage for those lagging in governance.

Not surprisingly, sectors also varied in corporate oversight of AI initiatives. Financial, communication services, and information technology firms were more likely to have responsible AI teams than companies in energy and materials. This makes sense given their direct engagement with data and often consumer-facing AI applications, but it again points to a broader need for cross-sectoral AI governance best practices.

How companies can meet investor expectations

AI has rapidly become a mainstream enterprise risk. Fully 72% of S&P 500 companies disclosed at least one material AI risk in 2025, up from just 12% in 2023, according to the Harvard Law School Forum on Corporate Governance.

To attract and retain investor confidence, companies need to take concrete steps, including:

      1. Conducting a comprehensive AI audit — Companies need a thorough understanding of where AI is currently deployed across their products, operations, and services. The AICDI offers a tool to help with this, allowing companies to evaluate their current AI governance maturity and benchmark themselves against peers.
      2. Establishing robust, transparent, and accessible AI governance frameworks — Companies need to move beyond vague principles by developing clear, actionable policies that address environmental impact, societal implications, data privacy, fairness, and accountability. Critically, these policies must be accessible to all employees, and their acknowledgement should be a requirement. Training and continuous education are paramount in order to embed these principles into daily operations.
      3. Proactively disclosing AI governance practices — Companies should seek to anticipate investors’ concerns by incorporating specific disclosures on AI oversight mechanisms, transparency measures (including environmental and risk assessments), and how they’re preparing for evolving regulatory landscapes. Companies that showcase their commitment to responsible AI as a strategic advantage will gain stakeholder trust.
      4. Embracing industry standards and collaboration — By using global frameworks, such as the UNESCO global standard on AI ethics (which grounds the AICDI’s work), companies can strengthen standardization efforts. They should also participate in collaborative efforts and industry forums to share best practices and collectively raise the bar for responsible AI.
      5. Comparing your performance with peers — Companies can benchmark their responses against sector and regional peers. Identifying leaders and laggards helps a company understand where it stands and where it needs to improve. AI is an evolving field, and therefore, corporate AI governance frameworks must evolve as well — and the key ingredient for this is responsible innovation.

By any measure, AI is transforming our world; however, its benefits will only be fully realized if companies prioritize their responsible governance. For investors, AI governance is fast becoming a material risk and opportunity. And for companies, it’s no longer an option but rather a strategic imperative that can go a long way toward building trust, mitigating risks, and securing a sustainable future.


You can learn more about the Thomson Reuters Foundation, the corporate foundation of Thomson Reuters, here
