May 13, 2026

Thomson Reuters Standard for High Stakes AI

As AI moves from experimentation into everyday professional use, a higher standard is required.  

Not all AI is used the same way, and it cannot be held to a single standard. AI used in contexts carrying professional liability must meet a higher standard than general productivity tools. Where outputs influence legal judgments, financial disclosures, regulatory filings, or client advice, “almost right” simply is not good enough. In the moments that matter, results must be accurate, transparent and verifiable under real-world scrutiny.  

Fiduciary-grade AI is Thomson Reuters' standard for how AI should work in high-stakes professions. It is the minimum requirement for AI that supports professionals operating under enforceable duties of care and regulatory oversight, whether in the courtroom or the boardroom.

Almost Right Is Not Good Enough

Before professionals working in zero-margin-for-error fields can fully embrace deeper AI integration into their everyday workflows, they need to know that the AI they are using stands up to scrutiny and that its outputs can be verified.

For generations, professional trust has been defined by standards, certification, and fiduciary duty. When someone carries a designation like CPA or attorney, we understand both their qualifications and the obligations that govern how they must act. 

That same standard must now be applied to AI used in professional work. If we expect AI to take on a more meaningful share of human work, then just as we assess a person's fitness for a job, we must also validate AI's fitness for purpose.

High-stakes professional work requires a different standard

In regulated professions that prioritize accuracy, accountability, and trust, AI must be built to a fiduciary-grade standard. That means real, factual, authoritative sources, traceable reasoning, and transparent outputs that are ready for human review and verification under professional and regulatory expectations.  

As AI takes on more responsibility in completing professional work, it does not assume any additional accountability. That accountability remains entirely human. Professionals remain responsible for the judgments made, the advice delivered, and the outcomes that follow. Fiduciary-grade AI is designed to support human judgment, not replace it, by producing work that can be examined, explained, and defended under real-world professional and regulatory scrutiny.

The Five Principles of Fiduciary-Grade AI 

Fiduciary-grade AI is defined not just by what it produces, but by what it is allowed to access, retain, and rely upon in generating outputs that inform professional judgment.

  •  AI grounded in authority  

A fiduciary-grade AI system must derive its substantive outputs from authoritative, curated, and domain-specific content — not from general-purpose information scraped from the open internet. Every material output must be traceable to a source that a qualified professional can independently locate, cite and verify. 

  •  Data privacy and security are imperative

A fiduciary-grade AI system must treat all user-submitted data, including organizational data, as confidential by design. Privacy and security must be structural features of the system’s architecture, not policy overlays or configurable options.  

  •  Built with human expertise, not just human oversight

The skills and capabilities that power professional work must be designed, tested, and continuously refined with meaningful involvement from credentialed subject matter experts in the relevant professional domain. This ensures AI reflects true expertise, not approximation. When ambiguity or risk arises, the system must recognize its limits and bring professionals back in rather than generating an output that overstates its reliability, keeping accountability human and outcomes defensible.

  •  Access to the right context for professional work

In addition to authoritative content, AI must operate with the full context required to complete professional work. Only when agents can access, understand, and act on the specific knowledge sources, systems, tools, and work products that professionals rely on can they complete complex, multi-step tasks.

  •  Transparent, verifiable reasoning 

By clearly surfacing and referencing the sources it relies on, AI must be able to provide a reviewable trail of what the system did and what it relied on, sufficient to allow a qualified professional — and, where applicable, a regulator, court, or auditor — to evaluate the basis for the output and determine whether the result is reliable and defensible. 

This is the standard we build to at Thomson Reuters, and the standard delivered through CoCounsel for legal, tax, audit, and compliance professionals. As AI moves deeper into regulated work, the defining question is no longer whether a system can generate an answer; it's whether professionals can verify and stand behind the result.
