Jan 12, 2026

Building Trust in Agentic AI

By Frank Schilder, Senior Principal Scientist

From Principles to Practice 

As artificial intelligence systems become more agentic and capable of multi-step reasoning and autonomous action, the question of trust is shifting from theory to practice. These challenges are the focus of the Trust in AI Alliance, a new forum convened by Thomson Reuters Labs that brings together leading AI researchers and engineers from across industry and academia to define shared approaches for building trustworthy agentic AI systems. 

Learn more about the Trust in AI Alliance  → [Thomson Reuters Convenes Global AI Leaders to Advance Trust in the Age of Intelligent Systems]

This post introduces the first technical topic the Alliance will explore: what it actually takes to engineer trust into AI systems that operate in high-stakes professional environments. 

“Our customers do not use AI for experimentation,” said Joel Hron, Chief Technology Officer at Thomson Reuters. “They use it to make decisions they must be able to explain, defend, and stand behind. As AI systems become more agentic, trust stops being a policy question and becomes an engineering requirement.” 

 

From Trust Principles to System Design 

Thomson Reuters has long operated under the Trust Principles, which include the core values of independence, integrity, and freedom from bias. These values now extend into our AI & Data Ethics Principles, which emphasize fairness, transparency, reliability, and meaningful human involvement. Together, they form the foundation for how we design and deploy AI in our products used in high-stakes professional environments. 

Together, these principles shape how agentic AI systems are constrained, audited, and integrated into professional workflows, not as black boxes, but as accountable collaborators. 

 

When Professionals Ask: Can I Trust This? 

Consider a tax professional using an AI platform to research a complex compliance question. The system may reason across statutes, regulatory guidance, and material that requires some legal interpretation, dramatically accelerating the research process. But speed is not the hard part. The harder question is whether the professional can: 

• Understand which sources informed the answer

• Verify that those sources are authoritative and unchanged

• Trace how the system arrived at its conclusion

• Know where and when human judgment must intervene 

That tension between autonomy and accountability is exactly what defines trust in agentic AI. 

 

Three Technical Challenges That Define Trust 

As part of its initial work, the Trust in AI Alliance will focus on three foundational challenges that determine whether agentic systems deserve professional trust. 

  • Context Integrity: Can the system preserve all critical decision criteria when AI models compress or segment information? 
  • Immutable Provenance: How do we guarantee that cited source text remains unchanged and auditable? 
  • Security Against Adversarial Prompts: How do we protect workflows from malicious inputs without compromising usability? 
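The Alliance has not yet published designs for these challenges, but the immutable-provenance question can be made concrete with a minimal sketch: record a cryptographic digest of each cited passage at answer time, then re-check it at audit time so any later change to the source text is detectable. The function names and the sample passage below are illustrative assumptions, not an actual Thomson Reuters implementation.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a SHA-256 digest of the normalized source text."""
    normalized = " ".join(text.split())  # collapse whitespace before hashing
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def verify_citation(cited_text: str, recorded_digest: str) -> bool:
    """True only if the cited passage still matches the digest captured at answer time."""
    return fingerprint(cited_text) == recorded_digest

# At answer time: the system stores a digest alongside each citation.
source_passage = "Gross income means all income from whatever source derived."
digest = fingerprint(source_passage)

# At audit time: an unmodified passage verifies; any edit breaks verification.
print(verify_citation(source_passage, digest))                # unchanged source
print(verify_citation(source_passage + " (amended)", digest)) # tampered source
```

A production system would also need to anchor these digests somewhere tamper-evident (for example, an append-only log), since a digest stored next to mutable text proves nothing on its own.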

 

Why This Matters 

Agentic AI promises meaningful gains in productivity and insight. But greater autonomy also increases risk. Without clear guardrails, transparency, and accountability, trust erodes and adoption suffers. By convening the Trust in AI Alliance, Thomson Reuters Labs is creating space for candid, technical discussion about how trust can be engineered, not assumed, as AI systems evolve. The goal is not to slow innovation, but to ensure that capability and responsibility advance together. 

As the Alliance’s work begins, one principle anchors every conversation: trust must be designed into agentic systems from the start, never added after the fact. 

 

Looking Ahead 

The upcoming Trust in AI Alliance workshop will explore these challenges and opportunities. Our goal is to ensure that tomorrow’s agentic systems earn trust every step of the way. 

Join us as we frame the conversation and set the stage for collaborative solutions. Learn more about our commitments:

Thomson Reuters AI Principles | Thomson Reuters Trust Principles 
