Jul 25, 2025

AI Systems Are Only as Secure as Their Supply Chains

Thomson Reuters joins the Coalition for Secure AI (CoSAI) to help define the next frontier in AI risk management

Kirsty Roth, Chief Operations and Technology Officer, Thomson Reuters

As artificial intelligence rapidly reshapes how industries operate, the conversation around AI security is shifting—fast. 

Thomson Reuters is proud to join the Coalition for Secure AI (CoSAI), a cross-industry initiative dedicated to advancing security standards for AI systems. Together with organizations including Google, Microsoft, IBM, NVIDIA, Dell Technologies, and PayPal, we’ve contributed to a newly released white paper: Establish Risks and Controls for the AI Supply Chain (v1.0).

This work underscores a central truth: AI systems are not like traditional software.
Their attack surfaces include poisoned training data, tampered model weights, insecure plugin ecosystems, and compromised inference infrastructure.
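To make one of these risks concrete, a common supply-chain control against tampered model weights is to verify a model artifact against a pinned digest before loading it. The sketch below is illustrative only and is not taken from the CoSAI paper; the file name and digest value are placeholders.

    # Minimal sketch: verify a model artifact against a pinned SHA-256 digest before use.
    # The expected digest would come from a signed manifest or SBOM entry recorded at
    # publication time; the value and path below are placeholders for illustration.
    import hashlib
    from pathlib import Path

    EXPECTED_SHA256 = "9f2c1d4e..."  # placeholder digest

    def verify_model_artifact(path: Path, expected_sha256: str) -> None:
        """Raise if the on-disk model weights do not match the pinned digest."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            # Hash the file in 1 MB chunks to avoid loading large weights into memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        actual = digest.hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(
                f"Model artifact {path} failed integrity check: "
                f"expected {expected_sha256}, got {actual}"
            )

    # Example (hypothetical path):
    # verify_model_artifact(Path("models/classifier-v3.safetensors"), EXPECTED_SHA256)

Controls like this are only one layer; the white paper covers the broader set of risks and mitigations across the AI supply chain.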

Our Thomson Reuters Contributors 

We’d like to recognize several of our security and AI leaders at Thomson Reuters who helped shape this industry-wide framework: 

  • Yassine Ilmi – Director, Product Security 
  • Arbër Salihi – Lead Product Security Engineer 
  • Lorenzo Verstraeten – Manager, Responsible AI Technology 
  • Danilo Tommasina – Distinguished Engineer, Labs 
  • Ramdev Wudali – Distinguished Engineer, Core AI & Data Platforms 

Their expertise reflects our commitment to secure-by-design practices and our belief that AI innovation must go hand in hand with transparency, accountability, and governance. 

Read the blog and full paper here. 

This work was developed by the Coalition for Secure AI (CoSAI), with contributions from security, engineering, and research teams across leading organizations. 
