Jan 08, 2026 | AI and product innovation
What CES Reinforced for Me About AI, Accountability, and Trust
CES is always a useful forcing function. It compresses hype, experimentation, ambition, and anxiety into a few days, and it makes it very clear where technology is moving faster than our ability to operationalize it responsibly.
This year, I participated in a panel focused on the real-world risks and rewards of deploying AI at scale. The conversation reinforced something I have been thinking about deeply over the past year: AI capability is accelerating quickly, but trust, accountability, and effective deployment are now the defining challenges.
AI Is Becoming an Operating System
One analogy I shared during the panel resonated with many in the room. Today’s AI feels similar to early operating systems before user-friendly interfaces existed. The capability is there. The power is there. But the tooling, context management, and guardrails that allow people to reliably get the outcomes they want are still immature.
As AI systems become more agentic, they increasingly resemble operating systems rather than point tools. They can orchestrate workflows, use multiple tools, and adapt plans dynamically to achieve goals. That shift is exciting, but it also raises the stakes. When systems move from task-based execution to goal-oriented autonomy, the question is no longer just “Can it do this?” but “Under what constraints should it do this, and who is accountable when it does?”
The Last Mile Is the Hard Part
One theme came up repeatedly during the panel: getting AI systems from 90 percent capability to 99 percent reliability is disproportionately harder than the initial leap. Anyone building or deploying AI has seen this firsthand. Early successes come quickly. The last mile is where fragility shows up, where hallucinations emerge, and where trust can erode if systems are not designed carefully.
At Thomson Reuters, we see this clearly in professional domains like law and tax. These are not environments where “mostly right” is good enough. Reliability, transparency, and accountability are essential, which is why human-in-the-loop design remains critical. AI systems should accelerate expertise, not replace judgment.
The Real Risks Are Overconfidence and Underconfidence
One of my biggest takeaways from CES is that organizations often underestimate risk in two opposite ways. On one side, some dismiss AI after an early failure, missing a meaningful opportunity because they did not invest the time to learn how to use the technology effectively.
On the other, some move too quickly, placing unwarranted confidence in systems that are not yet designed with sufficient oversight or guardrails. Both paths create risks. Responsible deployment sits in the middle. It requires persistence, change management, and a willingness to train humans alongside machines. Many organizations are spending the majority of their effort training models, while spending comparatively little time helping people learn how to work with them. That imbalance showed up repeatedly in conversations at CES, and it is one we need to correct.
Accountability Still Rests with Humans
Another important point that surfaced during the panel is that AI does not absolve organizations of responsibility. Whether in legal research, healthcare, manufacturing, or customer service, AI systems operate within human-defined goals, constraints, and incentives. Accountability does not disappear because a system is automated. In many cases, existing laws already provide a framework for responsibility and liability. The work ahead is not just regulatory. It is about being explicit in how we design, deploy, and govern AI systems so that responsibility is clear and trust is earned.
Leaving CES With Clarity
I left CES more convinced than ever that the future of AI will be defined less by novelty and more by discipline. The organizations that succeed will be the ones that focus on outcomes, invest in human oversight, and design AI systems that are transparent about their limits.

AI is a powerful tool. As it becomes more agentic, our responsibility increases, not decreases. The opportunity is enormous, but only if we build systems that people can trust to operate in the real world.
That, ultimately, was my biggest takeaway from CES this year.