Skip to content
AI & Future Technologies

Safeguarding agentic AI: Why autonomy demands governance and security

Sanjay Bhakta  Global Head of Solutions / Centific

· 7 minute read

Sanjay Bhakta  Global Head of Solutions / Centific

· 7 minute read

As AI agents gain autonomy to make decisions and take actions independently, robust governance and cybersecurity frameworks are essential to ensure they remain transparent, accountable, ethical, and secure

Key takeaways:

      • Agentic AI poses unique risks — Agentic AI systems, which operate autonomously and make independent decisions, introduce unique risks such as unpredictability, loss of human control, and ethical concerns, making robust governance and cybersecurity essential.

      • Governance as central tenant — Establishing actionable governance frameworks — built on principles like transparency, accountability, fairness, and safety — is essential to guide agentic AI systems and ensure their alignment with human values and legal requirements.

      • New attack vectors make cybersecurity critical — Agentic AI introduces new attack vectors, making cybersecurity measures like zero-trust architecture, adversarial testing, and strict identity management critical to protecting agentic AI from manipulation and attacks.


Agentic AI — AI systems endowed with agency — already are transforming how businesses and governments operate. These are not just static programs following if/then rules; rather, they are autonomous, goal-driven AI agents that can make decisions, initiate actions, and learn from feedback with minimal human intervention. Agentic AI is among the top strategic tech trends of 2025, and by 2028 up to 15% of daily work decisions could be made autonomously by AI agents, according to Gartner.

This promise, however, comes with unprecedented risks.

When an AI can act on its own, governance and cybersecurity are not optional, they’re imperative. Technical leaders, executives, and policymakers must ensure these agents are transparent, accountable, ethical, and secure.

New risks in the era of autonomous AI

Unlike traditional AI, which is often reactive and narrowly task-specific, agentic AI is proactive and adaptive. These agents can set goals, devise plans, and execute tasks without constant human prompts. In essence, an agentic AI doesn’t just answer questions or classify data, it acts. For example, a conventional AI might flag a fraudulent transaction for review; an agentic AI could autonomously freeze the account, initiate an investigation, and alert security teams.

Giving AI agents more autonomy is like giving AI itself a longer leash. It increases the technology’s utility but also increases the potential for unexpected or undesired behavior by these agents. Indeed, agentic AI introduces unique risks arising from its autonomy, decision-making power, and unpredictability, such as:

      • Opaque decision processes — By design, an agentic AI can generate its own strategies to meet goals, sometimes in ways even its developers didn’t explicitly program. This can lead to a lack of transparency.
      • Unpredictability and emergent behavior — Agentic systems can behave in unexpected ways when facing novel situations. Because they adapt in real-time, their actions aren’t fully predictable ahead of time.
      • Loss of human control — As agents operate with less oversight, intervention becomes more difficult.
      • Ethical and bias risks — With agents making decisions that affect humans, bias and ethical blind spots can translate into real harm. Agentic AI can unintentionally amplify societal biases present in training data or pursue its goals in ways that conflict with ethical norms.

Governance and cybersecurity in an autonomous world

Given that agentic AI can impact real lives and critical operations, it becomes essential to put into place strong AI governance frameworks to align these systems with human values, organizational policies, and legal requirements. Governance means transparency, accountability, and ethical alignment as systems evolve and begin to act autonomously.

Pragmatic, robust governance includes technology tools, organizational policies, and the  establishment of an AI governance committee. Governance frameworks should be established on a bedrock of common cybersecurity standards such as NIST AI RMF, ISO 42001, and ISO 22989; aligned with intergovernmental Organisation for Economic Co-operation and Development member principles; and in compliance with respective regulatory requirements such as the California AI Transparency Act, the Colorado AI Act, the European Union’s AI Act, and many others.

Along with ethical governance, cybersecurity is the other pillar that must reinforce agentic AI systems. In fact, security and governance are deeply intertwined because a lapse in one can undermine the other. Agentic AI introduces novel cybersecurity challenges that organizations and public agencies must address head-on, including:

      • Expanded attack surface — An AI agent with wide-ranging autonomy effectively becomes a new digital actor in your environment — one that can be targeted. Threat actors might attempt to hijack or manipulate an agentic AI to do their bidding, and this could involve feeding malicious inputs to trick the AI’s decisions, exploiting vulnerabilities in the AI’s code, or even using one AI agent to infiltrate another, known as cross-agent attacks.
      • Adversarial manipulation — Because agents can self-execute complex sequences, a single successful manipulation could cause a chain reaction. For instance, an attacker might trick an autonomous IT helpdesk agent into believing a fake user request. The agent might then grant elevated system privileges to the attacker or shut down a service, all on its own.
      • Zero-trust architecture for AI — Each AI agent should continuously authenticate itself and be verified for appropriate behavior. Micro-segmentation, in which each agent or service operates in a minimal trust zone, can prevent a compromised agent from freely roaming the network. Techniques like code signing and environment attestation ensure the agent is exactly the vetted version and hasn’t been tampered with.
      • Adversarial training — Adversarial training, which is training AI on known attack patterns, so it learns to resist them, making agents more robust. Additionally, adopting practices in which security experts actively attempt to attack the AI system in a controlled setting to find weaknesses also is useful.

Recommendations for organizational leaders and policymakers

Agentic AI has huge potential for public good, from smart city systems that optimize traffic and energy, to autonomous assistants that improve government services. However, public sector leaders must proceed deliberately, enabling strong governance and security measures from the outset. Some suggested best practices around governance and security include:

      1. Establishing actionable governance frameworks — This includes codifying principles like transparency, accountability, fairness, and safety into the AI model.
      2. Fostering trustworthy AI training and culture — Public sector organizations should train their staff and leadership on AI ethics and governance. Cultivating a culture in which raising concerns about an AI’s behavior is encouraged should be balanced with ethical hackathons.
      3. Ensuring human-in-command oversight — Especially in public sector use-cases that affect citizens’ rights or safety, retaining human oversight is critical. Establish a process to regularly review external feedback and compare the results with the cases in which shifts in the data or the model make estimates so uncertain that they exceed approved risk limits.
      4. Strengthening cybersecurity posture — Identity and access management for AI agents is critical, and each agent should have unique credentials, minimal access rights, and all actions should be logged. Adopt a zero-trust approach: No AI agent or component is inherently trusted, even if inside the network. Use techniques like micro-segmentation to isolate AI components or mitigate compromised situations.
      5. Conducting adversarial testing — Before and during deployment of agentic AI, perform rigorous security testing. This means actively trying to compromise the AI models by simulating attacks or inputting corrupted data.

Agentic AI systems represent a profound shift. They can automate complex decisions and actions at a scope and scale previously unimaginable, driving efficiency and innovation in both the public and private sectors. However, with greater autonomy comes greater responsibility. The imperative for AI governance and cybersecurity in agentic AI cannot be overstated: They are the tale of two cities for trust.

For technical leaders and policymakers, the task is clear. Governance provides the ethical compass and steering wheel for AI, ensuring these powerful agents remain accountable, transparent, and aligned with our values and laws. Cybersecurity builds the safety belt and air bags, protecting AI systems from malicious actors and failures that could derail their purpose. Together, they create the conditions for agentic AI to thrive safely and deliver the benefits of autonomy without courting chaos.


You can find out more about the challenges and benefits of using agentic AI here

More insights