March 21, 2026

Autonomous AI agents determine their own course of action before executing workflows, transforming how organizations operate and unlocking efficiencies and capabilities that were previously impossible. But pursuing this potential also creates a risk: autonomous actions may carry legal, commercial, or ethical consequences with no human held responsible. To realize the full value of agentic AI, we must ensure that the verified humans who delegate authority to agents remain meaningfully connected to the actions taken on their behalf.

Every organization deploying agentic AI today is, knowingly or not, creating an accountability vacuum: a space where decisions are made, commitments are entered into, and liabilities are incurred without any specific human authorization. This is not a hypothetical risk but an architectural one. It is reinforced by automated, delegated credentials that identity systems were never designed to scrutinize, and it turns a philosophical question, whether the ends justify the means, into an operational reality.

Autonomous AI agents are now executing supplier negotiations, approving invoices, modifying payment terms, and initiating workflows across enterprise systems. They do so with valid credentials, through legitimate APIs, following protocols designed with one assumption baked in: that the entity on the other end of a transaction is a human being.

The urgency is not confined to the private sector. The recent U.S. National Cyber Strategy has explicitly committed to rapidly adopting agentic AI to scale network defense and modernize critical systems. When governments are accelerating agentic AI deployment at national scale, the governance infrastructure required to do so safely cannot remain an afterthought.

Our legacy identity infrastructure assumes that the acting entity is human. FIDO2, which the industry rightly considers the strongest authentication standard available, was designed around a model in which a human being initiates, approves, and completes the authentication ceremony. The entire trust chain begins and ends with a real person. One-time passcodes (OTPs) assume a person is reading the SMS. Push notifications assume a person is tapping a screen. These are not flaws; they are features, built for a world in which machines execute deterministic processes set in motion by humans. They verify identity and permission. What they cannot verify is intent: whether a human with the authority to make this specific decision actually chose to make it.
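To make the gap concrete, consider a minimal browser-side sketch of a WebAuthn (FIDO2) assertion request. The API call is real; the scenario around it is illustrative. The ceremony proves a verified human was present at the authenticator, but the signed challenge is opaque bytes with no decision semantics attached:

```typescript
// Browser-side WebAuthn assertion (simplified; a real deployment
// issues the challenge from the server and verifies the response there).
const assertion = await navigator.credentials.get({
  publicKey: {
    // Opaque random bytes: the resulting signature proves presence and
    // key possession, not *what* the human believed they were approving.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    userVerification: "required", // PIN or biometric: identity + presence
    // Nothing in these options binds the ceremony to, say,
    // "approve revised payment terms for supplier X".
  },
});
```

The protocol verifies who and whether; the why and the what, it was never asked to carry.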

The industry has grappled with non-human identities for years. Service accounts and automated pipelines have long operated with their own credentials and no human touch. But AI agents are qualitatively different.

A traditional automated process executes a fixed script. Its behavior is deterministic and bounded. An AI agent pursues a goal. It selects from possible approaches and takes actions that may never have been explicitly anticipated by the humans who deployed it. This changes the delegation relationship in kind, not merely in degree. We are no longer delegating tasks to machines; we are delegating judgment. And delegating judgment without a mechanism to verify that a human with appropriate authority endorsed the specific exercise of that judgment is not governance. It is gambling.
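A toy contrast, using illustrative stubs rather than any real framework, makes the difference visible at a glance:

```typescript
type Step = () => string;

// Traditional automation: a fixed, bounded script. Every behavior is
// enumerable before it runs; auditing it means reading the code.
function fixedScript(steps: Step[]): string[] {
  return steps.map((step) => step()); // same inputs, same actions, always
}

// Agentic pattern: the agent selects its next action toward a goal.
// The sequence is not enumerable in advance, so auditing it requires a
// record of what was chosen, and of who authorized the choice.
function agentLoop(
  goalMet: (log: string[]) => boolean,
  chooseNext: (log: string[]) => Step,
): string[] {
  const log: string[] = [];
  while (!goalMet(log)) {
    log.push(chooseNext(log)()); // may be a step no human anticipated
  }
  return log;
}
```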

In traditional transactions, if a CEO’s signature is on a contract, their organization bears the burden of proving it was forged. Agentic transactions invert this. When an agent acts on valid credentials but without proof of human intent, the delegating organization can plausibly claim the agent exceeded its brief. The delegator needs confidence that their agent will not expose them to liability, and the counterparty needs assurance that the commitments it receives cannot be disowned.

The EU AI Act and emerging case law are converging on a shared principle: human oversight of consequential AI action must be meaningful, not ceremonial. NIST’s February 2026 concept paper on agent identity identified six focus areas: identification, authentication, access delegation, authorization, logging, and human-in-the-loop binding. We believe the focus should be on the last of these. Human-in-the-loop means not only that a human sets the ends, but that a human approves any risky or unexpected means of achieving them.
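One way to picture human-in-the-loop binding is as a gate in front of consequential actions. The sketch below is a hypothetical shape for that gate, not a scheme taken from the NIST paper:

```typescript
type RiskTier = "routine" | "consequential";

interface ProposedAction {
  agentId: string;
  description: string; // the full context the human approver will see
  risk: RiskTier;
}

// requestHumanApproval stands in for whatever channel verifies the
// approver's identity and captures their decision.
async function executeWithOversight(
  action: ProposedAction,
  requestHumanApproval: (a: ProposedAction) => Promise<boolean>,
  run: (a: ProposedAction) => Promise<void>,
): Promise<void> {
  if (action.risk === "consequential" && !(await requestHumanApproval(action))) {
    throw new Error(`Blocked: no human authorized "${action.description}"`);
  }
  await run(action); // routine means pass; risky means require endorsement
}
```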

Effective human authorization of agentic action requires three things that current enterprise deployments almost never provide together: the right person, whose organizational role makes them the appropriate authority for this class of decision; full information, meaning sufficient context to make a genuine decision rather than a one-click approval that is the illusion of oversight; and an attributable record, timestamped and tied to a verified human identity. These requirements are simply the basic elements of how consequential decisions have always been made in well-governed organizations.
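A hypothetical shape for such a record, sketched rather than prescribed, binds the three elements into one signed artifact:

```typescript
import { createHmac } from "node:crypto";

interface AuthorizationRecord {
  approver: { userId: string; role: string }; // the right person
  agentId: string;                            // which agent sought authority
  decisionContext: string;                    // the full information shown
  approvedAt: string;                         // ISO-8601 timestamp
  signature: string;                          // seals everything above
}

// HMAC keeps this sketch short; a real system would sign with the
// approver's own asymmetric key so the record is non-repudiable.
function sealRecord(
  record: Omit<AuthorizationRecord, "signature">,
  approverKey: string,
): AuthorizationRecord {
  const signature = createHmac("sha256", approverKey)
    .update(JSON.stringify(record))
    .digest("hex");
  return { ...record, signature };
}
```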

This is a balance between two opposing challenges. On one hand, we need to close the gap between human intent and the decisions enacted to realize it. On the other, we must avoid approval fatigue, preserving space for each authorization to be a genuine act of thought, care, and responsibility, which only humans can carry.

The organizations that will capture the full value of agentic AI are not those with the most capable agents but those that build the governance infrastructure to deploy agents at scale while knowing who authorized what, when, and why.

The instinct of many organizations will be to solve the governance challenge by coding ethics into the agentic AI framework. But ethics encoded in a model is a policy. Ethics exercised by a person is accountability. Accountability should be judged by actual behavior under real-world conditions, not by compliance with rules someone configured before the fact. The goal of governance is not to eliminate the human judgment call. It is to ensure that a human actually made one, that we know who did, and that they can answer for it.

At the RSA Conference in San Francisco next week (Tuesday, March 24, 2026), my iProov colleague Johan Sellström will demonstrate how these objectives can be accomplished. His demos and presentation will highlight some of the innovative work the team has been undertaking to tie the actions of AI agents to human authority.

Furthermore, we recently published a consumer research survey entitled “The Great Trust Recession, Driven by Deepfakes.” If we don’t take on this task, I predict an enterprise sequel: “The Great Trust Recession, Driven by Our Own AI Agents.”

Andrew Bud, CBE FREng FIET, is Founder and CEO of iProov. He has spent three decades working at the intersection of identity, security, and emerging technology.