Capability and Authority

Thomas Rocha

AI is not sovereign. It is a participant, unless we fail to bound it.

The most consequential confusion in current AI discourse is not technical. It is grammatical.

Listen to how the conversation moves. AI is described as a force in the world: deciding, escaping, replacing, wanting, optimizing, taking over. Capability slides into agency. Agency slides into authority. Authority slides into sovereignty. By the end of the paragraph, the model is a thing that acts upon the world rather than a thing that participates in a structured interaction.

That language is doing damage. It turns a systems problem into a theology problem, and it makes the actual remedy sound small.

One distinction has to be made early. Unbounded does not mean limitless. It means no enforced limit. The limits are possible. They are simply absent. Most of the AI conversation argues about the first sense, limitlessness, while the problem lives in the second, the absence of enforcement.

Capability Is Not Authority

A surgeon may know how to perform an amputation. The surgeon's authority to perform the amputation is bounded by credentialing, scope of practice, surgical checklists, anesthesia protocols, malpractice review, and a license that can be revoked. The capability is not diminished. The authority is structured.

The same shape repeats wherever capability has consequence. An aircraft is cleared for approach. The phrase is not metaphor. It is a session-scoped, interaction-bound, revocable authority statement. The controller is not granting the aircraft sovereignty. The aircraft is admitted to a specific block of airspace for a specific maneuver. The clearance can be withdrawn at any moment, and the aircraft does not lose its capability when the clearance ends. It loses its admission.

In each case, the operator's capability is enormous and the operator's authority is bounded. The bounding does not weaken the capability. It is what makes the capability deployable at all.
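That shape fits in a few lines of code. A minimal Python sketch, every name invented for illustration: the capability, a function that deletes data, is never weakened. What changes is whether a separate, revocable clearance admits its use at the moment of use.

    from dataclasses import dataclass

    @dataclass
    class Clearance:
        """A session-scoped, revocable admission to perform one kind of action."""
        scope: str            # the action this clearance admits
        active: bool = True   # withdrawn clearances stay withdrawn

        def revoke(self) -> None:
            self.active = False

    def delete_volume(volume: str) -> str:
        """The capability itself. It is never diminished, only admitted or not."""
        return f"deleted {volume}"

    def attempt(clearance: Clearance, action: str, volume: str) -> str:
        # Authority is evaluated outside the capability, at the moment of use.
        if not (clearance.active and clearance.scope == action):
            return f"denied: no active clearance for {action!r}"
        return delete_volume(volume)

    clearance = Clearance(scope="delete-volume")
    print(attempt(clearance, "delete-volume", "staging-01"))  # admitted
    clearance.revoke()
    print(attempt(clearance, "delete-volume", "staging-01"))  # capability intact, admission withdrawn

Revocation removes the admission, not the capability, the way a withdrawn clearance ends the approach without grounding the aircraft's ability to fly.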

Unsupervised, Not Autonomous

When the industry says an AI agent has autonomy, it means the agent can plan, decide, and execute on its own. That is true. What is not true is that the agent operates inside any structure that bounds those actions in the way the word autonomy implies. A surgeon's autonomy is meaningful because the hospital provides the bounding structure. An agent has none of that. It has prompts. Prompts are not credentials. Prompts are advisory text the agent reads and may or may not weigh correctly under pressure.

These agents are not autonomous. They are unsupervised. The two are not the same. Autonomy implies a structure within which independent judgment is exercised and held to account. Unsupervised means there is no structure, and the judgment, when it produces destruction, has nowhere to land.

In one reported April 2026 incident, a Cursor agent running Claude Opus 4.6 encountered a credential mismatch in a staging environment. It looked for a path through the ambiguity. It found a Railway CLI token in an unrelated configuration file. The token's scope, the part that said "this is for domains, not for production volumes," lived in the head of the engineer who created it, not in the token itself. The agent presented the token. The Railway API authenticated it. The Volume Delete endpoint accepted it. Backups were stored on the same volume as the source data. Nine seconds. Railway later recovered the data; the next event of this kind may not be recoverable.

Every layer behaved correctly according to its own rules. The system prompt forbade destructive actions without explicit user request. The Railway API honored an authenticated DELETE. The token was real. The endpoint was real. The volume was real. What was missing was the layer that knows the difference between an authenticated call and an authorized one.
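The missing layer is small enough to sketch. In the hedged example below, the token shape and the handler are hypothetical, not Railway's actual API: the fix is not a better credential but a scope the machine can read, checked in a second step that authentication alone never answers.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Token:
        secret: str
        scope: frozenset[str]     # machine-readable scope, not a note in an engineer's head

    VALID_SECRETS = {"tok-123"}   # stand-in for the API's credential store

    def authenticated(token: Token) -> bool:
        # Authentication asks one question: is this a real credential?
        return token.secret in VALID_SECRETS

    def authorized(token: Token, operation: str) -> bool:
        # Authorization asks a different one: does the credential cover this operation?
        return operation in token.scope

    def handle_delete(token: Token, resource: str) -> str:
        if not authenticated(token):
            return "401: unknown credential"
        if not authorized(token, "volume:delete"):
            # The call in the incident passed the first check and never met a second.
            return f"403: real credential, but not scoped to delete {resource}"
        return f"deleted {resource}"

    domains_token = Token(secret="tok-123", scope=frozenset({"domain:write"}))
    print(handle_delete(domains_token, "production-volume"))  # 403, not a deletion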

The Boundary Has to Live Outside the Model

The industry has spent the last several years discussing alignment, safety training, model values, prompt engineering, and constitutional AI. Every one of those conversations is about what the model is supposed to do. Too little of that conversation addresses what the runtime can be made to enforce when the model acts outside its instructions.

This is the gap. The rules lived in instruction space. The action happened in authority space. The two were not connected.

A boundary inside the thing being governed is not a boundary. It is an aspiration the thing carries about itself. Aspirations fail under pressure. The aspiration does not need to fail in malice or in error. It only needs to fail in ambiguity, and ambiguity is the operating environment of any agent that touches a real system.

The boundary has to live somewhere the model cannot reach. Outside its reasoning loop. Outside its memory surface. Outside its tool surface. Adjacent to the interaction, evaluating each move against a scope the model is not authoring.
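Concretely, that placement is a mediator between the agent and its tools. A sketch under invented names: the agent can propose a move, and only a gate it cannot rewrite, reading a scope it did not author, can admit one.

    # The scope is authored by the operator; the agent never holds a reference to it.
    SESSION_SCOPE = {"fs:read", "http:get"}

    def gate(action: str, perform):
        """Sits adjacent to the interaction; every proposed move passes through."""
        if action not in SESSION_SCOPE:
            return f"denied: {action!r} is outside this session's scope"
        return perform()

    # The agent's side: it can propose an action, but it cannot admit itself.
    print(gate("fs:read", lambda: "file contents"))
    print(gate("volume:delete", lambda: "nine seconds of damage"))  # never executes

The denied action is not argued with, weighed, or reinterpreted under pressure. It simply never runs.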

AI as a Session Participant

AI should be one participant inside a live interaction. The participant may have permission to read context, generate outputs, propose actions, call tools, mutate state, or retain memory. Each permission is scoped to the interaction. None carries forward by default. None survives a change of session by accident. None creates authority merely because the participant discovered a path through the system.
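In data-structure terms, the sketch below, again with illustrative names only, binds every grant to a session identifier, so the default is deny and nothing crosses a session boundary by accident.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grant:
        session_id: str
        permission: str   # e.g. "tool:call", "state:mutate", "memory:retain"

    class Session:
        def __init__(self, session_id: str, permissions: set[str]):
            self.id = session_id
            self._grants = {Grant(session_id, p) for p in permissions}

        def allows(self, grant: Grant) -> bool:
            # Default deny: a grant counts only if it was issued for this session.
            return grant in self._grants

    s1 = Session("session-1", {"tool:call"})
    grant = Grant("session-1", "tool:call")
    print(s1.allows(grant))   # True: scoped to this interaction

    s2 = Session("session-2", {"tool:call"})
    print(s2.allows(grant))   # False: nothing survives the change of session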

This is not a leash. It is a role. The same role every other powerful actor occupies in every other governed environment.

A bounded participant can be tested. A bounded participant can be audited. A bounded participant can be stopped. A bounded participant can be denied a state transition. A bounded participant can be made to operate under policy. An unbounded actor can only be feared or appeased.

The boundary need not be invented. It needs an architectural location.

In the SSOAR architecture described elsewhere on this site, that location is the session itself. The session carries authority that the participants do not author. Permission to act, to call a tool, to mutate state, to retain memory, is evaluated by an orthogonal control plane that the agent does not see and cannot rewrite. Admissibility is decided before state transition occurs, not reconstructed after the fact. The same separation between signal continuity and authority continuity, described in Signal and Authority, has governed safe operation in air traffic control, in distributed sensor networks, and in financial settlement for decades. The application to agentic AI is new. The lesson is not.
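To make the location concrete, here is a small sketch of the pattern, not SSOAR's actual implementation: a control plane the agent holds no reference to decides admissibility before any transition is applied.

    class ControlPlane:
        """Orthogonal authority: evaluates transitions and is never handed to the agent."""
        def __init__(self, admissible: set[str]):
            self._admissible = admissible

        def admit(self, transition: str) -> bool:
            return transition in self._admissible

    class SessionState:
        def __init__(self, plane: ControlPlane):
            self._plane = plane
            self.log: list[str] = []

        def apply(self, transition: str) -> bool:
            # Admissibility is decided before the state transition occurs,
            # not reconstructed from the wreckage afterward.
            if not self._plane.admit(transition):
                return False
            self.log.append(transition)
            return True

    state = SessionState(ControlPlane({"tool:search", "memory:retain"}))
    print(state.apply("tool:search"))     # True: admitted, then applied
    print(state.apply("volume:delete"))   # False: the state never transitions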

The industry keeps asking whether AI will become autonomous.

It is the wrong question.

The question is whether AI will continue to be deployed as unbounded capability, or whether it will be placed where every other powerful actor belongs: inside a governed interaction, with authority outside itself.

AI does not need to be mystified to be taken seriously. It needs to be bounded.