Separation of Duties Applies to AI, Too
Separation of duties is not a new concept. The engineer who writes the change is not the engineer who approves it. The developer with write access is not the auditor who reviews the logs. The person submitting the purchase request is not the person approving the payment. These splits are the backbone of control frameworks that have been in place for decades, in every industry that has ever dealt with the consequences of unsupervised authority.
The rush to adopt AI has, in many organizations, quietly erased the rule at the boundary where it matters most.
The default AI-assisted workflow
The default pattern I see in teams adopting AI for engineering work is this: an engineer opens a chat session with a model, describes the task, receives a diff, applies the diff, commits, pushes, and merges their own PR. The agent is treated as a personal productivity tool, a faster way for that engineer to do the work they would have done by hand.
This model works when the engineer remains in the loop as the actual author and reviewer of the change — when the AI is a keyboard accelerant, not a distinct actor. It stops working when the AI is doing substantive work that the engineer is no longer fully rereading.
At that point the engineer has become the dispatcher of an agent, not the author of the change. And a dispatcher approving their own agent's work is a violation of separation of duties under any governance framework that takes the principle seriously.
Why this matters
If the agent makes a mistake — drops an error handler, introduces a subtle logic bug, adds a dependency that should not have been added, silently exceeds the intended scope — the reviewer is the last line of defense. When the reviewer is the same person who kicked off the agent session, three failure modes collapse into one:
- The dispatcher did not notice the mistake when scoping the task.
- The dispatcher did not notice the mistake in the agent's output.
- The dispatcher did not notice the mistake in their own review.
One attentive human. Three chances to catch the error. All serialized through the same attention. The control the review step was supposed to provide is not there.
The rule we enforce
Our rule is blunt: the persona that submitted the work does not approve the work. If Code dispatches a session and produces a PR, the PR is approved by a different persona — the project's PM, or a distinct reviewer session. The reviewer is looking at the PR, the prompt file, the session log, and the ticket, and deciding whether to approve. The submitter does not have self-approve authority.
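As a minimal sketch of the no-self-approve rule — the persona names and PR fields here are hypothetical illustrations, not our actual tooling — the check reduces to a predicate over the PR's metadata:

```python
def can_merge(pr: dict) -> bool:
    """True only if at least one approver is a persona other than
    the one that dispatched the agent and submitted the work."""
    submitter = pr["submitted_by"]           # persona that dispatched the session
    approvers = set(pr.get("approvals", []))
    # A self-approval carries no weight: discard the submitter's own vote.
    independent = approvers - {submitter}
    return len(independent) > 0

# A PR approved only by its own submitter stays blocked:
assert not can_merge({"submitted_by": "Code", "approvals": ["Code"]})
# A distinct reviewer persona unblocks the merge:
assert can_merge({"submitted_by": "Code", "approvals": ["Code", "PM"]})
```

The point of expressing it as a predicate is that the rule becomes enforceable by the system carrying the change, rather than relying on the submitter's restraint.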
This adds friction. It is meant to. The friction is the control.
When we have skipped this step — usually for "small" changes or "obvious" improvements — the bad outcomes have come precisely from the category of changes nobody thought needed a second set of eyes. That is the nature of the failures separation of duties is designed to catch: the ones nobody thought needed catching.
What to actually require
The minimum to require in any AI-assisted engineering workflow:
- The agent's output is reviewed by a human who did not dispatch the agent.
- That reviewer has access to the prompt file and the session log, not only the diff.
- The reviewer's approval is recorded structurally — not as a thumbs-up in a chat but as a reviewer decision in the system of record that carries the change.
- The merge is gated on the reviewer's approval.
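Taken together, the four requirements can be expressed as a single merge-gate check. This is a sketch under assumed field names (`dispatched_by`, `review`, `artifacts`) rather than any real CI system's API:

```python
# Artifacts the reviewer must have had access to, per the requirements above.
REQUIRED_ARTIFACTS = {"diff", "prompt_file", "session_log"}

def merge_gate(change: dict) -> list[str]:
    """Return the list of unmet requirements; an empty list means
    the merge may proceed."""
    problems = []
    review = change.get("review")
    if review is None:
        # No structural decision at all — a chat thumbs-up does not count.
        return ["no reviewer decision recorded in the system of record"]
    if review["reviewer"] == change["dispatched_by"]:
        problems.append("reviewer is the dispatcher")
    missing = REQUIRED_ARTIFACTS - set(change.get("artifacts", []))
    if missing:
        problems.append(f"reviewer lacked artifacts: {sorted(missing)}")
    if review.get("decision") != "approved":
        problems.append("review decision is not an approval")
    return problems
```

Gating the merge is then a matter of refusing any change for which `merge_gate` returns a non-empty list — the approval is recorded structurally because the gate can only read it from the system of record.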
None of this is exotic. It is what your engineering process was probably doing already, for human authors, before AI arrived and rewrote the default workflow around a single-operator chat window.
Restoring the separation is not reversing progress. It is making sure the progress is defensible.