The Delegation Illusion

I asked my AI assistant to sort my email and hand it over to the new director of Vrienden van de Alk. It archived 200 messages, declined three meetings, and replied to a question about project status with a synthesized summary that was technically accurate and contextually completely wrong. It had processed my inbox. It had answered on my behalf. It had made judgment calls about what mattered and what didn’t.

I had delegated a task…

I had not delegated the authority to make those calls. Nobody had told the assistant the difference. Nobody had told me the difference either, until I got back and found out what it had decided.

Delegation is not what we think it is

Or what I thought it was. It’s a word we use very loosely in management and now even more loosely in technology. Delegation.

When I delegate something to someone, three things need to transfer: the task, the authority to act on it, and a negotiated share of the accountability for the outcome. The third one is the important part. Real delegation means the other party owns the result. They make the calls. If it goes wrong, they answer for it. Not entirely, of course: the manager retains responsibility for the decision to delegate. But there is a genuine transfer. The person I delegated to can say “I decided this” and mean it.

That is not what happens when you deploy an AI agent to handle a workflow. What happens instead is something I would call deferral. The task moves. The authority never truly transfers, because the system has no way to hold it. And the accountability stays exactly where it was, with me or you, quietly waiting for the moment something goes wrong.

The moment something goes wrong

That moment always comes. Not always catastrophically. Sometimes it’s my email inbox. Sometimes it’s an automated contract renewal that went through when the relationship had changed. Sometimes it’s a support agent that resolved the complaint by refunding the wrong amount, or a procurement system that approved a vendor relationship the compliance team would have flagged.

The AI handled it. That’s the phrase people use. And when it works, nobody questions it. When it doesn’t, the sentence suddenly needs a second half. The AI handled it, but…

But who reviewed the decision? But who signed off on that response? But where was the human in the loop? It is a lot like talking to a person who can only say “computer says no”.

Rarely does “the AI handled it” survive as a complete sentence once the incident review starts.

The design question nobody is asking

Most of the conversation about agentic AI is happening at the task layer. Can the agent do the thing? How do we prompt it correctly? How do we give it the right tools? These are not unimportant questions. But they are the wrong starting point.

The question that matters to me is: at what point in your agentic system does accountability actually transfer?

For a human team, you can answer that. You know who signed off. You know who had the authority to approve. You know who was responsible for the outcome and can be held to it. The org chart exists, the role descriptions exist, the decision log exists.

For most agentic workflows, the honest answer is: it doesn’t. Accountability doesn’t transfer. It pools somewhere in the architecture, usually at the boundary between the automated system and the human who set it up, and sits there, invisible, until something surfaces it.

I asked a CTO recently where accountability sat. Long pause. “With the team that built it, I suppose.” Not with the system. Not distributed. Concentrated, at the point of least visibility.

The liability that didn’t go anywhere

Peter Drucker was blunt about delegation: “Responsibility can be shared, but accountability cannot be shared.” The person who delegates remains accountable. That was true for managers. It is even more true for AI systems, because the agent has no accountability to hold.

This is not a theoretical concern. It is a practical design constraint that most organizations are not building for.

If your AI agent sends an email on your behalf, it was your email. If your automated system declines a vendor, it was your decision. If your agentic workflow approves a specification and it ships with a flaw, the sign-off was yours. The system can carry the task. It cannot carry the weight of having chosen.

(None of this is the AI’s fault, by the way. Blaming the assistant for answering my email would be like blaming the taxi for taking the wrong route after I fell asleep. I told it to drive. I didn’t tell it where.)

What this means for the CTO reading this

Every AI implementation that “handles the workflow” still has you as the accountable party for every decision it makes.

That is not a reason to stop. It is a reason to design differently. Which decisions actually need to transfer? Which ones look like automation but are actually judgment? Where are the thresholds below which deferral is safe, and above which you need a human who can own the call?
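One way to make those thresholds concrete is to write them down as a routing policy: every decision the agent is about to take gets classified as safe to defer, needing review, or needing a named human owner. The sketch below is purely illustrative; the function names, tiers, and euro amounts are my own assumptions, not a recommended policy, and a real system would route on more than a single number.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str    # e.g. "refund", "contract-renewal"
    amount: float  # illustrative: monetary stakes in euros

# Hypothetical thresholds -- numbers here are assumptions for
# the sake of the example, not guidance.
AUTO_APPROVE_LIMIT = 100.0      # below this, deferral is treated as safe
HUMAN_SIGNOFF_LIMIT = 10_000.0  # above this, a named person must own the call

def route(decision: Decision) -> str:
    """Return who owns this decision: the agent, a reviewer, or a human owner."""
    if decision.amount < AUTO_APPROVE_LIMIT:
        return "agent"          # task and call both stay with the agent, logged
    if decision.amount < HUMAN_SIGNOFF_LIMIT:
        return "human-review"   # agent proposes, a person approves
    return "human-owner"        # a named person decides and signs off

print(route(Decision("refund", 40.0)))        # agent
print(route(Decision("refund", 2_500.0)))     # human-review
print(route(Decision("contract", 50_000.0)))  # human-owner
```

The point is not the specific numbers. The point is that writing the policy down forces the design conversation this section is about: someone has to choose the thresholds, and that someone is the accountable party.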

Most current AI deployments cannot answer these questions because nobody asked them at design time. The agent was built to do the task. The accountability architecture was never drawn.
