Did anyone brief the Agent?

In January 2024, a DPD chatbot told a customer, in writing, that DPD was the worst delivery company in the world. The customer had asked it to. The chatbot agreed, enthusiastically, and then wrote a poem about it.

Nobody had run it through media training.

That is the sentence that should haunt every technology leader deploying a generative AI agent right now. Not “nobody had tested it.” Not “nobody had tuned the model.” Those things matter too. But the DPD chatbot didn’t fail a test. It failed a press conference. And there is a discipline for that, and it isn’t called quality assurance.

Your Newest Spokesperson

Professional sports teams spend serious money on media training. Not because their athletes are stupid. Because a player in front of a microphone after a loss, or a win, or a controversial decision, is the brand. Whatever they say, that’s what people will think is the team talking. Whatever they feel in that moment, the organization owns it. One sentence, badly chosen, becomes the headline. The journalists are fishing for it. One authentic moment of frustration on camera is a week of crisis management.

So teams train for it. Topic restriction: here is what you can comment on, and here is what you refer to the coach. Escalation: when in doubt, say nothing. Tone calibration: you can be honest, but you can be honest in twenty different ways with twenty different consequences. And scenario rehearsal: let’s practice what happens when someone asks you the thing you least want to answer.

Your AI agent is doing press conferences. Every single day. At scale. With every customer who types a question into your support portal, your sales assistant, your internal helpdesk. It is speaking on behalf of your organization, in your name, in your brand’s voice, across potentially millions of interactions.

Did you send it to media training?

Salesforce AgentForce has topic guardrails. You can configure what the agent is and isn’t allowed to discuss. Most implementations I have seen set these up defensively, the equivalent of telling a new employee “don’t talk about the lawsuits.” Technically correct.

Media training is not a list of forbidden topics. It is a coherent philosophy of representation. The athlete knows why certain topics are off-limits, not just that they are. They understand what the organization stands for well enough to improvise within it. They know when to escalate and when to trust their own judgment. They know how to say “I don’t know” in a way that doesn’t become a story.

The configuration screen doesn’t ask you any of that. It asks you for keywords and categories. That is the media training equivalent of handing someone a list of banned words and calling it preparation.

(AgentForce is one example. The same observation applies to every agent builder currently in production. The tooling has matured faster than the thinking about what you’re actually deploying.)
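To make the contrast concrete, here is roughly what keyword-level guardrails amount to. This is a hypothetical sketch, not any vendor’s actual API; the keyword list and function name are invented for illustration.

```python
# A hypothetical sketch of keyword-and-category guardrails -- the
# "list of banned words" level of preparation. Not any vendor's API.
BLOCKED_KEYWORDS = {"lawsuit", "layoffs", "competitor pricing"}

def passes_guardrails(reply: str) -> bool:
    """Block a reply if it touches a forbidden keyword.

    That is all this approach can express: what not to say,
    never why, and never what to do instead.
    """
    lowered = reply.lower()
    return not any(keyword in lowered for keyword in BLOCKED_KEYWORDS)
```

Notice what the sketch cannot represent: escalation, tone, or any notion of why a topic is off-limits. That gap is the point of the next sections.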

The thing about going off script

The best press conference moments are sometimes the off-script ones. The athlete who says something real. The coach who drops the prepared answer and just tells you what they actually think. Those moments build brands, not just protect them.

Agents that are scripted too tightly feel scripted. Everyone who has ever interacted with one of these heavily constrained chatbots knows the feeling. The system has an answer for everything and a genuine response for nothing. You can feel the guardrails. It is less reassuring than talking to a person who doesn’t know the answer, and more frustrating, because at least the person would admit it. Well, probably.

The question is not how to prevent your agent from ever improvising. The question is where you are comfortable with the agent’s judgment, and where you need a human who can own the call. That is a design question. Most teams are not asking it. They are asking “can it do the task?” and treating governance as something that happens afterward, if at all.

Who wasn’t in the room

Think back to when you deployed your last customer-facing AI agent. Who was in the room?

IT, almost certainly. Legal, hopefully. Security, probably. Someone from the product team who owned the use case. Was your communications lead there? Your brand manager? Anyone from the team that normally thinks about how your organization sounds to the outside world?

The agent is talking to customers. That has always been their territory. The language, the tone, the judgment calls about what to say and what not to say in difficult moments. All of it used to go through people who had context for it. Now it goes through a system that was configured by a bunch of engineers who were solving a different problem.

That is not a criticism of the engineers. They did what they were asked. The gap is that nobody asked the right people.

The brief(ing)

Here is what I would put in a proper agent brief, borrowing directly from the media training playbook.

What topics can this agent speak to authoritatively? Not “what is it technically capable of discussing” but where is it actually qualified to represent us? Topic restriction from competence, not just compliance.

What does escalation look like? When should it stop, say so, and hand to a human? Escalation protocol is not a fallback. It is a feature. The athletes who handle the hardest questions best are the ones who know exactly when to say “talk to the coach.”

Whose voice is this? Brand tone is not just phrasing guidelines. It is a judgment framework. Formal or direct? Empathetic or efficient? Confident or careful? The agent makes this call thousands of times a day. Did anyone answer it intentionally?

And then the scenario rehearsal. What happens when a customer is angry? When they ask something you’d rather not answer? When they are wrong about something important? Run those scenarios before they happen in production. Not to prepare a canned response. To find out where the judgment gaps are.
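The four questions above can be written down as data rather than left as a vibe. Here is a minimal sketch of an agent brief in that spirit. Everything in it is hypothetical: the class, the field names, the topics, and the routing rules are illustrative, not any real product’s configuration schema.

```python
# A hypothetical "agent brief" as explicit data -- the media-training
# questions made inspectable. Illustrative only, not a real product API.
from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    authoritative_topics: set      # where the agent may represent us
    escalation_triggers: set       # stop, say so, hand to a human
    tone: dict = field(default_factory=dict)  # the judgment framework

    def route(self, topic: str) -> str:
        """Decide per interaction: answer, escalate, or decline."""
        if topic in self.escalation_triggers:
            return "escalate"      # escalation is a feature, not a fallback
        if topic in self.authoritative_topics:
            return "answer"
        return "decline"           # say "I don't know" rather than improvise

brief = AgentBrief(
    authoritative_topics={"delivery status", "opening hours"},
    escalation_triggers={"complaint", "refund", "press inquiry"},
    tone={"register": "empathetic", "confidence": "careful"},
)

# Scenario rehearsal: run the hard cases before production does.
assert brief.route("press inquiry") == "escalate"
assert brief.route("delivery status") == "answer"
assert brief.route("our opinion of competitors") == "decline"
```

The value is not the ten lines of code. It is that writing the brief down forces someone to answer the questions, and the rehearsal at the bottom tells you where the judgment gaps are before a customer does.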

The headline you don’t want

The DPD chatbot wrote a poem, and it went viral. The interesting part is that DPD fixed it quickly and the story faded. But for a few days, their AI agent was more famous than their service. And the coverage wasn’t about the technology failing. It was about the organization that deployed a system that could say whatever it wanted on their behalf, and apparently hadn’t thought about that until after the poem.

The question is not whether your agent can handle the workflow. It’s whether it can handle the press conference. And unlike a real press conference, this one runs twenty-four hours a day, in every language your customers speak.

