When I joined BigTech(TM) more than a decade ago, there was some noise around how they banned PowerPoint and wrote six-pagers. I didn’t think much of it back then, but over time I keep seeing the same shape in unrelated places. The military’s Five Paragraph Order (SMEAC) dates to 1957. Healthcare’s SBAR started on nuclear submarines in the 1940s before Kaiser Permanente adopted it for hospitals. Barbara Minto developed the Pyramid Principle at McKinsey in the 1970s, and Mandel Communications later extended it into SCIPAB. None of these fields were reading each other’s work. They independently arrived at the same skeleton: context, problem, plan, coordination.

Since being introduced to SCIPAB five years ago, I’ve been using it for important emails and short docs and have found it effective, but I hadn’t thought much about why until today, when I got curious about SMEAC. SMEAC (Situation, Mission, Execution, Administration, Command) is structurally doing the same thing SCIPAB does, just without the persuasion layer. SBAR drops it too. They’re all protocols to convey 1/ here’s what’s happening, 2/ here’s what matters, 3/ here’s the plan, and 4/ here’s how we stay coordinated. The shared goal is collapsing the distance between reading and acting.

I think they converge because working memory is limited: around four chunks. When someone sends an unstructured wall of text, the receiver burns those chunks figuring out what’s being asked. Structured formats front-load the parsing. Same reason, I suspect, that Amazon went on to have API contracts: teams agree on the shape up front so both sides can focus on the content.

Ironically, the failure mode is nuance-driven bloat. The D-Day Overlord OPORD, for example, was five pages but carried 130 pages of appendices [via]. Modern operations orders for smaller operations run to hundreds of pages, and I can confess to creating 20-40 pages of appendices for “six-pagers” at Amazon. Marine doctrine had to explicitly warn against letting “your decision become lost in a series of paragraphs, subparagraphs, alpha-numerics, and acronyms.” The format designed to force clarity becomes the thing that obscures it. I quote that phrase deliberately because it’s the same pattern as agent system prompts, where the structure that’s supposed to help becomes the monolith nobody can maintain.

The formats that survive tend to resist bloat through constraints. BLUF (Bottom Line Up Front) lasts because you can’t bloat a single sentence. The one piece of the Five Paragraph Order that crossed into mainstream business thinking, mostly through the Heath brothers’ Made to Stick, was Commander’s Intent. It’s defined as a single sentence describing what done looks like, clear enough that people can improvise when the plan breaks. It works because it fits in working memory and survives contact with reality. For example, Herb Kelleher’s entire Southwest Airlines strategy was “We are THE low-fare airline.” You can’t pad that without diluting the core intent.

Ethan Mollick connected this to something I hadn’t considered in Management as AI Superpower:

This problem existed long before AI and is so universal that every field has invented their own paperwork to solve it…The reason you can use so many formats to instruct AI is that all of these are really the same thing: attempts to get what’s in one person’s head into someone else’s actions.

These are serialization formats for intent. The receiver used to be a soldier, a nurse, a junior consultant, an engineer on another team. Now it’s increasingly an agent. That works when the outcome is genuinely clear, but with hands-off automation we keep finding that the person delegating didn’t fully know their own intent. And while SCIPAB’s Implication and Benefit steps helped me surface whether I actually knew what I wanted, the framework was designed to move the audience, not to check the sender’s thinking. Many other protocols don’t even have that side effect. Commander’s Intent assumes the commander actually has clear intent.
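The “serialization format” framing can be made literal. A minimal sketch, assuming a hypothetical receiver (human or agent) that accepts a structured brief; the field names are SCIPAB’s, everything else here is illustrative:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ScipabBrief:
    """A SCIPAB brief as a typed message. The same object could render
    as an email to a person or as context handed to an agent."""
    situation: str     # here's what's happening
    complication: str  # here's what changed or broke
    implication: str   # why it matters if nothing is done
    position: str      # what the sender believes should happen
    action: str        # the concrete plan being proposed
    benefit: str       # why the receiver should care

# Illustrative content only, not from any real incident.
brief = ScipabBrief(
    situation="Checkout latency p99 has doubled since Tuesday's deploy.",
    complication="The regression correlates with the new retry logic.",
    implication="Sustained latency at this level costs us conversions.",
    position="We should roll back before peak traffic.",
    action="Revert the retry commit, verify p99, reintroduce behind a flag.",
    benefit="Restores conversion while keeping the retry work salvageable.",
)

# Serialize the intent; the receiver parses structure, not a wall of text.
print(json.dumps(asdict(brief), indent=2))
```

The point isn’t the dataclass; it’s that every field is a named slot the receiver no longer has to reverse-engineer from prose.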

This gap didn’t matter much when execution was slow. A junior consultant takes a week on the first pass; you catch drift in the check-in. An engineer raises concerns in the design review. The human receiver’s slowness was itself a kind of safeguard. Agents don’t push back, not yet anyway. They execute faster than you can verify, and what felt like clear intent wasn’t. By the time you notice, the output is three iterations deep in the wrong direction. None of these protocols ever bothered describing what wrong looks like. They didn’t need to; the human on the other end would ask.
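None of the inherited formats have a slot for “what wrong looks like,” so any fix is speculative. A minimal sketch of what such a slot might look like, using hypothetical names throughout; the only idea it encodes is refusing to delegate until failure has been named, not just success:

```python
from dataclasses import dataclass

@dataclass
class IntentBrief:
    """Hypothetical extension of Commander's Intent for delegation to an
    agent. done_looks_like is the classic single sentence;
    wrong_looks_like is the slot the inherited protocols omit."""
    done_looks_like: str         # a single sentence describing success
    wrong_looks_like: list[str]  # outcomes that should halt execution
    max_iterations: int = 3      # stop before drift compounds unchecked

    def ready_to_delegate(self) -> bool:
        # The sender must articulate failure, not only success, before
        # anything executes faster than they can verify.
        return bool(self.done_looks_like) and len(self.wrong_looks_like) > 0

# Illustrative content only.
brief = IntentBrief(
    done_looks_like="Users can reset passwords without contacting support.",
    wrong_looks_like=[
        "Reset links work for accounts the requester doesn't own.",
        "Support tickets drop only because the flow silently fails.",
    ],
)
assert brief.ready_to_delegate()
```

A brief with an empty `wrong_looks_like` would fail the check, which is exactly the case where a human receiver used to ask.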