The case for moving fast isn’t about shipping features faster. It’s about shortening the feedback loop between action and understanding what actually matters.
Daniel Lemire’s argument about speed clarifies something I’ve seen play out repeatedly in software teams:
> Moving fast does not mean that you complete your projects quickly. Projects have many parts, and getting everything right may take a long time. Nevertheless, you should move as fast as you can.
The distinction matters. Speed here isn’t about rushed work or cutting corners. It’s about the pace at which you discover what’s actually important versus what you assumed was important when you started.
> A common mistake is to spend a lot of time—too much time—on a component of your project that does not matter.
I’ve noticed this pattern most clearly with teams that batch work into long-lived branches or month-long sprints. They invest heavily in architectural decisions early, then discover three weeks later that the core assumption was wrong. This creates commitment to an approach that should have been abandoned. Textbook sunk cost fallacy.
The mechanism that matters here is learning rate:
> You learn by making mistakes. The faster you make mistakes, the faster you learn.
This is why small batches work. Trunk-based development with ten-second builds and commits every five minutes doesn’t make you type faster. It makes you discover integration problems in minutes instead of days. The constraint forces you to structure work so that mistakes surface immediately.
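To make that loop concrete, here is a minimal sketch of the kind of tight feedback it implies: a small polling watcher that reruns the test suite every time a source file changes. The `src` directory, the one-second poll, and the `pytest -q -x` command are illustrative assumptions rather than a prescription; the point is only that feedback arrives seconds after the mistake instead of days later.

```python
"""Minimal watch-and-test loop: rerun the tests whenever a source file
changes, so mistakes surface within seconds of saving.
Paths and the test command are illustrative; adapt to your project."""

import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("src")            # hypothetical source directory
TEST_CMD = ["pytest", "-q", "-x"]  # -x stops at the first failure: fail fast


def snapshot(root: Path) -> dict[str, float]:
    """Map every tracked file to its last-modified time."""
    return {str(p): p.stat().st_mtime for p in root.rglob("*.py")}


def main() -> None:
    seen = snapshot(WATCH_DIR)
    while True:
        time.sleep(1)  # poll once a second; cheap enough for a small tree
        current = snapshot(WATCH_DIR)
        if current != seen:
            seen = current
            # A change landed: run the tests immediately instead of
            # batching feedback into a nightly build or a long-lived branch.
            subprocess.run(TEST_CMD, check=False)


if __name__ == "__main__":
    main()
```

The `-x` flag is the interesting design choice: stopping at the first failure keeps the loop focused on the first broken assumption rather than a wall of downstream errors.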
> Your work degrades, becomes less relevant with time. And if you work slowly, you will be more likely to stick with your slightly obsolete work.
This degradation happens faster than most teams expect. Requirements shift, dependencies update, team understanding evolves. Slow execution means you’re building against yesterday’s context. Worse, the sunk cost of slow work creates inertia. You’ve invested three weeks in an approach, so you rationalize keeping it even when new information suggests pivoting.
I’ve seen this play out with AI-generated code too. When developers get massive PRs from agents after hours of iteration, they’re more likely to accept code that doesn’t quite fit, because the alternative means throwing away all that generated work. Fast iteration with small, focused agent tasks lets you discard bad directions cheaply. The AI-native pattern of treating throwaway work as an exploration strategy only works when iteration is actually cheap.
The interesting question is what “as fast as you can” actually means in practice. It’s not maximum velocity on everything; that’s how you end up with technical debt from hasty decisions. It’s maximum velocity on discovering which components matter and which assumptions are wrong. The speed isn’t in the implementation; it’s in the learning loop.
I’m curious whether this transfers cleanly across problem domains. For infrastructure work where changes have blast radius, or regulated industries where validation is expensive, does the same learning-rate logic apply? Or does slow-and-careful genuinely produce better outcomes in some contexts? The mechanism makes sense, but I haven’t seen enough variance to know where it breaks down.