Pekin Express. Two people. One euro a day. No GPS. No hotel reservation. No plan beyond “get to the checkpoint before the elimination.”
The show has been running for years. I’ve watched enough seasons to have a clear, uncomfortable opinion about why most teams lose. It’s not the one the show wants you to have.
The teams who fail aren’t the ones without skills. They’re the ones who can’t stop waiting for better conditions before deciding to move.

The truck you didn’t stop
There’s a pattern that appears in almost every season.
A team is standing by the side of a road in the middle of nowhere. A truck drives past. They hesitate. Is it going in the right direction? Do they speak enough of the language? What if there’s a better option in twenty minutes?
The truck is gone. Another team fifty meters away ran to the road the moment they heard the engine, gestured something universally understandable, and is now in the cab while the first team finishes its deliberation.
The difference between these two teams is not intelligence. Not preparation. Not experience.
It’s the tolerance for being wrong.
The team that stopped the truck doesn’t know where it’s going. They might get out in three kilometers because it’s heading the wrong direction. They accept that. They’ve built into their operating model the assumption that some decisions will be wrong, and that speed of correction matters more than accuracy of first attempt.
The team that waited had a different implicit model: make fewer, better decisions. Wait for more information. Optimize the choice. They’re still standing by the road when the other team reaches the checkpoint.
“We need more data” is not an analytical statement
I want to name something that gets dressed up in analytical language constantly in technical organizations.
“We need more data before we can decide on the architecture.”
“We need another quarter of metrics before we can evaluate the team’s performance.”
“We need to complete the RFP process before we can assess whether to change vendors.”
These statements sound rigorous. They position the speaker as careful, methodical, evidence-based. In my experience, they're accurate about half the time. The other half, they're a political maneuver dressed up as analytical discipline.
Here’s how you tell the difference: ask what specific data point, if provided, would change the decision. If the answer is clear and concrete — “if retention drops below 85%, we change the approach” — it’s analytical. If the answer is vague — “we just need to have a clearer picture” — it’s not.
The clearer picture is never coming. The picture is always incomplete. The decision is being avoided because making it creates accountability for the outcome, and not making it preserves the ability to say “we hadn’t decided yet” if things go wrong.
In Pekin Express, that team is still standing by the road when the show ends.
What constraint actually does
I spent years believing that resources solve problems. More budget, more time, more people, more clarity before committing. I was wrong, and I’ve watched enough technical organizations confirm it that I’m no longer polite about this.
Constraint doesn’t just create urgency. It changes the decision-making process itself.
When you have one euro a day, you stop optimizing and start deciding. Not chaotically — strategically. Every interaction has to count. You develop a sharp instinct for what’s worth trying and what isn’t, because the cost of a failed attempt is real and immediate. You become economical with your hypotheses.
The teams with unlimited resources spend three months planning and ship something with seventeen dependencies, four of which nobody has time to maintain. The constraint teams ship in six weeks, own every decision they made, and can explain the reasoning for each one.
I’ve seen this repeatedly in technical organizations. The teams that produced the most coherent work were not the ones with the most resources. They were the ones who had internalized a constraint-first mindset — who built features you could actually ship with the team you had, who made architecture decisions you could actually maintain at the current headcount, who said no to things that were theoretically good but practically impossible given the actual operating conditions.
Unlimited budget doesn’t produce good decisions. Clarity about constraints does.
Recovery speed is the measurement that doesn’t exist
The teams who win Pekin Express are not the ones who make the best first decisions.
They’re the ones who recover fastest when their decisions don’t work out.
They get dropped off in the wrong city. They adapt immediately. They stop a truck, discover it’s going the wrong direction, get out three kilometers later, and try again. They don’t spend twenty minutes processing the mistake. They’re already working the next problem.
This is the capability that actually separates high-functioning technical organizations from struggling ones. And I’ll be direct: most companies hire for first-decision quality and have no idea how to measure recovery speed.
Job interviews ask how you solved the hard problem. They don’t ask how long it took your team to acknowledge that an approach wasn’t working and change course. They don’t ask how you managed the transition from “we’re committed to this architecture” to “this architecture is wrong and we need to move” without losing six months of progress and three engineers who burned out during the pivot.
Recovery speed is the metric. A team that decides carefully but recovers slowly will get lapped by a team that moves quickly, fails occasionally, and corrects without drama. Every time.
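To make that claim concrete, here is a toy expected-value model of my own (the essay gives no numbers; every figure below is an illustrative assumption). A "fast mover" decides cheaply, is wrong more often, but corrects quickly; a "careful decider" is wrong less often, but both deciding and pivoting are slow.

```python
# Toy model (illustrative numbers only, not from the essay):
# over a run of sequential decisions, every decision pays a decision
# cost, and a wrong one (probability p_wrong) also pays a recovery cost.

def expected_total_time(decisions, decide_time, p_wrong, recover_time):
    """Expected elapsed time for a sequence of decisions."""
    per_decision = decide_time + p_wrong * recover_time
    return decisions * per_decision

# Fast mover: decides in 1 unit, wrong 40% of the time, recovers in 1 unit.
fast = expected_total_time(20, decide_time=1, p_wrong=0.4, recover_time=1)

# Careful decider: spends 5 units per decision, wrong only 20% of the
# time, but each wrong call triggers a slow 5-unit pivot.
careful = expected_total_time(20, decide_time=5, p_wrong=0.2, recover_time=5)

print(f"fast mover:      {fast:.0f} time units")    # 20 * (1 + 0.4) = 28
print(f"careful decider: {careful:.0f} time units")  # 20 * (5 + 1.0) = 120
```

Even with double the error rate, the fast mover finishes the course in roughly a quarter of the time, because recovery cost, not first-decision accuracy, dominates the total. The specific numbers are arbitrary; the shape of the result is the point.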
The constraint is already there
The executives I find most effective to work with share one characteristic: they want to understand the actual state of their technical system, including the uncomfortable parts, specifically because a decision is coming and they want to make it with accurate information. Not to delay the decision. To make it faster and better.
The other kind — the one who commissions a study to validate a direction already decided — wants something else. They want coverage, not diagnosis. They’re not standing by the road waiting for a truck. They’re waiting for someone to confirm that waiting was the right strategy.
I’m not useful for that second scenario.
What I find useful to say, clearly: your technical debt is already accumulating. Your architecture already has limits your team is navigating around every sprint. Your system is already gaining complexity at a rate your current headcount cannot absorb indefinitely.
That’s not a future risk. That’s the road you’re currently standing on. The truck has been passing for months.
The question is not whether to move. The question is whether you’re going to build a decision-making model that lets you move with 60% of the information, recover fast when you’re wrong, and make the next decision without relitigating the last one.
Or whether you’re going to wait for better conditions.
They’re not coming.
The authority structure that makes the first decision stick — or not — is in Three Seconds. The full diagnostic picture before you move is Looking Inside the Walls.