In Formula 1, there are always two versions of the race happening simultaneously.

The version on the pit wall: tire degradation curves updated every sector, gap to competitors measured to the hundredth of a second, undercut windows calculated by simulation, weather models updated every few minutes.

The version in the cockpit: the rear end stepping out under traction on turn nine, the steering vibrating differently since lap thirty-two, the car feeling planted or loose in ways no sensor fully captures, the competitor’s line through the last corner suggesting he’s managing something.

These read as two different races. They are complementary views of the same one, and neither is complete without the other.

The decisions that win championships are not made from the pit wall alone. They are made in the gap between these two readings — in the quality of the interface between the engineer with the data and the driver with the feel.

I think about this constantly when I audit an information system.


The undercut and the timing of decisions

The undercut is one of the more elegant strategic concepts in modern Formula 1, and it illustrates something important about how good technical decisions work.

A driver is running second, unable to pass on track. The gap to the leader is stable but not closing. On raw pace, they’re equal. The race appears locked.

The pit wall brings the second car in for fresh tires several laps before the normal window. The driver exits with a ten-second deficit but considerably faster tires. He attacks. He posts fastest lap. Fastest lap. Fastest lap again.

When the leader finally pits, he comes out behind.

The position changed in the pit lane, not on track. The decisive move happened before anyone in the grandstands understood that a move was being made.
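The mechanism reduces to simple arithmetic. A minimal sketch in Python — the gap, tire delta, and lap counts below are illustrative assumptions, not figures from any particular race:

```python
# Illustrative undercut arithmetic -- all numbers are assumptions.
GAP = 2.5         # assumed on-track gap to the leader, seconds
TIRE_DELTA = 1.8  # assumed per-lap advantage of fresh tires, seconds/lap

def undercut_succeeds(gap: float, tire_delta: float, offset_laps: int) -> bool:
    """Both cars pit once, so pit-lane losses roughly cancel; what decides
    the undercut is the per-lap advantage multiplied by the laps spent on
    fresher rubber before the leader responds."""
    return tire_delta * offset_laps > gap

for laps in range(1, 4):
    print(laps, undercut_succeeds(GAP, TIRE_DELTA, laps))
# 1 False -- one lap of advantage is not enough
# 2 True  -- two laps of fresh-tire pace already covers the gap
# 3 True
```

Under these assumptions the attack only needs two or three laps of offset — which is why the rival team has so little time to recognize what is happening and respond.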

What makes the undercut work is not speed. It’s the decision being made before the situation becomes obvious. By the time the undercut window is apparent to everyone — to the commentators, to the rival team, to the fans — it’s already too late to execute it cleanly.

The teams that win on strategy are the ones that decided before the window opened. Not because they had better data. Because they acted on what the data was beginning to show before the picture was complete.

Technical organizations fail in exactly the inverse way. They wait for the situation to be unambiguous. They want the full diagnosis before committing to treatment. By the time the technical debt, the architectural limit, or the team capacity problem is obvious to everyone in the room, the window for a clean resolution has usually closed. What remains is a forced pit stop under pressure, with cold tires, in traffic.

What thresholds don’t tell you

There is a recurring argument I have with technical leaders that I now recognize as structurally identical to the threshold-versus-pattern debate in monitoring.

The threshold argument: if a metric exceeds a value, trigger an alert. Simple, auditable, defensible.

The pattern argument: monitor the shape of what’s happening, not just whether a number has crossed a line.

On an engagement with a startup in the middle of fundraising (I had been brought in by a business angel; the company had significant technical problems it wasn't fully seeing), I built a complete monitoring infrastructure: the ability to ingest every available metric, store it, display it, correlate it.

The CEO wanted threshold-based alerts. Clear, readable, binary. Something exceeds X, notify me.

I built something different. Behavioral detection. Patterns. Graphs that showed the form of what was happening, not just whether a number had crossed a boundary.

He never fully understood why.

His engineers did.

The day they had scaling problems, the pattern had been visible in the graphs for six hours before any threshold was breached. The team could point to exactly what was forming, how it was forming, and why. They could act on the mechanism, not just observe the symptom.

A threshold tells you the tire is hot. A pattern tells you how it’s going to fail and approximately when.
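The difference between the two alert styles can be sketched in a few lines of Python. The threshold value, the sample window, and the projection horizon are all arbitrary assumptions, not the actual system I built:

```python
# Threshold vs. pattern detection -- a minimal sketch with made-up numbers.
from typing import Sequence

LIMIT = 100.0  # assumed alert threshold for the metric

def threshold_alert(value: float, limit: float = LIMIT) -> bool:
    """Fires only once the number has crossed the line."""
    return value > limit

def trend_alert(window: Sequence[float], limit: float = LIMIT,
                horizon: int = 12) -> bool:
    """Fires when a least-squares slope projects the metric past the
    limit within `horizon` future samples -- the shape, not the value."""
    n = len(window)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(window) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, window))
             / sum((x - mean_x) ** 2 for x in xs))
    return window[-1] + slope * horizon > limit

# A steadily climbing metric, still well under the threshold:
samples = [60, 63, 67, 70, 74, 78]
print(threshold_alert(samples[-1]))  # False -- nothing has crossed the line
print(trend_alert(samples))          # True  -- the slope already projects past it
```

The threshold function stays silent until the line is crossed; the trend function fires while the metric is still comfortably inside bounds, because the shape of the curve already points past the limit. That is the six-hour head start.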

In F1, the difference between a proactive pit stop and a tire failure on the main straight at 300 kilometers per hour is often the team’s ability to read the degradation curve rather than wait for a temperature alert.

The decision that isn’t made where it appears to be made

Hamilton. Silverstone. 2013.

The British Grand Prix. The models said Hamilton's tires were fine. The left-rear was degrading faster than anything the simulations predicted — an issue that would affect multiple cars that afternoon.

The tire failed at speed on the Hangar Straight.

The data showed the tire was holding. The car wasn’t holding. These are different statements, and they diverged with consequences.

I’ve been brought into executive-committee (COMEX) presentations where the data was telling a story. Clean, coherent, well-formatted. The numbers supported the decision the room had already made.

The problem: I had the context. The actual data. Evidence that showed what was being constructed in that room — not an honest reading of the situation, but a political narrative dressed as analysis. The goal was to win an internal argument, not to act in the company’s interest.

What struck me was not the maneuver. What struck me was that it worked.

It worked because the executives in the room didn’t know what questions would break the story. They didn’t know what data existed that the presentation had chosen not to show. They couldn’t tell the difference between a dashboard that reflected reality and a dashboard built to reflect a decision already made.

The pit wall was deciding. On filtered telemetry.

(I should be precise about what I mean here: this isn’t necessarily cynical. People often genuinely believe the story they’re constructing. The mechanism is more subtle than deliberate fraud. When you’ve spent eight months building a case for an architectural decision, you start reading data through that lens. The confirmation isn’t fabricated — it’s selected. The effect is the same. The tire isn’t holding.)

In that particular case, I had already put the necessary things in place. The political decision didn’t change the technical outcome. But it shouldn’t have required that kind of preparation. The failure was upstream — in a governance structure that couldn’t distinguish between telemetry and a story about telemetry.

When the safety car comes out

One of the most operationally revealing moments in Formula 1 is the safety car period.

The race neutralizes. Every car closes up. Teams have ninety seconds, perhaps less, to make a decision: pit and take fresh rubber, or stay out and hold position.

The teams that perform consistently well in these moments share one characteristic: they’ve already made the decision, before the safety car came out.

Not the specific decision for this specific race. The framework. The decision tree. The predefined criteria that determine when you pit under safety car and when you don’t, who has the authority to override the simulation, what information is required before committing.

The worst performances I’ve seen under safety car are from teams trying to improvise the decision criteria in real time while also executing the tactical choice. They’re simultaneously deciding how to decide and deciding. The cognitive load collapses the window.

Technical organizations have safety car moments constantly. An unexpected departure. A security incident. A competitor shipping something that changes the market. A regulatory change with a six-week compliance window.

The organizations that respond well are not necessarily the ones with the fastest leaders or the most data. They’re the ones that have defined, in advance, who decides what, with what minimum information, in what timeframe. Not a process document — a decision protocol. The difference is that a process describes steps. A protocol specifies authority, information minimums, and time constraints.
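The distinction can be made concrete. Below is a hypothetical sketch of a decision protocol expressed as data rather than as steps — every field name and the example entry are invented for illustration:

```python
# A decision protocol as data: authority, information minimums, and time
# constraints -- not steps. All fields and the example are hypothetical.
from dataclasses import dataclass

@dataclass
class DecisionProtocol:
    trigger: str               # the "safety car" event this covers
    decider: str               # who holds authority (and may override models)
    min_information: list      # what must be known before committing
    time_limit_minutes: int    # how long the decision window stays open

    def ready(self, known: set) -> bool:
        """The decision may be taken once the information minimum is met.
        Waiting for a complete picture is explicitly not allowed."""
        return set(self.min_information) <= known

security_incident = DecisionProtocol(
    trigger="production credential leak",
    decider="CTO; on-call lead if unreachable within 15 minutes",
    min_information=["affected systems", "exposure window"],
    time_limit_minutes=30,
)

print(security_incident.ready({"affected systems"}))                      # False
print(security_incident.ready({"affected systems", "exposure window"}))  # True
```

A process document would list the steps of incident response. The protocol pins down only three things: who may decide, what must be known first, and how long the window stays open.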

Most technical organizations have processes. Almost none have decision protocols.

The radio problem

Everything I’ve described so far depends on something that Formula 1 teams have invested enormous effort in getting right and still frequently get wrong: the radio communication between the pit wall and the cockpit.

The driver has three seconds per corner to receive information, process it, respond. The engineer has a firehose of incoming data and a driver whose attention is 95% on the car.

The teams that communicate well have developed an extreme discipline about what gets said, when, and how. Critical information first. Short sentences. Acknowledged, not assumed. They’ve mapped the moments when the driver can process complex information versus the moments when only immediate tactical instructions work.

They’ve also developed an explicit protocol for the failure mode: what happens when the driver and the pit wall have conflicting reads of the situation. Not who wins the argument — the procedure for surfacing the conflict fast enough that both pieces of information can inform the decision.

In technical organizations, this translation problem is almost universally unaddressed.

The CTO has the system’s technical reality in his head in a form that took years to develop. The CEO has the business context and the strategic constraints. These two people meet for ninety minutes per week in a format designed for status updates, not for surfacing the kind of complex, ambiguous, partially-formed information that would actually be useful for joint decision-making.

The radio doesn’t work. Not because either party is failing. Because nobody designed the communication protocol for the actual content that needs to travel across it.

What I do from the pit wall

I want to be direct about what role I’m describing here, because it’s easy to misread.

The pit wall doesn’t drive the car better than the driver. It doesn’t have better instincts, better physical feedback, better situational awareness in the cockpit. What it has is a different vantage point — one that the driver structurally cannot occupy while driving.

When I work with a technical organization, I’m not bringing superior technical judgment to the engineers who’ve been inside the system for three years. They know things about that system that I will never know.

What I bring is the view from the pit wall. The ability to read the full picture from a position outside the cockpit. The ability to see what the data is beginning to show before the situation becomes unambiguous. The ability to name what the telemetry is actually measuring versus what the story about the telemetry is claiming.

And critically: no stake in the decision. No architecture to defend. No team to protect. No political position to maintain.

The driver who knows the car is wrong, but who can’t say so clearly because the team has six months invested in the current setup, needs someone on the pit wall who will read the lap times without that attachment.

Most technical organizations don’t have that position filled. Not because they lack competent people. Because everyone is in the cockpit.


The pattern detection that lets you see problems forming before thresholds break is in Looking Inside the Walls. The decision paralysis that keeps cars in the wrong tire window is Pekin Express.