Refactoring: Keep It Running

A key value for those who take up the change-harvesting approach: "keep it running". This is a direct consequence of the properties human, local, oriented, taken, and iterative, and it argues against many finish-line efficiency approaches.

Think of a change as a point A, with an arrow coming out of it and ending at a point B. At the two points, we have a running system, but along the arrow, we don’t: our change is in flight.

The change-harvester seeks to keep those arrows as short as possible.

I say seeks, but that’s a little weak, actually: the change-harvester obsesses over the length of that arrow, and will do a whole lot of things to keep it short, including purposefully stepping away from an idealized straight line towards a target on the horizon.

I see organizations sometimes try to make a change happen by taking the system entirely apart, putting the pieces on oily newspapers spread all over the living room floor, and then putting it back together again.

There’s a whole conceptual cluster standing behind this approach, involving premises like "we know exactly what needs to be done", "we know exactly how long it will take", "ultimately it will mean small changes to every part", "we’ve stored up enough value to do it", and so on.

These ideas add up to "it’s cheaper to take the system apart all at once and put it back together all at once." When we dig into that analysis, though, what we usually see is how many of the actual costs of working this way are being ignored.

Consider: at point A, the system is producing value. It’s not optimal value, of course, or we wouldn’t be aiming at a point Z on the horizon. But it’s still value. Spread out on the living room floor, your motorcycle isn’t producing value A or value Z. It’s producing zero.

This means we have to store up enough value to last us through the arrow. Because storing up value is hard, we’ll want to know just how much value it’s going to take, and for how long, and we’ll want to have a very high confidence in those numbers.

Consider: while we’re on the arrow, we can’t do anything but work on the arrow. We can’t interrupt getting the system back to "running", because the value clock is ticking.

This means that we’re not "steerable". We can’t respond effectively to new input, because we’re on the arrow. If marketing scores a new massive contract based on only three small changes, we can’t do them: we’re on the arrow with a broken system on our hands.

Consider: the cost of being wrong about any part of our decision-making calculus is significant. If we discover we’re wrong, we have only two choices: throw the work away, or come up with a solution that keeps us on the arrow.

If we throw the work away, we’ve lost all the value we put into it. If we come up with a solution, we’ll be doing so under enormous pressure: psychological, temporal, and financial.

The long arrow raises the stakes dramatically — non-linearly — over the short arrows. Those higher stakes force us to a level of caution and a standard of certainty that slow us way down. Further, such major stakes inevitably embroil us in complex politics.

So what’s the change-harvester’s alternative? It holds that it’s more important that the system be continuously running, as we make our way towards Z, than almost anything else, including whether any given local change lies on the direct idealized line to Z.

We call this "local, oriented". Each arrow is short, and we value its brevity enormously. We look to point Z, to be sure, and we prefer that each short arrow moves us Z-ward, but if shortness and Z-ward-ness come into conflict, we pick shortness.
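The "short arrow" idea can be sketched in code. This is a hypothetical toy, not an example from the post: instead of rewriting a function in one long arrow, we take two tiny behavior-preserving steps, and at the end of each one the system still runs and the checks still pass.

```python
# A minimal sketch of "short arrows": each step is a small, behavior-
# preserving change, and the system "runs" after every one of them.
# All names here are illustrative, not from the original post.

# Point A: working but tangled.
def report_v0(orders):
    total = 0
    for o in orders:
        total += o["qty"] * o["price"]
    return "Total: " + str(total)

# Arrow 1: extract the calculation into a helper. Still running.
def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders)

def report_v1(orders):
    return "Total: " + str(order_total(orders))

# Arrow 2: tidy the formatting, reusing the helper. Still running.
def report_v2(orders):
    return f"Total: {order_total(orders)}"

# After every arrow, the observable behavior is unchanged:
orders = [{"qty": 2, "price": 3}, {"qty": 1, "price": 4}]
assert report_v0(orders) == report_v1(orders) == report_v2(orders) == "Total: 10"
```

Each arrow here is trivially short on its own, and that is the point: at no moment between A and the final version is the motorcycle in pieces on the floor.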

This aggravates some folks, because of their very high confidence in their ability to work through all that calculus we went over above. But the change-harvester offers three reasons why that confidence just isn’t justified.

First, the calculus-confidence is unjustified by looking at actual results in the actual field. Half of all large projects run both over-budget and over-schedule. Both. Yourdon popularized the phrase "death march" as a result of this, and most of you have seen or been in one.

Second, the calculus-confidence is over-simple. It seems strange to say, because it’s incredibly complex, but it’s true. It ignores a variety of important factors, including not least the human value of rhythm: short arrows feed our humans much more effectively than stored value.

Third, the calculus-confidence is over-mechanical. All the systems we try this in involve lots of people, and even if "code" isn’t complex-adaptive, "coding" certainly is. The assumptions of linear error are just not viable in complex-adaptive systems.

So, in change-harvesting, we don’t just prefer "small arrows" of change, we insist on them. We’re not blind to point Z out there on the horizon, but we optimize for "keep it running" over "get to Z".

In the next few days, maybe we’ll look at some real cases. I’m in the middle of a bunch of short-arrow Z-ward changes in the kontentment app.

Meanwhile, have a lovely Sunday full of short arrows.

And, uhhh, stay home, yeah? I need y’all healthy.
