Underplayed: The Correlation Premise In Depth

Part of the nine-part series Underplayed Premises

Among the five underplayed premises of TDD is the correlation premise.

The correlation premise says "internal quality and productivity are directly correlated". Confusion and misunderstanding around this premise abound, so it’s worth taking some time to work it out in detail.

When we say internal quality (IQ) and productivity are directly correlated, we mean that they go up together and they, sadly, go down together. Their trend lines are inextricably linked. The first thing we have to do to parse this is make sense of internal (IQ) vs external (EQ) qualities, because a lot of the confusion starts right there.

External quality (EQ) includes any attribute of a program that a user can experience. Is it correct? EQ. Is it useful? EQ. Is it fast, pretty, graceful, stable, complete? All of this is EQ.

On the other hand, internal quality (IQ) includes any attribute of a program that only a coder can experience. IQ is visible only in the source, and implicitly only to someone who can grasp and manipulate that source. Is it minimal? IQ. Is it tested? IQ. Is it well-named, well-factored, well-jointed, easy to read? These are all IQ.

The correlation premise says that you can’t trade away IQ to get higher productivity. And there’s the first source of confusion: you can trade away EQ for higher productivity.

This is obviously the case: under nearly all circumstances, it takes more time to make a program faster, or prettier, or to handle unusual corner cases, or to be exact instead of approximate. If you don’t care about rare corner cases, that’s less thinking I have to do and less code I have to write. That’s less time, and I can spend that time usefully on other, ideally more important, value. So EQ trades pretty easily for productivity.

IQ doesn’t work that way. The reason is that software development productivity is, at its base, exactly about humans changing code correctly.

If we sat down and wrote what economists call the production function for software development, we’d get some massive complex polynomial. Plug in the variables, give that crank a turn, and get out of it how much production you get in a day or week or year. The three largest terms of that polynomial are these: the skill of the code-changer, the fundamental complexity of the domain, and the "changeability" of the code that has to be changed.

The code-changer’s skill can be improved, but that takes experience, and the massive demand for software means that every year there are proportionally fewer and fewer experienced code-changers available to serve that demand.

The fundamental domain complexity can also change, usually in the form of a dramatic paradigm shift. Sadly, this is both wildly unpredictable and almost entirely outside of our ready control.

What about the changeability of the starting code? Now, my friends, we are cooking with gas. Because there’s another name for "changeability of the starting code": it’s called "internal quality". All of the IQ things, all of the design principles from the last five decades of geekery, all of the TDD/refactoring things: every single one of them is about making life easier for humans changing code correctly.

Internal quality can’t be traded away for productivity: lowering IQ lowers the changeability term, and with it the output of the whole function. And of the top three terms, changeability is the most malleable, the one most within our ready control.
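To make the shape of that claim concrete, here’s a deliberately toy sketch in Python. The function, its names, and its numbers are all invented for illustration; the real production function is a far messier polynomial. The point is only the shape: when the changeability term drops, output drops with it, regardless of skill.

```python
# A toy production function, invented for illustration only.
# The real thing is a massive, complex polynomial; this keeps just
# the shape: output rises with skill and changeability, and falls
# as the domain gets more complex.
def production(skill: float, domain_complexity: float, changeability: float) -> float:
    """Value shipped per unit time, in arbitrary units."""
    return (skill * changeability) / domain_complexity

# Halving changeability halves output, no matter how skilled the team:
print(production(skill=8.0, domain_complexity=2.0, changeability=1.0))  # 4.0
print(production(skill=8.0, domain_complexity=2.0, changeability=0.5))  # 2.0
```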

There are two more confusions that hamper the newcomer’s thinking about all this, and our trade is swamped with newcomers, so we need to call them out.

The first is the conflation of internal quality with the word "clean", its variants and cognates, and the fifth column of overtones and metaphors that come with it. I strongly oppose this usage, so much so that when someone speaks it I am often almost completely blocked from carrying the discourse further until it’s resolved. When I make function-preserving alterations to code, when I refactor, in other words, I am directly and simply maintaining and increasing the internal quality term of my development production function.

When I do it pre-emptively and habitually, I’m doing it from the mature recognition that there are patterns of high internal quality, and that following those patterns makes me more productive. When I do it immediately prior to touching some code to implement a new story, I am doing it because implementing that story will be easier, faster, in a high-IQ codebase, and because refactoring is easier, faster, than interpolating new function into low-IQ code.
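To make "function-preserving alteration" concrete, here’s a tiny sketch, with names I invented for the example. The before and after compute exactly the same thing; the only change is to the internal quality term.

```python
# Before: correct, but every future change starts with decoding it.
def calc(d, r, n):
    return d * (1 + r) ** n

# After a function-preserving rename: same inputs, same outputs,
# but the next human to change it can grasp it at a glance.
def compound_balance(principal: float, rate_per_period: float, periods: int) -> float:
    """Balance after compounding principal at rate_per_period for periods."""
    return principal * (1 + rate_per_period) ** periods

assert calc(100.0, 0.05, 3) == compound_balance(100.0, 0.05, 3)
```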

I am not "cleaning", because that word is deep-laden with overtones and metaphors that don’t reflect what, how, or why I am refactoring. Most notably, it comes with ideas of optionality and morality, neither of which are present in my idea of refactoring. I won’t belabor the moral thing now, except to point out to non-native speakers of english that whole generations of us were raised on the adage "cleanliness is next to godliness", and for those outside of the judeo-christian tradition, sloth is one of the seven deadly sins.

As for optionality, it’s really a form of delayability ("delay forever"), and that’s the third big confusion around the correlation premise. Here we go. The argument is made, from the "clean" stance but also from entirely separate impulses, that we do have to refactor for high internal quality, but we don’t have to do it now. Assuming a perfect golden codebase, almost any new value I add will make it less golden. (This is a complicated thing, and I don’t want to explain it now, so I’m asking you to trust me on this.)

The "eventually" argument admits that sooner or later we must "re-golden" the base, but that we can delay doing so, because the cost of re-goldening, refactoring, need not be borne until we have to add more value to the code we just added. This is an argument about how soon low-IQ kicks in during the production function. If it isn’t immediate, we can stall, right? And there’s the confusion. You see, it is immediate. Because the lowering of IQ affects the production function so quickly, stalling just isn’t a viable strategy.

You doubt me. That’s okay, join the club. (No, back of the line, please. We’ll announce your number when it’s your turn.) Let’s take an easy example.

Refactorers are obsessed with names and naming. How soon does a weak name for a variable, method, or class start affecting your production function?

Well, here’s the thing: in programming, the only reason to introduce a variable, method, or class, and to name it, is so you can begin using it. Whatever the cost of a weak name is, you start paying it the very second time you enter it. And remember the thinking knee, and the clever workaround called "chunking"? Names and their attendant metaphors dramatically affect our ability to chunk.

If you’re old-school enough to have ever had to change computer-generated code, yacc output for instance, you’ll know something else: it doesn’t take very many weak names to render code virtually un-think-aboutable. Remember that internal quality is all about supporting humans changing code correctly? Anything un-think-aboutable doesn’t just slow down the changing of code. It stops it dead.
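If you’ve never had the pleasure, here’s a flavor of it. This is hand-written Python mimicking the style of generated code, not actual yacc output, but the effect on thinkability is the same.

```python
# Mimicking generated-code naming; nothing here is wrong,
# it is merely un-think-aboutable.
def s17(t3, v2):
    if t3[0] == 4:
        return v2[t3[1]] + 1
    return 0

# The identical logic with chunkable names reads in one pass:
IDENTIFIER = 4

def next_count_for(token, counts):
    kind, symbol = token
    if kind == IDENTIFIER:
        return counts[symbol] + 1
    return 0
```

Every s17 and t3 costs a lookup, and past a handful of them chunking collapses; the code is no longer changeable by a human at any reasonable speed.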

So there ya go. The correlation premise says "internal quality and productivity are directly correlated." You can’t trade one away to get more of the other. They go up together, and, sadly, they go down together.
