The Gold Of Microtests: The Value

Part of the 3-part series The Gold of Microtests

Okay, twice now I’ve intro’d this question about the value of m-proof, the rather trivial proof microtests give: what the geek said is what the computer heard is what the geek wanted.

Time to just deal with it, eh?

Remember that all of the value that comes from microtests comes before we’re done with the program. As we’ll see, some of it comes from the fact that the artifacts — the microtests — exist at all, and some of it comes from the fact that making them changes our "way".

The microtests have lots of little impacts. In no particular order then, these are the ones I think I see.

Before I create them, I have to make them doable by me. Doing so is not free, and it imposes a bunch of constraints that limit my choices. Each of those constraints has an impact.

Impact: microtests are small and fast and precise, so what they can test must be correspondingly small and fast and precise. In order to even start writing the test, I have to shape the thing being tested so that it’s testable in this way. Some cases will help make this clear.

I don’t hit databases in microtests unless I am specifically testing a query at the level of SQL. Databases aren’t small or fast (or easily controllable; we’ll come to that). That means that if a piece of logic is interesting and depends on a query’s results, I have to shape the invocation so that I can call the logic without calling the query.

That same pattern is ubiquitous in microtesting, especially in the bog-standard IT world, w/ its database at the bottom and its HTML transport at the top. That is, I also only write test-through-UI microtests when that UI code is invocable w/o a backend.
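
Here’s roughly what that shaping can look like. This is just a sketch in Java with JUnit 5, and the names in it (DiscountPolicy, discountedTotal, and the numbers) are invented for illustration: the interesting logic takes plain values, so the microtest never goes anywhere near the query that produces those values in production.

```java
import java.util.List;

// Production code: the interesting logic is a plain method over plain
// values. The SQL that produces those values in production lives
// elsewhere and is never invoked here.
class DiscountPolicy {
    // Hypothetical rule: 10% off any order whose total exceeds 100.00.
    double discountedTotal(List<Double> lineItemPrices) {
        double total = lineItemPrices.stream()
                                     .mapToDouble(Double::doubleValue)
                                     .sum();
        return total > 100.00 ? total * 0.90 : total;
    }
}

// Microtest: feeds the logic in-memory values. No database anywhere.
class DiscountPolicyTest {
    @org.junit.jupiter.api.Test
    void largeOrdersGetTenPercentOff() {
        DiscountPolicy policy = new DiscountPolicy();
        org.junit.jupiter.api.Assertions.assertEquals(
                108.00, policy.discountedTotal(List.of(60.00, 60.00)), 0.001);
    }
}
```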

Impact: microtests must absolutely control the input values and access the output values. That means, to write them, again, I must carefully isolate inputs & outputs rather than binding it all together.

If my outputs are actually stochastic, for instance based on a PRNG, I have to own that PRNG’s behavior to microtest them. There is no way around it. Which means I have to be able to pass in the PRNG I want, not just instantiate one inline or use a global call.
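
One way that tends to land, again sketched in Java with invented names (Shuffler, pickIndex): the Random comes in through the constructor, so the microtest can hand in a seeded one and own every "random" value.

```java
import java.util.Random;

// Production code: the Random is handed in, not new'd inline or pulled
// from a global, so the caller decides how "random" it really is.
class Shuffler {
    private final Random random;

    Shuffler(Random random) {
        this.random = random;
    }

    // Picks an index into a deck of the given size.
    int pickIndex(int deckSize) {
        return random.nextInt(deckSize);
    }
}

// Microtest: a seeded Random makes the "stochastic" output repeatable.
class ShufflerTest {
    @org.junit.jupiter.api.Test
    void sameSeedGivesSameIndex() {
        int first = new Shuffler(new Random(42L)).pickIndex(52);
        int second = new Shuffler(new Random(42L)).pickIndex(52);
        org.junit.jupiter.api.Assertions.assertEquals(first, second);
    }
}
```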

If a microtest makes a call to a framework or library that I can’t "just call", I have to isolate the code I want to test from that call.
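
One common way to get that isolation, sketched once more with invented names (TimeSource, SessionPolicy): put a thin interface I own between my logic and the call I can’t make, and hand the microtest a fake.

```java
// A thin seam I own. In production its implementation delegates to the
// framework or library call I can't "just call" from a microtest.
interface TimeSource {
    long nowMillis();
}

// Production code depends on the seam, never on the framework directly.
class SessionPolicy {
    private final TimeSource time;
    private final long startedAtMillis;

    SessionPolicy(TimeSource time, long startedAtMillis) {
        this.time = time;
        this.startedAtMillis = startedAtMillis;
    }

    boolean isExpired(long timeoutMillis) {
        return time.nowMillis() - startedAtMillis > timeoutMillis;
    }
}

// Microtest: a fake "now" I control completely. No framework in sight.
class SessionPolicyTest {
    @org.junit.jupiter.api.Test
    void sessionExpiresAfterTimeout() {
        TimeSource fixedNow = () -> 10_000L;   // pretend "now" is ten seconds in
        SessionPolicy session = new SessionPolicy(fixedNow, 0L);
        org.junit.jupiter.api.Assertions.assertTrue(session.isExpired(5_000L));
    }
}
```

In production the real TimeSource is a one-liner that forwards to whatever the framework or System.currentTimeMillis() gives you; the microtest never has to care.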

So the gist of these impacts is just this: they change the shape of the code I am about to write.

(This is the steering premise from my underplayed premises stuff, here, http://geepawhill.org/five-underplayed-premises-of-tdd-2).

There are other similar impacts. What they add up to is a code base, a shipping code base, that has a different shape than if I did not write the microtests.

Because of m-proof, my TDD’d code will have smaller classes, smaller methods, more immutability, far higher collaboration. It will have fewer inline "new"s, more very-small interfaces, far more delegation. It will tend towards pluggability.

The impact of all of these is huge. Many of us believe that the combined impact is a remarkable drive towards what we now regard and teach as "generic software design principles", stuff like SOLID, or the DRY concept.

I think of it this way: I can remember all the principles, study design, master my patterns, and so on, or I can just make sure all the branches and calculations in my code are microtested.

Either way, I get the effect I wanted.

The microtests, and their dumb little m-proofs, aren’t done adding value yet. Far from it.

Everything we have so far mentioned would be important and valuable even if all code were write-right, write-once, and write-solo.

Write-right meaning "we never make a mistake in our typing". Write-once meaning "we never change code we’ve already shipped". Write-solo meaning "we never work with other people’s code".

But in fact, all three of those are completely bogus propositions in the world of a professional software geek.

(Sadly, they’re not always considered bogus in the mind of a professional software geek. That’s why I’m out here doing this.)

And here’s where the microtests and the dinky m-proofs really shine. When we make mistakes, when we change things we’ve shipped, when we work with other people, the m-proofs are invaluable. We’ve gone from useless dust to interesting iron already; now we approach gold.

Impact: the microtests force one person to say the same thing in two different ways, once in the code, and once in the microtest. The number of — technical term here — dumb-assed defects this catches is — technical term here — gigundous.
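
A toy illustration of saying it twice, with names invented for the purpose (AgeGate, mayEnter): the rule lives once in the code and once in the test, and a dumb-assed typo in either one turns the bar red.

```java
// Production code: the rule stated once, in code.
class AgeGate {
    boolean mayEnter(int age) {
        return age >= 18;
    }
}

// Microtest: the same rule stated a second time, differently. If a typo
// like ">" instead of ">=" ever sneaks into mayEnter, this goes red.
class AgeGateTest {
    @org.junit.jupiter.api.Test
    void eighteenYearOldsMayEnter() {
        org.junit.jupiter.api.Assertions.assertTrue(new AgeGate().mayEnter(18));
    }
}
```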

It’s nearly impossible to grasp this without experiencing it IRL. We might notice ourselves making dumb-assed mistakes all day long, but it is incredibly hard for the human mind to really see how many of them we make or how much time making them wastes.

Microtests kill those.

Impact: microtests in shipping code that I’m about to change mean that I cannot possibly break what is shipping without discovering it before I push.

This is easier to grasp w/o experience, but also easy to underrate value-wise. Think about your defect lists. How many of those shipping defects were in code that was working just fine last month? How many were the result of fixes? Or fix-fixes, or fix-fix-fixes?

Microtests don’t kill this kind of thing as well as they kill dumbassedness, but they do dramatically improve your defect rate.

(Aside, in a muse that’s already long: my first big success converting a victim to TDD came from a fix he and I put into microtested code. It made our new test green and broke 40-odd old tests. We went for a smoke. Gary said to me: "Huh. I guess this TDD shit works, doesn’t it?")

Impact: microtests provide a kind of executable spec — a document you can run — for any code you either have never seen or do not remember. The perfect answer for "how do I use this thing?" is "watch how its tests use it." Two reasons this matters.

First, because the modern geek is a collaborator something like 95% of the time. The size of modern applications requires this. Working with other people’s code is critical to our ability to ship. And there’s another reason related to size.

Second, because modern applications are unbelievably large and intricate. A typical boring database-to-web app uses five formal languages and has dozens of modules, some of which have hundreds or thousands of APIs.

It is flat impossible for one person to hold it all in their head.

This means that I am constantly in the situation where I have to work with stuff — even stuff I wrote — that I do not know. You can laugh in my case cuz I’m elderly, and that can include code I wrote yesterday morning. But it’s true for all of us.

This "executable spec" impact is a tremendous force-multiplier. It allows me to pick up a new thing and use it far more quickly than a written spec, or, more commonly, random guessing based on API name.

And if I have a question anyway about how something works? If it’s microtested, it’s microtestable: if the test I’m curious about doesn’t exist, I can roll it myself and find out the answer.
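
Here’s what rolling one myself can look like. It’s a probe against plain old String.split, so nothing here is invented except the test class and method names: instead of guessing what split does with trailing empty fields, I ask the test runner.

```java
// A "roll it myself" probe: I wasn't sure whether String.split keeps
// trailing empty fields, so I ask the test runner instead of guessing.
class SplitProbeTest {
    @org.junit.jupiter.api.Test
    void splitDropsTrailingEmptyFieldsByDefault() {
        String[] parts = "a,b,,".split(",");
        org.junit.jupiter.api.Assertions.assertEquals(2, parts.length);
        org.junit.jupiter.api.Assertions.assertEquals("a", parts[0]);
        org.junit.jupiter.api.Assertions.assertEquals("b", parts[1]);
    }
}
```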

One more impact, then I’mo cut you loose for the day.

Impact: microtests run in a separate app that is not the shipping app. Anything I do with or through them is almost certain to be faster, cleaner, more stable, more accessible, and more precise.

I watch geeks in non-TDD shops do a thing all day long: compile. Launch. Wait. Wait. Wait. Type 3 characters, click twice. Wait. Wait. Wait. Remember to change the config. Wait. Wait. Wait.

Once you notice this, you realize the amount of "dead time" is just staggering.

Microtesting doesn’t eliminate running the shipping app; that’d be a ludicrous claim. But an overwhelming majority of the time, the small changes we are making alter aspects of that app that constitute less than 1/10,000th of what it can do.

And in any case, if my little change doesn’t even do what I wanted it to do, what’s the point of firing up the app to discover that it doesn’t have the downstream effect I wanted it to have?

I just have to wrap this monster pretty quick. One thought and one foreshadowing, and we’re on our way…

Thought: like so much of this movement’s style, microtests and TDD aren’t meant to make hard problems easy. They’re meant to make easy problems easy. That sounds like small change, but it’s not, cuz the old-school way makes all problems, easy or hard, cost the same amount.

Foreshadowing: I still haven’t connected all this to the triad fully. If you buy my crazy theories, the movement is founded on expanding the focus past "the made" to "the making" and "the maker".

We’ll get there, but I may stop for a day or two and do something else.

Thank you for reading this great baggy muse.

Have a strange Saturday night!
