Microtest TDD: More Definition


What’s a microtest, anyway?

I write a ton of tests as I’m building code, and the majority of these are a particular kind or style of test, the microtest kind.

Let’s talk today about what I mean by that; later, we’ll talk about how it turns out to help me so much.

A microtest is a small, fast, precise, easy-to-invoke/read/write/debug chunk of code that exercises a single particular path through another chunk of code containing the branching logic from my shipping app.

Microtests are first-class source code, maintained and kept side-by-side in the vault with the source code that makes up the shipping app. They are maintained with the same level of attention, the same standard of excellence, as the production source is.

Microtests are collected together and are invoked in a separate app: think of having two mains in one source base, tho it’s usually done implicitly with tooling. They don’t run inside or outside the shipping app; they run in a testrunner app.

The connection between test and production in the microtest world is a source connection. The production source stands alone, and the microtesting source depends on it. It’s not a binary dependency, but a source-textual one.

Here’s a snippet that gives you the source-dependency sense, and shows off another few features of a typical microtest.

[Snippet: a microtest exercising an AspectRatio class]
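
A minimal sketch of the same shape, in Java with JUnit, assuming a hypothetical AspectRatio class in the production source (the names and API here are illustrative, not the original snippet):

    // The test source depends on the production source: AspectRatio is the shipping
    // class, named and compiled right alongside this test.
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AspectRatioTest {

        // Exercises exactly one path: a 16:9 ratio scaled to a height of 90.
        @Test
        void sixteenByNineAtHeight90HasWidth160() {
            AspectRatio sixteenByNine = new AspectRatio(16, 9);   // plain construction, no frameworks
            assertEquals(160, sixteenByNine.widthForHeight(90));
        }
    }

Notice that the test file names the production class directly, so it compiles against the shipping source itself: change or remove AspectRatio and this test source breaks right along with it.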

The most notable feature of microtests is size, hence the name. "Micro", here, is actually a stand-in word covering several dimensions. Microtests are literally short, they test very tiny branches of my code, they have very few dependencies, they run very quickly.

This "micro-ness" is critical, and it forces very severe constraints on what I test and how I test. The consequences of the constraint ultimately spill over into design considerations. Designs have to be shaped to be microtestable or else they won’t be.

Speed and precision, in particular, dominate most other concerns, especially those that derive from classical "whole unit" or "whole app" approaches.

Microtests don’t prove my app is the right app. They don’t even prove my app works. They prove that tiny parts of it do exactly what I wanted them to do, no more and no less. We’ll talk another time about the economics of this, why it is a successful strategy tho it’s indirect.

With that background, let me enumerate the qualities of normal microtests. I stress, all of this requires experience & judgment, and "never" is a very long time. None of the following are meant as "always/never" rules or statements about border conditions; they’re right down the middle of the idea.

Microtests are small in textual form. Typically under a dozen lines of code, occasionally as many as 20.

Microtests run as a separate app, the testrunner, which is entirely self-contained. They’re not running either inside or outside a running version of the shipping app.

Microtests invoke a tiny portion of the shipping source, most usually a single case of the branching logic in a single method of a single object.

Microtests are neither black-box nor white-box, but a murky but soothing shade of gray. They read as if they were black-box, but they quite often rely on knowledge only someone with the source code under test would have.

Microtests are vault-committed source, normally in the same or identical side-by-side tree structure as the shipping source, and they are coded and maintained to the same high standard as the shipping source is.

Microtests form a gateway-to-commit. That is, I am willing to push my code to head any time all of the microtests in the app run and pass. I am unwilling to push if there are any failing microtests. In some apps, with careful work, this is possibly even a gateway-to-deploy.

Microtests are ridiculously fast, and they have to be. Think in terms of under 10 ms in the majority case. This is because we run them hundreds of times a day.

Microtests don’t, for the most part, use the various frameworks and libraries in the shipping app, unless using them can be done easily and quickly enough. Even when they do use them, they aren’t trying to test them, just using them.

Microtests rarely touch the database, rarely cross thread or process boundaries, rarely touch the filesystem, rarely touch the screen or the printer.

Microtests provide precise feedback about what they are testing, usually through good naming. When they fail, they provide context-aware messages about the assertions that failed, sometimes including explicit contextual text.
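
As a sketch of that, here’s another test method for the hypothetical AspectRatioTest above: the name states the behavior it covers, and the assertion carries explicit contextual text, so a red bar reads like a sentence about what broke rather than a bare expected/actual pair.

        // The name says which behavior is covered; the message says what a failure means.
        @Test
        void squareRatioReportsWidthEqualToHeight() {
            AspectRatio square = new AspectRatio(1, 1);
            assertEquals(75, square.widthForHeight(75),
                    "a 1:1 ratio should report a width equal to the given height");
        }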

Microtests avoid "awkward collaborations", those that would make them slower, or larger, or harder to invoke-read-write-debug, through a variety of faking techniques, where we substitute graceful collaborators for awkward ones.
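
Here’s one sketch of that substitution, assuming a hypothetical Greeter that depends on a UserStore interface whose real implementation hits a database; the microtest hands it a tiny hand-rolled fake instead:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class GreeterTest {

        // Hand-rolled fake: a graceful stand-in for the awkward, database-backed UserStore.
        static class FakeUserStore implements UserStore {
            @Override
            public String displayNameFor(String userId) {
                return "Dana";
            }
        }

        @Test
        void greetsKnownUserByDisplayName() {
            Greeter greeter = new Greeter(new FakeUserStore());   // no database, no connection, no waiting
            assertEquals("Hello, Dana!", greeter.greetingFor("user-17"),
                    "the greeting should use the display name from the store");
        }
    }

The fake answers instantly and entirely in memory, so the test stays micro even though the real collaborator is anything but.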

Microtests form a collection, and all or any subset of that collection is runnable with a single programmer gesture, in any order, with no implicit dependencies, only what is expressly contained in their textual source and its includes/imports.

So that’s it, for now, in terms of feature-listing. If I’ve met my goal at all, and you’re new or new-ish to the concept, you should, at this point, have about a million questions.

That is good.

I can’t address them all just now. But I will explain why it’s good you have them.

First, because this microtest concept is far from obvious in utility. Look, we are a long long way from the traditional theory about "test" and "quality".

Second, because the consequences of working in microtests spread so far beyond simple "just get it to work" issues. Microtest TDD isn’t an addition to my toolkit for solving the programming problem, it’s a whole way of seeing that problem.

Third, microtest TDD isn’t some pre-packaged judgment-free algorithm for programming. Here I want to draw the biggest fattest line in the sand I ever draw: there is no judgment-free algorithm for programming. No matter who is selling it or how good their pitch.

I’m going to wrap this one, and we’ll have a new muse in a day or two looking at the idea from several different angles, starting with the economics probably.

In the meantime, AMA: I love hearing from people, and your responses give me terrific juice for new writing. 🙂



2 thoughts on “Microtest TDD: More Definition”

  1. Great article!

    I’m a bit stricter, i.e. the paragraph “Microtests rarely touch the database…” is too lax. 🙂

    My microtests *never*:
    – touch the database
    – cross thread or process boundaries
    – touch the filesystem, or any other OS periphery (screen, printer, clock, keyboard, mouse)
    – use 3rd party libs

    1. 🙂 I hear you.

      I only have two always/never rules.

      1) *Always* suspect always and never.
      2) *Always* put the cornbread at the bottom of the bowl and the chili over it.
