What amount of time passes between you saving the file you just changed and you seeing the results of that change?
If that answer is over 10 seconds, you might want to wonder whether you can make it shorter. If that answer is over 100 seconds, please consider making a radical change in your approach.
Thinking is the bottleneck, not typing; we've been over this a bunch. But any waiting you can bypass is waste, every second of it.
I know firmware folks who routinely wait 20 minutes from save to run. Why? Because they are running on the target, and that means sending the image to the target, stashing it in the NVRAM, and rebooting. Every time. Wow, that must be some intricate h/w-dependent rocket science stuff they're working on, eh?
And sometimes the answer is yes. But I'll let you in on a secret: not usually. Usually it's logic that has almost nothing to do with the peculiarities of their h/w target. I'm not joking. HTTP and FTP servers, things like scheduling (calendar scheduling, not multi-task scheduling).
Before we all pile on, tho, maybe we better check ourselves on this.
- Do you have to bounce a server or a service inside a container to see your results? Do you have to switch to a browser? Do you have to log in and step through three pages to get to the right place?
- Do you have to clear the cache or wipe the database, whether by hand or by script? Do you have to go look up stuff in a special place?
- Do you have to edit a config file, or just quick like a bunny bounce to the shell and type in those same 87 characters again (except for the three in the middle)?
And checking the results. Do you do that in the browser, too? Or maybe you study the log output. Or, again, bounce to the shell and tweak 3 of those 87 characters. Of course, maybe you don't check the results at all; one does see that from time to time: "It ran, so it worked." If I don't know within 10 seconds whether my change did what I wanted, I get grouchy. Sometimes I suffer it for an hour, cuz the change I'm making is the one that will free me to get the faster feedback. But generally, long wait-states between change and results turn me into Bad GeePaw.
The answer, nearly every time, is to use one of several techniques to move the code-you-want-to-test from a place where that takes time to a place where it's instant. Sometimes doing this is a heavy investment the first time. Most non-TDD'd code is structured in one of several ways that make this difficult. Demeter wildness, intermixing awkward collaboration with basic business logic, god objects, dead guy platonic forms, over-generalization, primitive obsession: all of these work against readily pulling your code apart to test it.
The trick is always one step at a time. Remember the conversation the other day about supplier and supplied? Start there.
You’re either adding a new function or changing an old one, yeah? If you’re greenfield in your tiny local function’s context, try this: put it in a new object that only does that tiny thing. It sits by itself. Declare one in a test and call it. Lightning speed.
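A minimal sketch of the greenfield move, in Python with invented names (the `SlotPicker` class and its calendar-slot logic are hypothetical, just stand-ins for "that tiny thing"): the new behavior lives in an object that does only that, so a test can declare one and call it with no server, no target hardware, no I/O.

```python
from datetime import datetime, timedelta


class SlotPicker:
    """Owns one tiny piece of logic and nothing else. Sits by itself."""

    def __init__(self, slot_minutes: int):
        self.slot = timedelta(minutes=slot_minutes)

    def next_slot(self, after: datetime) -> datetime:
        # Round `after` up to the next slot boundary within its day.
        midnight = datetime(after.year, after.month, after.day)
        elapsed = after - midnight
        slots = -(-elapsed // self.slot)  # ceiling division on timedeltas
        return midnight + slots * self.slot


# Declare one in a test and call it. Lightning speed.
def test_next_slot_rounds_up():
    picker = SlotPicker(slot_minutes=15)
    assert picker.next_slot(datetime(2023, 5, 1, 9, 7)) == datetime(2023, 5, 1, 9, 15)
```

The point isn't the arithmetic; it's that the save-to-results gap for this class is however long your test runner takes, measured in milliseconds.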
If you're brownfield, you can do the same thing, but it's harder. The key is to first extract exactly as much of the brownfield code as you need into its own method. Then, generally, pass it arguments rather than letting it read fields. Now rework it for supplier/supplied changes. Finally, once again, pull it out to a new class. Note: you don't have to change the existing API to do this. That API can new the object and call it, or, if it's of broad use, new it in the calling site's constructor.
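Here's a sketch of where that brownfield sequence lands, with made-up names (`Backoff`, `Uploader`, and the retry-delay logic are all hypothetical): the logic has been extracted, fed arguments instead of fields, and pulled to a new class, while the old API news the object in its constructor and keeps its signature.

```python
# The extracted logic, now a class that sits by itself: testable in
# microseconds, no connection or target hardware required.
class Backoff:
    def __init__(self, base_delay: float):
        self.base_delay = base_delay

    def delay_for(self, attempts: int) -> float:
        # Takes its inputs as arguments, not by reading Uploader's fields.
        return self.base_delay * (2 ** attempts)


# The existing API is unchanged: Uploader news a Backoff in its
# constructor and delegates to it, so callers never notice the rework.
class Uploader:
    def __init__(self, connection, base_delay: float = 1.0):
        self.connection = connection
        self.backoff = Backoff(base_delay)
        self.attempts = 0

    def send(self, data):
        delay = self.backoff.delay_for(self.attempts)
        self.attempts += 1
        return self.connection.send(data, delay)
```

Now the interesting logic gets fast tests against `Backoff` directly, and `Uploader` shrinks toward the awkward collaboration it can't avoid.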
The hardest problem you'll meet: high fan-out methods or classes, the ones that depend directly on every other package in your universe. For these, well, don't start there. 🙂 Legacy rework is not for sissies.