Refactoring Pro-Tip: Scanning Isn’t Just Fast Reading


When I’m scanning, I’m not just reading fast: I’m feature-detecting, something the unconscious part of me is very good at doing, especially when the code helps.

Part of what makes "the made, the making, and the maker" a guiding theme for me is the idea of "leaning in" to the strengths of my maker body and "leaning out" from its weaknesses.

A trivial case: one reason TDD works so well for me is that the microtests give me a tiny little high every time I make them go from red to green.

(Aside: "red" and "green" are the worst possible choices here for distinct coloration, but c’est la vie.)

That high pleases me, re-energizes me, and resets me for the next red-green cycle. TDD is working with my body here: it takes advantage, during programming, of the way my body works when I’m not programming.
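If "microtest" is an unfamiliar word: I mean a test so small and so fast that the red-to-green flip takes seconds. A minimal sketch in JUnit 5, where the Fraction class and its plus method are invented purely for illustration:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Fraction is invented here, just to show the scale I mean:
// one tiny behavior, a red bar, a few minutes of code, a green bar.
class FractionTest {
    @Test
    void addsFractionsOverACommonDenominator() {
        Fraction sum = new Fraction(1, 4).plus(new Fraction(2, 4));
        assertEquals(new Fraction(3, 4), sum);
    }
}
```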

We’re used to thinking of our visual experience as being quite raw. Our experience is that we "see" more or less what is "there", with very little cooking of the signal. By metaphor, we think of our visual field as a simple & neutral recorder of the light we can detect.

There have been tens of thousands of ingenious experimental demonstrations that this notion of rawness is false. Finding those and elucidating them is practically a whole industry. But I can give you a taste of these in less than ten seconds.

Close one eye, hold your head and your eye still, and quietly notice everything that’s in your visual field. I can’t even speculate about what’s in that field, but I can assert with complete confidence what isn’t in that field: the hole where your eye isn’t receiving light.

No engineer designed your eye. The light-sensitive cells there all have a wire — nerve — to carry the signal. In humans, that wire comes out in front of the cell, inside the rough sphere of your eye. But to be used, it has to go to your brain, outside that sphere. "Whoops."

Those tiny wires join others and get bigger, and join others and get bigger, and finally all exit the sphere together in one place, as a great fancy multi-strand cable we call the optic nerve. And where that cable leaves the eye? There’s no room for receptors.

So if there’s a hole in the receiver, why isn’t there a hole in your visual field? It’s not tiny, this hole: it takes up maybe 1-2% of that field. You could easily notice 2 black boxes inside a 10×10 grid. But there’s no hole in your visual experience.

The pre-conscious part of me elides the hole before the conscious part of me ever gets it. And that’s just one of literally thousands of transformations it does. Far from being raw, my visual experience is massively cooked, through and through, by the time I’m aware of it.

Most of that cooking is called "feature detection". If you go look outside and hold your head still, almost certainly the first thing that hits your consciousness will be something moving. That’s your feature-detection in action. (The benefits of this bias are obvious.)

We have feature-detectors for verticals, for horizontals, for circles, for things-that-might-be-faces. And we have feature-detectors that are temporal, across multiple frames if you will: it’s moving, it’s getting bigger or smaller. The list goes on and on.

When I speak of scannability vs readability, this cooking of the data is what I’m talking about. For me, scanning is pre-conscious feature-detection. Reading is conscious processing of what’s been scanned.
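Here’s one hedged, invented illustration of what I mean by code that helps the scanner. Both versions below compute the same discount (the Order class and its methods are made up); the second gives my feature-detectors something to grab, one rule per line, one shape per rule:

```java
// Invented example: the same discount policy, written two ways.

// Version A: the three rules are buried in ceremony. I have to
// consciously read to find them.
double discountBuried(Order order) {
    double discount;
    if (order.getTotal() > 100.0) {
        discount = 0.10;
    } else if (order.isLoyaltyMember()) {
        discount = 0.05;
    } else {
        discount = 0.0;
    }
    return discount;
}

// Version B: one line per rule, the same shape for every rule.
// The eye picks up the condition/result "columns" before
// conscious reading even starts.
double discountScannable(Order order) {
    if (order.getTotal() > 100.0)  return 0.10;
    if (order.isLoyaltyMember())   return 0.05;
    return 0.0;
}
```

Whether that particular shape helps your scanner the way it helps mine is an open question, and we’ll come back to it.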

Didja ever watch a skilled geek go after a bug in a codebase she knows well? (IF NOT WHY NOT?!??) She’s bouncing here, bouncing there, moving so fast you can’t tell what text she’s looking at before she’s off it. The screen flickers.

She’s scanning, she’s feature-detecting.

You can’t read it, but that’s fine, because she’s not reading it either. She’s letting her body do something it does extremely well: detect features of interest in a rapidly moving frame guided by her context.

Two quick points, then let’s do recess. First, about the need for speculative and experimental gathering of data about what is "better" for scannability. Second, about the universality or lack thereof.

I don’t — I think we don’t — know all the ways the human body might take advantage of feature-detection in code. The ways I understand so far have all come backwards, if you will: reasoning from my repeated experiences back to an underlying theory, not from theory to experience.

That says to me a certain level of tentativeness & adventurousness is gonna be needed. Tentative cuz all I know so far is what I personally have experienced, which is far from everything. Adventurous for the same reason: I feel I have to keep experimenting for some time to come.

And universality? Let me just say this: I have very high confidence that your set of feature-detector-supporters and my set of them will not be identical. Actual humans are actually wired differently from other humans in many interesting ways.

But the odds seem good to me that there will be significant overlap. If we can collaborate well enough, we can find and capitalize on that overlap. That is the idea behind refactoring for scannability.

The upshot: for me, distinguishing scannability from readability in programming situations is about taking advantage of two different bodily mechanisms I already use in non-programming situations, and trying to let my body help me program better.

Aight then. Have an unusual and warming Sunday afternoon!
