Optimize for Our Humans

Leading Technical Change is a small, live, remote seminar aimed squarely at a single topic: Real Change in the Real World.

A new cohort is open now: March 11, 12, 14, & 15, 10am to noon Eastern (UTC-5).

There are just six seats available, so we can drill down on the actual situations in which you’re seeking change.


To give you some of the flavor, let’s talk today about one of the key planks in the LTC trio of strategies:

Optimize For Our Humans (OFOH)

In LTC, we start by working through a theory of change based on the concept of cognitive frames, then offer three abstract strategies.

  1. Take Many More Much Smaller Steps (MMMSS)
  2. Optimize For Our Humans (OFOH)
  3. Make Change Normal (MCN)

The three concepts interlock: when we’re actually drilling into techniques and practice, we’ll be looking for combinations that achieve all three strategies at once. They are powerful abstractions, but quite high-level.

I have written quite a bit about MMMSS; see the link below. The OFOH & MCN material is on the site, too, but in a much less orderly fashion. Today, I want to start to change that for OFOH.


Optimize For Our Humans

This phrase means that, when we’re seeking change, the first and foremost factor we have to consider is the humans who will actually be doing the new behavior our change requires.

If this seems dreadfully obvious to you, well, you are in the minority.

And if it seems dreadfully simple to you, you haven’t given it enough thought.

In the many years I’ve been doing change in the geek trades, I’ve learned that we will never get the change we want if it isn’t fitted to our humans.

Why is this so important?

Imagine a system whose parts are rules, artifacts, procedures, machines, and people. What do the people do?

Well. They run the machines, they make and move the artifacts, they interpret the rules, they decide when and how to apply them, and they work the procedures.

Their judgement, decisions, and actions are a kind of universal medium through which all the other parts connect, or fail to connect. If that medium isn’t working well, that system won’t work well.

Two cases from the real world can make this a little clearer.

Did you know there is a thing called a "work-to-rule" protest or strike?

In work-to-rule, what happens is that the humans in the system refuse to do anything except exactly what the official rules and procedures say.


The result is inevitable: a dramatic and immediate drop in productivity by every measure.

Work-to-rule strikes are incredibly effective, and dreaded by C-suites everywhere.

The reason they’re so effective is that it isn’t the machines, the rules, the procedures, and the artifacts that make the business run.

It’s the universal medium that connects them together: the humans and their judgement.

Remove that linkage and work grinds to a complete halt.

(Slavery is odious for many reasons, but it’s also wildly inefficient: slaves exist in a kind of permanent, continuous work-to-rule strike.)

Another angle on the same issue:

Sidney Dekker, in his Field Guide to Understanding "Human Error", effectively inaugurated what has come to be called Safety 2.0. The book is very readable and direct, and I highly recommend it.

Dekker’s an accident investigator, specializing primarily in airplane-related accidents and near-accidents.

He points out that the overwhelming majority of accident investigations reach their endpoint when they determine that a particular mistake by a particular human caused the accident.
He asks why.

Why do such highly-trained and deeply skilled operators make these errors, often very obvious ones, often in complete violation of the rules and the procedures?

(He also asks why the investigations always stop at that point, when, often enough, the problem is just getting interesting and possibly revelatory.)

And he concludes that it’s because the operators are actively solving or trying to solve problems the rules and procedures simply cannot solve by themselves.

Does that sound similar to the work-to-rule situation?

It does: once again, what we’re encountering here is that the function of the humans in a mixed-human-machine-artifact-rule-procedure environment is to serve as the universal linkage between all the other parts.

These systems don’t work if the humans aren’t making judgements, observations, guesses, and decisions in order to provide that linkage.

Dekker goes on to point out that, in many of these situations, the operators are being called on to "satisfice" competing priorities, to run two conflicting procedures simultaneously, or to meet unspoken but real and serious demands.

He also says they are very good at doing this.

He argues, and this is the basecamp of Safety 2.0, that far from being the least safe part, the humans in these complex human-machine-rule-procedure-artifact systems are inevitably found to be the primary source of safety.

We like to tell ourselves that the non-human parts of our system are the critical factor.

When our org is going well, we say it’s because our rules, artifacts, procedures, and machines are arranged in the best way. When it isn’t going well, we say it’s because those parts are arranged incorrectly, and we need to re-arrange them.

Hence the endless flood of brand-name software development "methodologies", which I feel sure exist in other trades, too, but are absolutely ubiquitous in ours.
Oh. Did I mention my own brand-name methodology, System Utilizing Complex Knowledge-Efficient Reasoning, or SUCKER?
Call now, operators are standing by.

In fact, the systems we’re trying to change usually are not bound by their non-human parts, but by their humans. That is, it’s not the system that’s so very hard to change, it’s the people whose behavior keeps that system afloat.

And if we’re to change those systems, well, we have to find ways to implement change that honor the critical needs and skills of our humans.
Optimize For Our Humans is the abstract advice that reminds us just how important the people really are in the systems we’re trying to change.

We’re not done, far from it. But I hope this gives you a good sketch of the concept.

In coming days, we’ll take up the next piece, which is a conversation about all humans in the abstract and our humans in particular, so we can get some clues about what optimizing for humans might actually mean.

Leading Technical Change is a small, live, remote seminar aimed not at what change we should make, but at how best we can make it.

A new cohort has opened: March 11, 12, 14, & 15, 10am to noon Eastern (UTC-5).

6 attendees, 4 sessions, 2 hours each, and just one topic: Real Change in the Real World.

There are just six seats available, so we can drill down on the actual situations in which you want to create and nurture change.


If you got this far, boost, please!
