I want to talk about value today, and especially want to consider an idea I call multivalence, which seems quite central to putting the change-harvesting ideas to work.
I recently chanced across a timeline convo that asked what we should call things it would be good to achieve that weren’t directly visible to the customer.
“What do we call valuable things that aren’t ‘value’?”
The answer I gave: “value”.
Meanwhile, in another part of the forest, a few of us were schmoozing about the fairly straightforward traditional resistance to “pairing”, and where the backing ideas for that resistance come from.
The gist of that pairing-resistance looks like this: the stats suggest that the “story point velocity” of pairing is slightly lower than that of soloing. (For the record, there is very little hard data on this. What we have does suggest that, but we don’t have much.)
If we assume that the small data we have is actually valid, this makes it pretty easy to argue that we should not pair. And here’s the thing: to the extent that “value” is defined as “story checked off”, that’s a reasonable case.
In a third case of this kind of reasoning, folks argue that testing after the code is written has the same merit as testing before/while writing the code. I argue against this view, which centers the value of the tests in their textual artifacts rather than their impact.
Finally, in a fourth case, I could argue against writing a muse today. I have a really important story I should be working on, but I’m not, I’m doing this instead.
Life-events I won’t go into have sucker-punched my family this week, and the support role I’m in has me wired & tired & well outside of normalcy. If we value the writing as “content” only, it makes no sense to write this, only to write the other more important “content”.
In all four of these patterns, the glitch in the reasoning is “monovalence”. It is happening because we are defining value as a kind of narrow singularity, a one-dimensional line where we make our decision based on a single attribute of a context.
Monovalence is creating a kind of value “one ring to rule them all”, then either coercing every other type of value into that one ring, or ignoring it when the decision-maker can’t figure out how to coerce it, or, just as commonly, simply forgets to.
In fact, there’s lots of value in the dynamic unity that creates software, value that is not visible to the customer. Some of it is easily coercible, the kind of activity we call “axe sharpening”. Some is less so, things like enabling humans to work at all.
In fact, there’s much more to pairing than just “feature checked off”. That same small dataset that suggests that pairing is inefficient goes on to add, for instance, “has fewer bugs”. And there’s more to it than just that, too:
“stories safely shippable”, “bus (lottery) number”, “employee juice”, “internal quality”, “increased skillset”, and “external quality” are all quite valuable, and all affected by pairing & mobbing.
In fact, much of the value of TDD is not artifactual but operational: it changes the output artifacts, to be sure, but it significantly improves the creation of those artifacts as well.
And in fact, writing this muse is giving me a tiny island of normalcy and respite that will give me renewed calm, stability, and the extra dose of energy I need to do something far more important than “content”: supporting the people I love through their current trials.
A couple of years back, I started these conversations by talking about how our trade takes the needed triple-balance of the Made, the Making, and the Makers, and unbalances it by focusing so heavily on only the Made.
This is monovalence. Not every value we have can be easily coerced into something we can measure about the Made. And, sadly, it is all too easy in monovalent contexts to reason one’s way into terrible decision-making.
In order to harvest the value of change effectively, it is necessary that we hold that value to be of multiple kinds. We have to resist the pressure to abstract, reduce, and coerce these values into one ring to rule them all.
If we don’t, especially in complex systems, we will embrace disastrous policy, because it over-favors the narrow singular target while ignoring the broad range of value we actually need to get to that target.
I’m going to wrap this up, but for homework this time, reach back as far as your memory of the trade goes, to any policy or practice you’ve seen that you thought was outrageously net-negative.
Look back, or maybe look around, but either way, look for this: was that policy deemed legitimate because it was argued from monovalence? I think you will find quite a few of them that were.