Dear smart geeks: stop worrying so much about whether you’re gonna get it wrong. You definitely are gonna get it wrong.
How do I approach this? Hmmmm. Okay, I see a couple of threads that need to be pulled together. This one’s gonna be clunky, I fear. Ahhh well. I’m definitely gonna get it wrong, too, I spoze. 🙂 The drive to be right is a powerful one. For some of us, perhaps, too powerful. But it does come naturally. Think about when you first started geeking. You were surrounded by all these damned puzzles. And you were confused, and there was syntax, and semantics, and the desire to make it work, and so on. So, you learned. You thought. You ran experiments. You debugged. You read. You talked & listened. And you got better, yeah?
Problems that seemed very complex became actually quite straightforward. You became a geek. And over time, your specialty, or specialties given enough time, became nearly transparent to you. The problems even started to get kinda boring, truth to tell. So you widened your scope. Bigger problems. Better puzzles. And it likely went on for a while. New challenges to the mind. New confusions, discussions, experiments, and then — new skills.
The problems got pretty big, and for a long time we just described them as big problems. And it felt like the problems were still solvable "from the start" if you could just think hard enough. And now we get an insight. No. They’re not. They’re not solvable if you could just think hard enough.
Now, having made that little point, we have to go backwards a little. It sounds like I’m saying the problems are too big. Like, oh, beyond a certain size, there’s never any hope. But I’m not saying that. What I’m saying is that we experienced the unsolvables at a certain size, but not directly because of that size.
Rather, what happened is the problems got big enough that they began to incorporate elements that prevent "getting it right the first time." In other words, the size exposed you to those elements. The elements were always there. Are always there. They don’t mystically appear at size X. Rather, you’ve just begun to incorporate them at whatever your size X happened to be.
Those elements form a conceptual category we still have a ton of words for, but no single handy label. Here are some words that gesture towards them: ecological, organic, complex (in the systems sense), chaotic (in the math sense).
A metaphor? You’ve been solving ever larger problems by being ever more comfortable with the way the balls move on the pool table. Pool balls move the same way every time (to an arbitrary epsilon). You hit it this way, it goes that way. Newtonian physics relies on this.
What if your pool balls don’t act reliably, stably, predictably, mechanically?
When that happens, and it happens everywhere in actual professional geekery, as opposed to "coding", well, the game changes quite a bit. These odd pool balls move farther or shorter than they should, move at angles that defy prediction, and are not simple Newtonian objects. Those balls are the elements of very different problems than the ones you’re used to.
What are these elements? For the moment, let’s call them agents. Why agents? Cuz they seem to have an agency, a motive force, all their own. You could think of them as people, if you’re old enough and mentally sound enough to understand that other people are subjects, not objects.
Think back on all those coding problems. They were hard, no doubt, and it was cool that you solved them. But they didn’t involve (much) agency. Here’s the thing about code. Code does the same fucking thing every fucking time.
Yes, yes, I’m sure I’m to be regaled with weird events that seem like exceptions. C’mon. Stop it. Von Neumann computers are deterministic. Give them the same inputs and you get the same outputs, every single time. (Aside: places in code where we model non-determinism are in fact fascinatingly difficult. Turns out, making ’puters stochastic is hard.)
Anyway, down to the final chase scene. Problems that incorporate agents are not solvable the way problems that don’t incorporate them are. And professional geekery problems lasting longer than a week or two incorporate agency all over the place.
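You can see the determinism point in miniature. Even a "random" number generator is a deterministic function of its inputs: to get different rolls out, you have to sneak a different input in, namely the seed. A tiny Python sketch (function name is just for illustration):

```python
import random

def roll_dice(seed, n=5):
    # A PRNG is a deterministic function of its seed:
    # same seed in, same "random" rolls out, every single time.
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

first = roll_dice(42)
second = roll_dice(42)
assert first == second  # identical inputs, identical outputs
```

Same program, same inputs, same outputs. The only way to make the machine look stochastic is to pipe in something from outside it.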
So? A couple of conclusions and I’ll let it go.
- First, stop beating yourself stupid with the “sit and think to be sure you get everything right” thing.
- Second, stop arguing about what will happen in a year. Everything that will happen to your system a year from now is *suffused* with agents.
- Third, ponder in your copious free time whether approaches to solving agency-laden problems ALSO work well for non-agency problems.
- Fourth, act then look, act then look, act then look. Because of the unpredictability of agency, you’ve little other choice.
- So? Don’t worry so much about whether you’re gonna be wrong. You’re definitely gonna be wrong.
The trick isn’t never being wrong. The trick is trying a thing, seeing what’s wrong, and moving to try something else, all quickly.