Helping Your Downstream Collaborators


Let’s talk about ways to make life easier for downstream developers, be they front-end or dependent back-end people.

(Folks, this is purposeful distraction. Respite is important, so enjoy some geekery, but geekery’s not going to get us out of this mess. Stay safe. Stay strong. Stay kind. Stay angry. Black lives matter.)

Whether you have a bunch of front-end folks driven from your monolith, or you live in service-mesh land with dozens of microservices, the people downstream from you have to use your output to make their output.

They are, in fact, a kind of customer.

We’ve talked at some length about the Made, the Making, and the Makers, and how the trade is normally over-obsessed with the Made, and under-emphasizes both Making and Makers.

In a multi-machine architecture, like the internet stack, we build various backend apps — think "endpoints" — and other makers use these, in turn, in their making. This includes front-end folks, other service teams, testers, and ops.

Because such apps are large, with many parts, the standard over-emphasis on Made has a curious aspect: we don’t think of our service as a "part", we think ever and always of the "whole" thing, a kind of "great final Made".

This style of thinking leads us to ignore those critical customers of ours that aren’t end-users: the downstream makers.

Here are some ideas, in no particular order, for ways we can make our small part of the system useful not just to the end-user, but to the other makers. After all, tho they don’t pay for the privilege, they are, per call, heavier users of our service than any others.

Idea 1: Version your returned payloads inside the payload, from the get-go, and bounce those versions frequently as you go.

Why? Because it’s the beginning of being able to maintain compatibility over time in parallel development. We can’t even start to do that if we can’t authoritatively identify changing payload shapes & data in our messaging.

Why "inline" in the payload, why not just put it on the envelope rather than the letter? Because 1) most transport systems strip the envelope while they work, and 2) different transport systems handle envelope versioning in different ways.

By giving visibility to change, we give our downstreamers numerous ways to use that info. It helps them code, when they have to branch around version, it helps them debug, and get this: when your code is the problem, it helps you know exactly what’s going on.
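A minimal Python sketch of the idea, assuming a hypothetical field name `payloadVersion` and invented payload shapes (any agreed-upon key and shapes would do). The producer stamps the version inside the letter; the consumer branches on it instead of guessing:

```python
import json

# Hypothetical field name; any key both sides agree on works.
VERSION_KEY = "payloadVersion"

def make_account_payload(account_id, balance):
    """Producer side: build a response payload that carries its own version."""
    return {
        VERSION_KEY: "2.1",
        "accountId": account_id,
        "balance": balance,
    }

def parse_account_payload(raw):
    """Consumer side: branch on the inline version, not on guesswork."""
    payload = json.loads(raw)
    version = payload.get(VERSION_KEY)
    if version == "2.1":
        return payload["accountId"], payload["balance"]
    if version == "2.0":
        # Imagined older shape that carried the balance in cents.
        return payload["accountId"], payload["balanceCents"] / 100
    raise ValueError(f"unknown payload version: {version}")

raw = json.dumps(make_account_payload("acct-42", 17.5))
print(parse_account_payload(raw))  # ('acct-42', 17.5)
```

Because the version rides inside the payload, it survives every transport hop, and both sides can log it when something goes sideways.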

Idea 2: Similarly, error your broken returned payloads inline in the payload.

Provide an error code and a human-readable message, right in the letter. Two special things to consider: don’t just return empty containers to mean "a problem", and don’t, again, use transport-level errors.

One routinely sees service dependents stuck with a 500 Internal Server Error trying to guess what went wrong. Or they take empty containers to mean "a problem", or to not mean "a problem", and head down the garden path.
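A sketch of the distinction, with invented field names and data: an empty container is a real answer, and an actual problem gets an explicit code and text inside the payload:

```python
def find_orders(customer_id, orders_db):
    """Return either a data payload or an inline error payload.
    An empty orders list is a real answer ("no orders"), never a
    disguised error."""
    if customer_id not in orders_db:
        return {
            "error": {"code": "CUSTOMER_NOT_FOUND",
                      "text": f"no customer with id {customer_id!r}"},
            "orders": None,
        }
    return {"error": None, "orders": orders_db[customer_id]}

db = {"c1": ["order-9"], "c2": []}
print(find_orders("c2", db))   # error is None; empty list really means "no orders"
print(find_orders("c3", db)["error"]["code"])  # CUSTOMER_NOT_FOUND
```

The downstreamer never has to guess: `error` is either `None` or it tells them exactly what happened, in their own code, without cracking open transport internals.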

Idea 3: Get good and fast at adding new endpoints, even when they provide very similar function to existing ones.

If your dependents want payloads in different formats, or they want the data drilled-down sometimes and not at others, do that for them.

If it takes a dependent three calls to do something you could easily do in one endpoint, do it. You can optimize the hell out of data coalescing or sorting or summing far more readily than the downstreamers can.

Evaluate your endpoints not based on some abstract inner Platonic essence of your data, but on whether or not they fit hand-in-glove with the needs of your dependents.

In particular, don’t force your dependents to make more calls to get data you already had to have to complete the first call. I see this often.
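A sketch of coalescing, with invented names and data: one endpoint that does what would otherwise take a dependent three calls, since you already touched all of this data anyway:

```python
def account_summary(account_id, accounts, orders):
    """One endpoint coalescing what would otherwise be three downstream
    calls: fetch the account, fetch its orders, sum the order totals."""
    acct = accounts[account_id]
    acct_orders = orders.get(account_id, [])
    return {
        "name": acct["name"],
        "orderCount": len(acct_orders),
        "orderTotal": sum(o["amount"] for o in acct_orders),
    }

accounts = {"a1": {"name": "Pat"}}
orders = {"a1": [{"amount": 10.0}, {"amount": 5.5}]}
print(account_summary("a1", accounts, orders))
```

The sorting, summing, and joining happen next to the data, where you can optimize them, instead of in the front-end over three round trips.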

Idea 4: Maintain a tested local runnable version of your service at head, just as you do the full one.

"Local runnables" are versions of your endpoint-suite that a downstream developer can fire up right inside their box as they work.

This is a big topic, too big to fit in one tip, but the gist of the idea is this: an ultra-light in memory dataset, no between-runs persistence, only the minimal set of objects you need to show off the variant payloads. No security, no load-testing capability, no swagger.

Why? Your downstreamers have to do a great deal of work — real work — that depends in no way on having real data and a real service bus, sign-in, intranet, or VPN. They can do easily 75% of their development against your silly facade, with much greater speed and control.
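To make the gist concrete, here is a minimal sketch of such a facade, with invented names and canned data: an in-memory stand-in with no persistence and no security, holding just enough objects to show the variant payload shapes:

```python
class LocalRunnableAccounts:
    """A throwaway in-memory stand-in for a real accounts service:
    no between-runs persistence, no security, just enough canned
    objects to show off the variant payload shapes."""

    def __init__(self):
        # Canned data covering the interesting variants: an active
        # account, a suspended one, and (implicitly) "not found".
        self._accounts = {
            "a1": {"id": "a1", "name": "Pat", "status": "active"},
            "a2": {"id": "a2", "name": "Sam", "status": "suspended"},
        }

    def get_account(self, account_id):
        acct = self._accounts.get(account_id)
        if acct is None:
            return {"error": {"code": "NOT_FOUND"}, "account": None}
        return {"error": None, "account": acct}

svc = LocalRunnableAccounts()
print(svc.get_account("a2")["account"]["status"])  # suspended
```

A front-ender can fire this up on their own box, hit it a thousand times a day, and never touch the real network at all.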

Do you know how many times one has to hit a page to validate its most basic aspects, layout, validation, button-enabling, and so on? If you think you type and mouse fast, watch a front-ender sometime, blowing through 19 pages to get to the part they want to see.

(Side point: restricting the number of instances so that thirty developers work against the same three at the same time has to be one of the great penny-wise pound-foolish moves in all of geekery.)

The local-runnable concept can lead us in lots of rather cool directions, btw. It’s advanced stuff, look up my old material about UDispatch on the site for some crazy and cool ideas on this.

Idea 5: Ship well-known connection profiles with your app.

If you do have three shared instances, for example, let your downstreamer use nicknames to generate their own configuration block without manual editing. (This one requires collaboration with your specific downstreamers.)

I constantly see downstreamers dead in the water because a well-known upstream has changed its URL and their downstream profile doesn’t know about it. It’s dead easy to do. Don’t use URLs or port numbers or complex command lines; use nicknames: ‘target dev3’.
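A sketch of the nickname idea, with entirely made-up profile names and URLs: the app ships the well-known profiles, and the downstreamer only ever types the nickname:

```python
# Hypothetical well-known profiles shipped with the app. Downstreamers
# say "target dev3" instead of hand-editing URLs and timeouts, and when
# an upstream URL changes, only this shipped table changes.
PROFILES = {
    "local": {"baseUrl": "http://localhost:8080", "timeoutSecs": 2},
    "dev3":  {"baseUrl": "https://dev3.example.internal", "timeoutSecs": 10},
    "stage": {"baseUrl": "https://stage.example.internal", "timeoutSecs": 10},
}

def target(nickname):
    """Resolve a nickname to a full connection profile, failing loudly
    with the list of known nicknames."""
    try:
        return PROFILES[nickname]
    except KeyError:
        raise SystemExit(f"unknown target {nickname!r}; "
                         f"known: {', '.join(sorted(PROFILES))}")

print(target("dev3")["baseUrl"])  # https://dev3.example.internal
```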

Idea 6: Even without a local-runnable, field a dummy version of your new endpoint an hour after you decide to offer it.

Think in terms of Cockburn’s walking skeleton.

Downstreamers, as we’ve said, spend enormous amounts of time working with data that’s entirely opaque to their code. They absolutely don’t care if the ID points to a real account, or the user name, or anything else. They care about how well-formed payloads are shaped.

Giving them the dummy endpoint lets them do a lot of basic work long before you’re ready to go live. You can even fill out the dummy with valid data as you go, item by item.
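A sketch of such a dummy endpoint, with invented field names and placeholder data: the payload shape is real, the content is canned, and it can exist an hour after the endpoint is decided on:

```python
def dummy_user_endpoint(user_id):
    """Walking-skeleton endpoint: the payload shape is the real,
    agreed-upon shape; the data is canned. Downstreamers can build
    and test against this long before the real implementation lands."""
    return {
        "payloadVersion": "0.1",
        "user": {
            "id": user_id,          # echoed back, never looked up
            "name": "Placeholder Person",
            "email": "placeholder@example.com",
        },
        "error": None,
    }

print(dummy_user_endpoint("u-123")["user"]["name"])  # Placeholder Person
```

As the real implementation arrives, you replace the canned values item by item, and the downstreamers’ code never notices.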

Idea 7: Make a contact point, a known name, email, slack handle, whatever, just for downstream developers.

Downstreamers are priority users, because each one fronts for many of your paying end-users (and some managers, too).

Pick one victim on your own team, maybe on a rotating basis, whose priority is to "answer the help line" for other makers. You don’t want this to be ad hoc or casual or based in backchannel networking or to get sorted into a mixed queue. This is a great force-multiplier.

So.

There are seven ways you can help downstream makers be more productive, at surprisingly little cost to you. The details of all of these vary by context, but the first step, always, is just to bring their needs into your thinking.

If you start by taking these basic steps, you’ll soon see that there are actually many more possible wins to be had here. Your downstream makers are uber-customers, and if you take good care of them, you can dramatically improve the effectiveness of your multi-app organization.

