Future-Proofing Considered Harmful

Written by stevekonves | Published 2018/01/04

Agility beats robustness when it comes to planning for the future

“Future-proof” software is often “present-day-proof” yet still vulnerable to future changes. What do I mean by this? Consider the following:

The General Problem: https://xkcd.com/974/

Let’s break down the comic and then take a look at how spot-on it is when applied to software.

How to not pass salt

The process of passing salt is simple. We all know what salt is. We all know where it needs to go. The act of “passing” is well-defined. There is zero ambiguity. But anything worth engineering is worth over-engineering, so how can we over-complicate the dead-simple process of passing salt?

The answer is obvious: attempt to predict the future and then delay everything until we're able to perform all predicted actions. We're clever, so we note that salt and pepper come in shakers, but sugar (though similarly granulated) does not. So let's devise a system that passes any type of granule in any type of container. Boom! That's the perfect future-proof system. But just as we finish our premature celebration, two things inevitably go wrong.

First, we realize that we guessed poorly. After a few predictable requests for salt and pepper, we get a request for a packet of Splenda. Those packets of granules are all, in turn, nested in one single dish. We didn't account for an n-deep nesting of containers, so we need to make a few tweaks just to pass the Splenda. Of course, those changes are much more complicated than simply sliding over a tray of artificial sweeteners. And then comes the ketchup. A non-Newtonian fluid in an upside-down squeeze bottle? We can't do that. In fact, the requester probably doesn't even want that. What they REALLY want is more salt, right? OK, fine, we'll get the ketchup, but it will take at least another two months and cost a quarter million dollars.

And that is the second problem: guessing imperfectly complicates, and therefore drives up the cost of, EVERYTHING. At the beginning, we delay the original request for salt so that we can include the ability to pass pepper and sugar. The salt finally arrives, but only long after the food is cold. Then, when a request is made for something that we didn't predict (Splenda or ketchup), it is ridiculously hard to get those to the user. Oh, and we never even needed the sugar.

The simple truth is that when asked for salt, we could have passed just the salt in seconds. Then, when asked for a packet of Splenda, we could have figured out how to pass either salt or Splenda in another few seconds. Sure, that may have introduced a bit of rework on the salt-passing procedure, but it's still faster than making the user wait 20 minutes and still end up with no salt. We also saved a ton of coin by not having to bring in those pretentious squeeze-bottle consultants.
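To put the allegory in code, here is a minimal sketch (in TypeScript, with hypothetical names; the comic itself involves no code) of what the two approaches might look like:

```typescript
// The "future-proof" approach: a speculative abstraction built to pass
// any type of granule in any type of container, before anyone asked.
interface Granule {
  name: string;
}

interface Container<T extends Granule> {
  contents: T;
  nested?: Container<T>; // we guessed at some nesting... but not n-deep packets
}

function passGranule<T extends Granule>(
  container: Container<T>,
  destination: string
): void {
  console.log(`Passing ${container.contents.name} to ${destination}`);
}

// The just-pass-the-salt approach: do exactly what was asked, in seconds.
function passSalt(destination: string): void {
  console.log(`Passing salt to ${destination}`);
}

// When Splenda is requested later, a few seconds of rework covers it:
function passCondiment(name: "salt" | "Splenda", destination: string): void {
  console.log(`Passing ${name} to ${destination}`);
}
```

The second version carries no speculative machinery, so each new request costs seconds of rework instead of months of squeeze-bottle consulting.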

How to not deliver software

Obviously, such a tale is an allegory of the software development process. Too often we let our aversion to rework lead us into the trap of excessive original work. Time and time again, I have seen projects that should have taken only weeks get delayed for months to account for features that were not yet designed or even needed.

Here is a super high-quality slide deck that illustrates this:

Top: Future-Proofing — Bottom: Move Fast and Rework

First, take a look at the top section. The top bar is a bare-bones shopping cart microservice (without wishlists, recommendations, coupon codes, etc.) that could be built in two weeks. But by "future-proofing" it, you delay it by a month or two to account for the still-future potentiality of wishlists, recommendations, and coupon codes. At the end of two weeks you have nothing. When the first service does finally ship at the end of week eight, you still don't have the wishlists, recommendations, or coupon codes for which you delayed the whole first service. And not only that, but when you finally implement those features, you find that their final design is not what you expected, which causes each of them to take an additional two months. Now, at the end of a year, you finally have a late, over-budget, kludgy project that is put into "maintenance mode" simply because no one ever wants to touch it again.

Here is where basic math takes over. If you could build the basic shopping cart service in two weeks, as long as you don't account for anything else, then it stands to reason that it could be rebuilt from scratch in its entirety in the same amount of time. Let's assume that adding each additional feature takes two weeks and, in the worst case, also requires a full rewrite of all previous features. That sounds like a lot of rework, but look at how the timeline shakes out in the bottom section.

(Move Fast and Rework)

By week two, we have shipped the first service and are making money. By week six, we have finished the coupon code service and have entirely rewritten the shopping cart service as well (red bar). Now we are making even more money. By week 12, we have rewritten everything again and have recommendations done. At this point, we have enough user data to realize that users don't need wishlists; they would be content to just share products they want on social media. Adding social sharing is a two-week project that also requires a full rewrite of the shopping cart. Everything is done in 16 weeks.

In this second scenario, we disregarded all future features, and doing so caused HALF of the effort to be rework! Please don't miss this! On the surface, those metrics seem awful! Shipping fast causes rework. It's normal for loss-aversion instincts to kick in and try to mitigate rework by future-proofing. But I hope that you can see how doing so is very likely to inflate cost even more than rework does.
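As a sanity check on that "HALF" claim, here is a quick tally (a sketch using the week counts from the walkthrough above, not real project data):

```typescript
// Week counts from the bottom timeline: each feature is two weeks of new
// work, plus the rewrites called out at each milestone.
const milestones = [
  { feature: "shopping cart",   newWork: 2, rework: 0 }, // ships at week 2
  { feature: "coupon codes",    newWork: 2, rework: 2 }, // + rewrite cart; done week 6
  { feature: "recommendations", newWork: 2, rework: 4 }, // + rewrite cart and coupons; week 12
  { feature: "social sharing",  newWork: 2, rework: 2 }, // + rewrite cart; week 16
];

const totalNew = milestones.reduce((sum, m) => sum + m.newWork, 0);   // 8 weeks
const totalRework = milestones.reduce((sum, m) => sum + m.rework, 0); // 8 weeks
const total = totalNew + totalRework;                                 // 16 weeks

console.log(`${total} weeks total, ${(100 * totalRework) / total}% rework`);
// Prints "16 weeks total, 50% rework" -- versus roughly a year, with nothing
// shipped until week eight, in the future-proofed timeline.
```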

When to plan, when to ship

OK. Now that I've said all that, there are times when you DO need to code with future features in mind. Here is the rule I would suggest:

The best way to future-proof is to “ignore” the future completely and focus solely on known features that are well-defined and ready to develop.

This means that you will eventually end up with a known feature that is ready to develop, but you will have to postpone it to get other, more critical work out the door. It is perfectly reasonable to build with that type of future feature in mind. It's known. It's well-defined. It's only "future" because you don't have the bandwidth to work on it right now.

This is in contrast to that nebulous, hand-wavy feature that no one can describe in detail, but everyone will "know it when they see it." Planning around such an ill-defined idea is a great way to sabotage your present-day development to optimize for something far off in the hazy future.
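To make the distinction concrete, suppose coupon codes are fully specced but postponed for bandwidth, while some "personalization" idea is still hand-wavy. A minimal sketch (hypothetical names) of building with only the known future in mind:

```typescript
// Coupon codes are well-defined and merely postponed, so the total
// calculation accepts an optional discount today:
interface CartItem {
  sku: string;
  priceCents: number;
  quantity: number;
}

function cartTotalCents(items: CartItem[], discountCents = 0): number {
  const subtotal = items.reduce((sum, i) => sum + i.priceCents * i.quantity, 0);
  return Math.max(0, subtotal - discountCents);
}

// By contrast, there is no PersonalizationEngine interface and no generic
// plugin system for the "know it when we see it" ideas; those stay out of
// the code until someone can actually define them.
```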

Finally, invest time in keeping your SDLC (software development life cycle) fast and lean so that you can ship now without having to fear rework later. I plan on writing more on this in the future (pun somewhat intended), but the main takeaway is this:

The faster your process, the smaller the negative impact of introducing issues (such as bugs or rework), because a fast process resolves such issues quickly.

Do you have a story about how future-proofing ended up biting you in the end? Please feel free to leave a comment. I'd love to hear about it!
