My $0.02 on Is Worse Better?

Written by terrycrowley | Published 2019/03/07
Tech Story Tags: programming | worse-is-better | software-development | software-thoughts | software-essays


There is a famous long-running discussion in software engineering that goes under the title “Worse is Better”. I’ve never gotten my two cents in, so I thought I’d talk about it a bit here. This is also an opportunity to apply the perspective I gained from my experience with software development at Microsoft.

This discussion was first framed by Richard Gabriel. He characterized the two different approaches as the “MIT” vs. the “New Jersey” approach. These labels came from the approach taken by the Common Lisp and Scheme groups out of MIT and the contrasting Unix approach coming out of Bell Labs in New Jersey. I found the discussion especially interesting because I did four internships at Bell Labs while getting my BS and MS at MIT, so I was exposed to both schools of thought. Of course, Unix itself was in some sense a counter-response to the “MIT approach” that Ken Thompson and Dennis Ritchie were exposed to while working with MIT on the Multics project (my dad led the Bell Labs team working with MIT on Multics and subsequently managed the Bell Labs research team that developed Unix, so I had additional roots across both camps).

For the MIT approach, the essence is “the right thing”. A designer must get the following characteristics right. To paraphrase:

  • Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
  • Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
  • Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
  • Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

The New Jersey or “worse is better” approach is slightly different. To paraphrase:

  • Simplicity: the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
  • Correctness: the design must be correct in all observable aspects. It is slightly better to be simple than correct.
  • Consistency: the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
  • Completeness: the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Gabriel viewed his description of the New Jersey approach as a caricature; it was clearly the worse design strategy. So why was it better? He considered it “better” because it was clearly winning: the early rise of Lisp systems out of the MIT/Stanford community did not end well. As a plethora of minicomputers and workstations sprang up, Unix was easy to get up and running on these machines; it was performant and simple and survived well in the rapidly evolving hardware landscape. The Lisp machines were highly tuned hardware/software systems that could not keep pace with that rapid change.

One of the key divergences between the approaches was clearly the focus on implementation simplicity. In the New Jersey approach, this ends up trumping everything — even correctness! There are a number of consequences of this.

One consequence is a tendency away from building “the perfect jewel” and towards getting something built, shipped, and used. Clearly this is consistent with the various software methodologies whose guidance focuses on speed to real usage and on driving a feedback process (“ship early and often” or “minimum viable product”).

The approach also enables the developer to be “locally pragmatic” — to be willing to make tradeoffs in complexity rather than being driven to over-invest in characteristics of the system that end up not providing value for cost. Of course there is room for failure in misinterpreting or misapplying “must not be overly inconsistent” — it is not an argument for capricious design. And “locally pragmatic” and “lazy” lie on a continuum.

There is another set of issues that I would group together around leaky abstractions, accidental complexity, and overall approachability. In a system with significant internal complexity, there is an inevitable risk of that complexity leaking out in unintended ways. A very common leakage is performance inconsistency, where certain ways of using an API are fast and predictable while others have unpredictable performance consequences (in the worst cases turning a predictable local computation into one that involves unbounded remote communications; yes, there are very heavily used Microsoft APIs that do this).
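
To make that failure mode concrete, here is a minimal TypeScript sketch; the Directory class and its methods are my own invention for illustration, not any specific Microsoft API. The two methods look equally innocent at the call site, but one is a local cache read and the other hides a remote round trip per step.

```typescript
// Hypothetical sketch: two lookups that appear symmetric at the call site,
// but one is a cheap local read and the other hides remote round trips.

class Directory {
  private cache = new Map<string, string>([["alice", "Alice Liddell"]]);

  // Fast and predictable: answered from local memory.
  displayName(user: string): string {
    return this.cache.get(user) ?? user;
  }

  // Looks just as innocent, but awaits a remote service once per level of
  // the chain; in the worst case the total latency is unbounded.
  async managerChain(user: string): Promise<string[]> {
    const chain: string[] = [];
    let current: string | null = user;
    while (current !== null) {
      const record = await fetchRecord(current); // network hop per iteration
      chain.push(record.name);
      current = record.manager;
    }
    return chain;
  }
}

// Stand-in for the remote call; a real implementation would do network I/O.
async function fetchRecord(
  user: string
): Promise<{ name: string; manager: string | null }> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulated latency
  return { name: user, manager: null };
}

new Directory().managerChain("alice").then(console.log);
```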

One of the problems I had with the design of Windows Presentation Foundation is that it provided great power and flexibility in how visual transformations and animations could be applied when building user interfaces, but some effects were efficient in terms of GPU/CPU communication and memory bandwidth while others were extremely expensive. Without knowing details of the implementation, it was very easy to misuse the system and produce results that did not scale well (across data set sizes or across devices with different performance characteristics). In this case, the system was providing “artificial consistency”: in form the API appeared consistent, but in practice it was not.
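
As a rough illustration of that artificial consistency (a hand-rolled sketch with invented names, not the actual WPF object model), every effect below is applied through the same uniform interface, so nothing at the call site hints that the costs differ by orders of magnitude.

```typescript
// Hypothetical sketch of artificial consistency: a uniform Effect interface
// that hides order-of-magnitude cost differences between implementations.

interface Effect {
  name: string;
  apply(frame: Uint8Array): void;
}

const translate: Effect = {
  name: "translate",
  // Cheap: on real hardware this would be a matrix the GPU applies for free.
  apply: () => {},
};

const blur: Effect = {
  name: "blur",
  // Expensive: touches every pixel (and on a GPU could force costly
  // readbacks), yet it is invoked exactly like translate.
  apply: (frame) => {
    for (let i = 0; i < frame.length; i++) frame[i] = frame[i] >> 1;
  },
};

function render(frame: Uint8Array, effects: Effect[]): void {
  for (const effect of effects) {
    effect.apply(frame); // identical call sites, wildly different costs
  }
}

render(new Uint8Array(1920 * 1080), [translate, blur]);
```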

The original source of the discussion gives another example of performance inconsistency. Highly tuned systems often have challenges in responding to a rapidly changing technology landscape. It is harder to really leverage rapidly improving hardware when it changes important assumptions about how different parts of the system should interact. In practice, this failure to adapt is both a business and a technology issue: if the cost of adapting to the changing hardware environment is high, the business case must be very strong to warrant the investment, so failures to adapt are usually consequences of a blend of technical and business cost/benefit factors.

Accidental complexity occurs as characteristics of the complex implementation leak out through the interface; client applications and the overall end-to-end system then become dependent on these characteristics, which makes the system hard to evolve. It seems intuitively obvious that the more complex the internal implementation is, the harder it is to prevent that complexity from leaking out (and those intuitions are borne out by many examples).
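
A tiny sketch of how that dependence forms (hypothetical class and names): the documented contract promises nothing about ordering, but the implementation happens to preserve insertion order, and callers quietly come to rely on it.

```typescript
// Hypothetical sketch of accidental complexity leaking out: the contract says
// nothing about order, but the implementation happens to preserve insertion
// order, so callers start depending on it.

class TagSet {
  private tags = new Set<string>(); // JS Sets iterate in insertion order

  add(tag: string): void {
    this.tags.add(tag);
  }

  // Documented contract: order unspecified. Observed behavior: insertion order.
  all(): string[] {
    return [...this.tags];
  }
}

const tags = new TagSet();
tags.add("urgent");
tags.add("billing");

// A client that depends on the leaked behavior. Replacing the Set with a
// hash-ordered structure would be a "compatible" change that breaks it.
const [first] = tags.all();
console.log(first === "urgent"); // true today, guaranteed by nothing
```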

Approachability is a characteristic that might not seem so obvious. Shouldn’t a system/API that strives for consistency and completeness be more approachable? My experience using Unix for many years is that it doesn’t quite play out that way. I was reading a comparison of ASP.NET and PHP at one point that resonated strongly with me (sorry, can’t track down the reference). It talked about reading a book like the “Professional ASP.NET” manual and coming away with the feeling “Wow, those guys are way smarter than me”. In contrast, you read “Programming PHP” and every step and new capability that is revealed seems reasonable and understandable; you understand how you might have built it yourself, but you’re thankful that they’ve included it in the system. Obviously that’s not a holistic comparison of these two very different systems, but it is relevant to how internal complexity plays out in the important characteristic of approachability.

NodeJS is another example where approachability ended up having a large effect on ultimate success.

The focus on completeness can be particularly pernicious, especially as it introduces tension with simplicity. Both completeness and consistency increase the size N of the feature set, which grows internal complexity (features interact, so complexity grows much faster than linearly with the size of the feature set). This internal complexity makes it harder for the component or system to adapt over time to external changes. The goal of consistency can also motivate significant internal mechanism, especially when a component layers over other elements that have their own inconsistencies. In fact, for the component builders, this papering over of underlying inconsistencies itself becomes a feature they are providing. Unfortunately, that consistency almost always comes with performance expense and with tradeoff decisions that have now been taken out of the hands of the component’s users. End-to-end arguments generally counsel caution when going down this path.
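
Here is a small sketch of that dynamic (invented names and backends, not any particular component): a layer that papers over two inconsistent backends by unconditionally normalizing their results, so every caller pays the normalization cost whether or not they care.

```typescript
// Hypothetical sketch: a consistency layer over two inconsistent backends.
// The normalization is itself the feature, but every caller pays for it.

interface Backend {
  search(term: string): string[];
}

const fastButMessy: Backend = {
  // Fast, but may return duplicates in arbitrary order.
  search: (term) => [`b:${term}`, `a:${term}`, `a:${term}`],
};

const slowButClean: Backend = {
  search: (term) => [`a:${term}`, `b:${term}`],
};

class ConsistentSearch {
  constructor(private backends: Backend[]) {}

  search(term: string): string[] {
    const merged = this.backends.flatMap((b) => b.search(term));
    // Uniform behavior for all callers, bought with an unconditional
    // de-duplicate-and-sort that a latency-sensitive caller cannot skip.
    return [...new Set(merged)].sort();
  }
}

console.log(new ConsistentSearch([fastButMessy, slowButClean]).search("x"));
```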

