The Microservices Misconception

Written by stephenebly | Published 2017/12/06


These days one can scarcely attend a conference or browse the front page of HackerNews without running into microservices. Proponents tout its preternatural ability to enforce modularity, scale systems, and scale organizations and processes. Of course, microservices is just a rebranding of the older Service-Oriented Architecture paradigm, and no one is quite sure what “micro” means in the first place, but let’s ignore that for now. I have seen the same fallacious arguments repeated again and again when it comes to microservices, so I finally decided I had to write a rebuttal.

Modularity

Let’s begin with the most egregious misconception surrounding services. Again and again I hear competent and well-respected programmers telling us that modularity is impossible in monoliths but simple in microservices. It seems over the past several years we as programmers have completely forgotten how software systems were built in the preceding decades. Services are not required, or even beneficial, for modularization.

First let me define modularity, because people may have their own ideas of what it means. Modularity to me means being able to:

  1. Separate parts of the codebase into conceptual units, called modules
  2. Implement these modules independently
  3. Refactor modules without breaking the code that relies on them

Note that I specifically do *not* mention having multiple engineers work on different modules in parallel. This is a nice benefit, but it is not the goal. Even a single programmer is fallible, and can only keep so much code in their head at a time. Modularity ensures that we do not break our own contracts, at least not without being aware of it, and that when we refactor that code a year later, whether to add new functionality or simply to do some house-cleaning, we do not break those long-forgotten contracts.

Now with that definition in mind, let me make a claim which is undoubtedly going to anger many people: modularity cannot be achieved without (static) types. It would take a (rather long) article in and of itself to do justice to this claim, but hopefully a few points will suffice: in a dynamic language, I cannot be sure that I implemented my interface correctly, and when refactoring, I cannot be sure that I did not accidentally change the interface. Before I go on, let me say that if you do not accept this base premise, then inevitably the rest of my argument is not going to convince you.

Statically typed languages are the vehicle through which modularity is achieved, but of course types alone are not enough. Facilities such as functions, classes, interfaces, and, yes, modules enable us to write decoupled code with well-defined interfaces that we can plug together, yet implement separately.

As an exercise, let’s say we want to add shopping-cart functionality to our application, an example I’ve seen used before in discussions of microservices. In OCaml, we would write a module with the following signature:

    module type SHOPPING_CART = sig
      type t

      val new_empty : t
      val add_item : t * item -> t
      val remove_item : t * item -> t
      val total_price : t -> money
      val quantity : t * item -> int
    end

If you’re not familiar with an ML language, this is sort of like creating a shopping cart interface in Java. You can use that mental model instead if you’re more comfortable with objects. The point is, this is a perfectly good shopping-cart service interface. We didn’t need to create a new code repository, spin up new machines, add more CI plumbing, create an HTTP interface, etc. We can just use it in our codebase to deal with shopping carts. Not only that, but it is completely type-safe.
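
To make that concrete, here is a minimal sketch of one possible implementation behind that signature. This is illustrative only: the item and money types and the price_of lookup are stand-ins I’ve invented, which a real codebase would define elsewhere (and before the signature that mentions them).

    (* Stubs for illustration; a real codebase would define these
       elsewhere, ahead of the SHOPPING_CART signature above. *)
    type item = string
    type money = int  (* price in cents *)

    module Cart : SHOPPING_CART = struct
      (* The representation: an association list of item -> count.
         Callers never see this, because [t] is abstract in the signature. *)
      type t = (item * int) list

      (* Stub price lookup; a real system would consult a catalog. *)
      let price_of (_ : item) : money = 100

      let new_empty = []

      let add_item (cart, item) =
        let n = try List.assoc item cart with Not_found -> 0 in
        (item, n + 1) :: List.remove_assoc item cart

      let remove_item (cart, item) = List.remove_assoc item cart

      let total_price cart =
        List.fold_left (fun acc (item, n) -> acc + (n * price_of item)) 0 cart

      let quantity (cart, item) =
        try List.assoc item cart with Not_found -> 0
    end

Because Cart.t is abstract, we are free to later swap the association list for a map, or even a database-backed store, and the compiler guarantees no caller depended on the old representation.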

A JSON API is not nearly as safe; it provides essentially no guarantees. If the structure of the JSON you’re sending or receiving changes, you are SOL. Even if one makes use of JSON Schema, conformance is still only checked at runtime. I don’t know about you, but I’d much rather catch my errors at compile time than at 1am in production.
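
To make the contrast concrete, here is a small sketch of the runtime failure mode, assuming the yojson library (the payload and field names are invented for illustration):

    (* A minimal sketch using yojson. If the producer renames "quantity"
       to "qty", this code still compiles without complaint and fails at
       runtime with a [Type_error] instead. *)
    open Yojson.Basic.Util

    let quantity_from_json (payload : string) : int =
      let json = Yojson.Basic.from_string payload in
      json |> member "quantity" |> to_int

    let () =
      (* The field we expect is absent: this raises in production rather
         than being caught by the compiler. *)
      ignore (quantity_from_json {|{"item": "apple", "qty": 2}|})

Compare that with the typed SHOPPING_CART module above: rename quantity there, and every stale call site becomes a compile error.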

It would be disingenuous not to mention interface definition languages like Thrift or Protobuf. These are fantastic tools that certainly go a long way toward overcoming the shortcomings of REST. If you do write microservices, I recommend them. But they are not a panacea; you still must handle network failures, you have to add a code-generation step to your build pipeline, and their type systems are generally not as strong as I would like. Moreover, there is no reason to use them if you don’t have to. Tools like these should be used if you already have to create separate services for some other reason; they should not be used as a justification for creating services.

Let me state this plainly: physically separating machines is entirely irrelevant to modularity. The network is not some magical barrier that ensures your engineers write clean code. It is a troublesome monster that likes to flip bits and drop packets. I would much rather my code travel over my computer’s local bus than the web. I do not want to deal with the network unless I absolutely have to.

Programmers have always strived to write modular code. They did so before networks even existed, and they will continue long after the microservice hype train finally runs out of steam.

Systems Scaling

As your user base grows, your systems must scale to meet the increasing load. There are many ways to achieve this, at every layer of the stack.

For some reason, proponents seem to think the only way to horizontally scale your system is to break it up into pieces. This is entirely untrue. We can scale a single application horizontally by adding more servers and through replication. Any provider (AWS, Heroku, GCP) makes this simple to do, even if you only have a single application. The number of services is completely orthogonal to the number of physical machines you’re running. This doesn’t just apply to stateless servers either: we can also scale our database tier by adding read-only replicas, and scale our Elasticsearch cluster by adding more nodes and shards.

I think vertical scaling is also often overlooked. This solution is financially costly, but requires no changes to your application code or infrastructure. When considering the cost, we must also take into account that it may be outweighed by the programmer time wasted dealing with the errors and overhead brought on by microservices.

Another overlooked approach to scaling is to actually optimize code! Now granted, this can be quite a lot of work, but using a faster language, a more efficient data structure, or a smarter SQL query can have a huge impact on both latency and throughput.
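
As a toy illustration of the data-structure point, consider deduplicating a large list. The naive version below rescans its accumulator for every element; switching to the standard library’s Set module drops the cost from quadratic to O(n log n) without touching anything else in the system:

    (* Naive: List.mem walks the accumulator for each element, O(n^2). *)
    let dedup_slow (items : string list) : string list =
      List.fold_left
        (fun acc x -> if List.mem x acc then acc else x :: acc)
        [] items

    (* Same elements in O(n log n) via a balanced-tree set
       (note: the result comes back sorted rather than in input order). *)
    module StringSet = Set.Make (String)

    let dedup_fast (items : string list) : string list =
      StringSet.elements (StringSet.of_list items)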

Services should be the very last step in optimizing a pipeline, after every other avenue has been explored.

Organizational Scaling

I will concede this is one area where independent services might assist. It can, in some situations, be beneficial to cut the code base into vertical slices that align with products or features. But most organizations simply do not have enough engineers for this to be necessary. If you consistently have engineers working across multiple services, then it’s the wrong approach.

Costs

It’s easy to get swept up in the service hype tornado, but let’s not forget the concomitant costs. They are numerous, so I will simply list them in broad strokes:

  • Multiple applications whose deploys you must coordinate and whose versions you must keep in sync
  • Local development becomes much more difficult as you have to work on several repositories at the same time, and keep changes in sync between them. Tools like Vagrant and Docker Compose help with this, but it is still more difficult than using a single repo in a single language.
  • You must handle failures over the network, as well as latency issues; a sketch of the retry-and-timeout plumbing this entails follows this list.
  • You have to add service discovery so your applications can find each other. How easy or difficult this is depends upon the maturity of your ops team and tooling; suffice it to say, if you don’t have an ops team, you’re not ready for microservices.
  • In the same vein as the above, you will probably have to set up an orchestration system to distribute your apps across a cluster.
  • You need multiple databases, which means there is no consistent view of your application state and no easy way to create transactions that span services.
  • You need a way to correlate transactions across your services for logging purposes.
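
To give a flavor of the network-failure item above, here is a hedged sketch, assuming the lwt and cohttp-lwt-unix libraries, of the retry-and-timeout plumbing that a formerly in-process call acquires the moment it moves behind HTTP (the cart service itself is hypothetical):

    open Lwt.Infix

    (* Hypothetical call to a separate cart service. What used to be a
       plain function call is now a round trip that can fail or stall. *)
    let fetch_cart (uri : Uri.t) : string Lwt.t =
      Cohttp_lwt_unix.Client.get uri >>= fun (_resp, body) ->
      Cohttp_lwt.Body.to_string body

    (* Retry with exponential backoff. *)
    let rec with_retries ?(attempts = 3) ?(delay = 0.5) f =
      Lwt.catch f (fun exn ->
        if attempts <= 1 then Lwt.fail exn
        else
          Lwt_unix.sleep delay >>= fun () ->
          with_retries ~attempts:(attempts - 1) ~delay:(delay *. 2.0) f)

    (* Give up if the service does not answer within two seconds. *)
    let get_cart (uri : Uri.t) : string Lwt.t =
      Lwt.pick
        [ with_retries (fun () -> fetch_cart uri)
        ; (Lwt_unix.sleep 2.0 >>= fun () ->
           Lwt.fail_with "cart service timed out")
        ]

None of this code needs to exist in the monolith, where adding to a cart is an ordinary function call.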

Are all of these pain points manageable? Of course. But why incur the tremendous cost unless there is a correspondingly massive benefit?

Conclusion

I don’t think microservices will go away. To me they are reminiscent of object-oriented programming: for a time OOP languages were the flavor of the month, used for every task conceivable. Eventually, people realized OOP was not the correct architecture for every application, and there was considerable backlash (such are the whims of the masses). Classes and objects are still useful; they are just no longer the cornerstone of programming (as can be seen in modern multi-paradigm languages like Scala, Go, and Rust).

As engineers, we must always strive to write the best code we can, creating a stable product that keeps customers and other engineers happy. We should focus our energy on the fundamentals: writing clean, modular code, identifying bottlenecks and making reasonable optimizations, communicating clearly within and outside the engineering team, and understanding the technologies we already have, instead of throwing buzzwords at the technological wall and hoping something sticks.

For the vast majority of organizations, microservices is putting the cart before the unicorn.

