Go Cloud and Cross Platform Lessons Learned

Written by terrycrowley | Published 2018/07/25
Tech Story Tags: cloud-computing | software-engineering | golang | go-cloud | cross-platform

Google’s announcement of Go Cloud, a set of libraries for building cross (cloud) platform applications, is remarkable for how it recapitulates all the arguments and lessons of cross-platform client development.

I should note that having been part of a large company, I saw how often the news cycle frames things as “Microsoft decided this” or “Microsoft thinks that” when the announcement actually reflects the thinking and work of a single, often relatively small, group within the larger whole. It is often not part of some grand company strategy but simply a local project and initiative. As a small set of cross-platform libraries for Go applications, Go Cloud certainly smells like such a project.

One does not have to be overly cynical to recognize the benefit to Google, running a distant third in cloud platforms, of having application writers build applications that can more easily switch between cloud back ends. This reflects the familiar asymmetry of platform competitions. It is in the leader’s best interest to create more differentiated APIs and capabilities that tie applications to its platform. Conversely, it is in the follower’s best interest to make it easy to transfer those applications to its platform, even as it balances its own differentiation strategy. Differentiation along some dimension is obviously necessary for a real business strategy, but neutralizing a competitor’s differentiation can be an important part of that strategy.

How does the application writer respond to all this?

A well-written application will often define a thin wrapper around its use of an external API. This helps isolate assumptions about the API and prevents them from permeating the code base. It also helps clarify which parts of a potentially very extensive API are actually in use within the code base. This is just good engineering practice and only secondarily might help move between cloud back ends. It also makes it easier to respond to ongoing changes in the external API, or to local changes in how the application uses that API that are driven purely by evolving application requirements.
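
As a concrete sketch (assuming the AWS SDK for Go; the ObjectStore interface, the NewS3Store constructor, and the package layout are hypothetical names chosen for illustration), such a wrapper can be a single small file that is the only place in the code base that imports the SDK:

```go
// objectstore.go: the only file in the code base that imports the S3 SDK.
package storage

import (
	"bytes"
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// ObjectStore is the application's view of blob storage: exactly the two
// operations this code base actually uses, nothing more.
type ObjectStore interface {
	Get(key string) ([]byte, error)
	Put(key string, data []byte) error
}

// s3Store adapts the AWS S3 SDK to ObjectStore.
type s3Store struct {
	bucket string
	client *s3.S3
}

// NewS3Store builds the wrapper; the rest of the application never sees
// the SDK types.
func NewS3Store(bucket string) ObjectStore {
	return &s3Store{
		bucket: bucket,
		client: s3.New(session.Must(session.NewSession())),
	}
}

func (s *s3Store) Get(key string) ([]byte, error) {
	out, err := s.client.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, err
	}
	defer out.Body.Close()
	return io.ReadAll(out.Body)
}

func (s *s3Store) Put(key string, data []byte) error {
	_, err := s.client.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(data),
	})
	return err
}
```

The point is less the particular interface than the discipline: the wrapper documents exactly which operations the application depends on, and that is where any future portability work would start.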

There are many problems with using a third-party layer for this purpose. A cross-platform layer generally needs extensive coverage of the external API, so you lose that clarity about which parts of the API your application actually uses. Worse, the cross-platform layer is either a lowest-common-denominator subset of the third-party APIs it is trying to cover or some kind of superset over all the backend APIs.

The tendency is for such an API to start as a subset that extracts the common elements from the APIs it covers and then grow (driven directly by customer demand) to become more and more complex. If the APIs it covers are semantically consistent, the layer can stay simple. But that is exactly the situation in which it is also simple for the application writer to do it themselves!
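
Staying with the hypothetical ObjectStore interface sketched above, and assuming the Google Cloud Storage Go client (cloud.google.com/go/storage), the “do it yourself” version of that common-denominator layer is just a second, equally thin adapter; context plumbing is elided with context.Background() for brevity:

```go
// gcsstore.go: a second adapter for the same hypothetical ObjectStore
// interface. When the backends are this semantically consistent, the
// "portability layer" is a few dozen lines the application can own itself.
package storage

import (
	"context"
	"io"

	gcs "cloud.google.com/go/storage"
)

type gcsStore struct {
	bucket string
	client *gcs.Client
}

func NewGCSStore(bucket string) (ObjectStore, error) {
	client, err := gcs.NewClient(context.Background())
	if err != nil {
		return nil, err
	}
	return &gcsStore{bucket: bucket, client: client}, nil
}

func (g *gcsStore) Get(key string) ([]byte, error) {
	r, err := g.client.Bucket(g.bucket).Object(key).NewReader(context.Background())
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func (g *gcsStore) Put(key string, data []byte) error {
	w := g.client.Bucket(g.bucket).Object(key).NewWriter(context.Background())
	if _, err := w.Write(data); err != nil {
		w.Close()
		return err
	}
	return w.Close()
}
```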

If the layer needs to get complex and “functional”, this strategy starts failing in multiple ways. It obscures mismatches between the application design and the underlying APIs. A classic example from the client world was trying to hide an application’s assumption of fast local storage behind much higher-latency, failure-prone network storage. The layer cannot take advantage of an end-to-end understanding of the specific application’s requirements to simplify its design; it has to provide a more generic component that minimizes assumptions about the application. In fact, it is the specifics of application requirements that provide some of the most effective opportunities for overall system simplification, and this strategy throws those opportunities away.
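
A hypothetical, self-contained Go sketch of that trap: the two implementations below satisfy the same one-method interface, but one is a fast local disk read and the other is a network round trip that can stall or fail, and nothing in the type signature warns the calling code which one it is holding:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// Fetcher is the kind of innocent-looking abstraction that hides the
// difference between local and remote storage.
type Fetcher interface {
	Fetch(name string) ([]byte, error)
}

// localFetcher reads from the local disk: low latency, rarely fails.
type localFetcher struct{ dir string }

func (l localFetcher) Fetch(name string) ([]byte, error) {
	return os.ReadFile(l.dir + "/" + name)
}

// remoteFetcher performs an HTTP round trip: orders of magnitude slower,
// and it can time out, be throttled, or fail outright. The interface gives
// the caller no hint that it should batch, cache, or retry.
type remoteFetcher struct{ baseURL string }

func (r remoteFetcher) Fetch(name string) ([]byte, error) {
	resp, err := http.Get(r.baseURL + "/" + name)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("fetch %s: %s", name, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// loadAll is written with the local implementation in mind (a tight loop
// of small fetches); handed the remote one, it behaves very differently.
func loadAll(f Fetcher, names []string) (int, error) {
	total := 0
	for _, n := range names {
		b, err := f.Fetch(n)
		if err != nil {
			return total, err
		}
		total += len(b)
	}
	return total, nil
}

func main() {
	n, err := loadAll(localFetcher{dir: "."}, []string{"go.mod"})
	fmt.Println(n, err)
}
```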

As the layer gets more functional, it also becomes a regulator of the speed at which underlying platform innovations can be adopted by the application. Again, you are caught between two opposing forces. Either the layer is minimal and introduces no overhead or latency, in which case the application writer could very well do it themselves; or the layer is highly functional, in which case it introduces significant latency and overhead in adopting platform innovations and adds yet another external dependency. In either case you end up losing.

The other pattern to watch out for is that you start using the third-party layer because it gets you up and running quickly, but then you start working around it, either to access platform innovations more quickly or to optimize your own usage by leveraging your application’s specific requirements. Over time you have the worst of both worlds: an eclectic mix of direct bindings to a specific back-end service combined with a dependency on an intermediate layer that obscures your usage pattern. I have seen many cases where you look at an evolved application and find layers still carefully maintained that now serve no functional purpose beyond obscuring the overall application design.

These issues and problems play out at scale: in the size of the application, the size of the team, and the time frame of the application lifecycle. It can be easy to be fooled by quick progress on an early prototype. Anyone who thinks cloud services are different because their requirements are simpler than those of client applications hasn’t recently spent an afternoon browsing the mountains of documentation describing the AWS or Azure API sets.

