How we keep dependencies fresh across 45+ microservices

Written by kensodev | Published 2017/11/07


In this post, I will give you the recipe we at Globality use to keep dependencies fresh across 45+ microservices.

Our Services

We have 45+ internal microservices; 99% of them are written in Python. We have an internal framework called microcosm, which allows for fast convention-over-configuration wiring of components and services.

You can check out all of the microcosm-related projects on GitHub.
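For flavor, wiring a component with microcosm looks roughly like this (a sketch based on the project's public README; exact API details may differ slightly):

```python
# A rough illustration of microcosm's convention-over-configuration wiring,
# based on the public README; exact API details may differ.
from microcosm.api import binding, create_object_graph


@binding("hello_world")
def create_hello_world(graph):
    # A factory registered under a name; microcosm wires it into the graph
    return "hello world"


graph = create_object_graph(name="example")
print(graph.hello_world)  # components resolve by convention
```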

The problem

If you have worked on a medium-sized or larger project, whether monolithic or built from microservices, you know that over time, dependencies go stale.

You stop upgrading versions of your dependencies because the process is too complicated and too error-prone.

The solution

Below are all of the phases **each project** goes through during the branching cycle.

The process is 100% automated and driven by CI and internal scripts.

So, let’s go through each phase.

Phase 1

The develop branch is built on the CI after every merge of a feature branch.

In this phase, we unlock all of the dependencies, essentially putting a single . in the requirements.txt file. This forces the build to pull fresh dependencies, picking up all minor and major version upgrades.

In our setup.py we use >=, which means we only ever declare a **minimum** version, never a maximum.
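As a minimal sketch of that layout (the package names and version numbers here are illustrative, not our actual dependency list):

```python
# setup.py -- illustrative sketch; package names and versions are made up
from setuptools import find_packages, setup

setup(
    name="example-service",
    version="1.0.0",
    packages=find_packages(),
    install_requires=[
        # ">=" declares a floor with no ceiling, so an unlocked build
        # always resolves each dependency to its newest release
        "microcosm>=2.0.0",
        "requests>=2.18.0",
    ],
)
```

With requirements.txt reduced to the single line ., running pip install -r requirements.txt just installs the project itself, and pip resolves everything in install_requires to the latest available versions.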

What do we find?

In this phase, we normally find a dependency that is completely broken: it won’t install, it crashes, and so on.

We also usually find that one of our own services is broken, which we then investigate.

Normally, if we need to make a code change, it happens during this phase, and it’s usually minimal since the upgrades are incremental.

Phase 2

The develop branch is the base; from it, we check out a release/2017.xx.yy branch.

During this time, we unlock all of the dependencies (same as phase one).

Once everything is unlocked and all dependencies are installed, we freeze them into the requirements.txt file and **commit it** back to the repository.
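Conceptually, the lock step boils down to something like the following sketch (not our actual CI script; the file name and commit message are hypothetical):

```python
# freeze_deps.py -- a sketch of the lock-and-commit step, not the actual CI script
import subprocess

# Resolve the newest versions allowed by the ">=" floors in setup.py
subprocess.check_call(["pip", "install", "-e", "."])

# Freeze the resolved environment
frozen = subprocess.check_output(["pip", "freeze"]).decode()

# Skip the editable-install line for the project itself, keep everything else
pins = [line for line in frozen.splitlines() if not line.startswith("-e ")]

with open("requirements.txt", "w") as f:
    f.write("\n".join(pins) + "\n")

# Commit the lock file back to the release branch
subprocess.check_call(["git", "add", "requirements.txt"])
subprocess.check_call(["git", "commit", "-m", "Freeze dependencies for release"])
```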

Here’s what a frozen requirements.txt looks like. (The package names and versions below are illustrative; I removed internal library names for sanity.)
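```
# Every dependency, transitive ones included, pinned to an exact version
boto3==1.4.7
flask==0.12.2
microcosm==2.0.0
requests==2.18.4
six==1.11.0
```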

Since we use Docker containers (with a custom in-house layering solution), we ensure that the exact versions we test against are the ones that go forward to staging (and eventually production).

Once dependencies are locked, they are not reopened on that branch.

Phase 3

During phase 3, we tag the release/2017.xx.yy branch with its own tag. That tag is automatically deployed to staging by the CI.

During this process, we only verify that the pip environment is intact; we don’t install dependencies.

If any requirement is not met (meaning we would need to install something), the build fails and alerts the engineers.
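Roughly, the check looks like this (a sketch of the idea; the actual script is internal):

```python
# verify_lock.py -- a sketch of the "nothing left to install" check,
# not the actual internal script
import subprocess
import sys

# Everything already installed in the image
installed = set(subprocess.check_output(["pip", "freeze"]).decode().splitlines())

# Everything the committed lock file says must be installed
with open("requirements.txt") as f:
    required = {line.strip() for line in f if line.strip() and not line.startswith("#")}

missing = required - installed
if missing:
    # Anything that would require an install fails the build loudly
    print("Build failed; lock file not satisfied:")
    for req in sorted(missing):
        print("  " + req)
    sys.exit(1)

print("All pinned dependencies are already installed.")
```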

Phase 4

During phase 4, we merge the tag back into develop and the process continues.

Keep it fresh

Keeping your dependencies fresh ensures you stay on top of security fixes for the holes that inevitably turn up in your dependencies.

It also ensures that all of your projects use the latest versions of your internal libraries.

Automating this process like we did takes the stress out of it. As an engineer, you don’t need to think about it; everything is automated on the CI.

From the QA perspective, you know that if something works in one environment, it will work in the others. If something is broken, it’s not in the underlying infrastructure; it’s in application code. You don’t need to worry that cryptography got upgraded under your feet.

Rock on!

