How Containers Affect DevOps

Written by ashanf, Solutions Architect | Published by HackerNoon on 2019/08/03
Tech Story Tags: docker | containers | devops | container-devops | hackernoon-top-story | containers-devops | devops-containers | software-development

TLDR: Docker containers have become popular due to the benefits they offer for DevOps. Containers affect DevOps in two main ways: containerized applications require specific DevOps steps to build and deploy container images, which are far easier to build and move than virtual machine images; and some DevOps operations can themselves use containers to become more efficient. In container clusters, the runtime environment is handled by a piece of software known as a container orchestrator.

Today, we no longer talk about development and operations in isolation. DevOps actively combines these two, which is an essential factor in the modern software lifecycle. Along the way, Docker containers have also become popular due to the benefits they offer for DevOps. Containers affect DevOps mainly in two ways.

DevOps for Containerized Applications

First, if the software application uses containers, this requires specific steps in DevOps to build and deploy them. Let's look at the typical steps of a containerized application lifecycle. In any containerized application, the application code and the container blueprint (e.g., the Dockerfile) reside in the same code base. When either the container blueprint or the application code changes, we need to build a new container image. Then, we need to store the container image in a container registry like DockerHub.
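As a minimal sketch of this build-and-publish step, assume a hypothetical Node.js application and a DockerHub repository named myuser/myapp (both names are made up for illustration):

```dockerfile
# Dockerfile: the container blueprint, kept in the same repository as
# the application code. Base image and file layout are assumptions.
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
```

Building a new image version and storing it in the registry then comes down to two commands:

```sh
docker build -t myuser/myapp:1.0.1 .
docker push myuser/myapp:1.0.1
```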
So now you understand that for each new deployment, you need a new version of the container image that also includes the application code. Think of it as if, for each change in the source code, you had to build a fresh virtual machine image that includes the application code. The good thing here is that container images are lightweight: mere megabytes, compared to the gigabytes required for a virtual machine image. Therefore, building and moving container images is more straightforward than doing the same with virtual machine images. I hope I have made my point that deploying containers is different from typical application deployments inside hosted virtual machines.
Now, let’s dive into the details of deploying a container image. To understand this, we also need to look deeper into container runtime environments. These environments, typically called container clusters, are handled by a piece of software known as a container orchestrator.
This abstraction is needed because we don't want a container image locked to a fixed virtual machine (or physical server); decoupling the two is what gives us high availability, fault tolerance, and scalability, as well as the portability of containers. Remember, we said these containers are lightweight.
You may have heard the names Kubernetes, Docker Swarm, or Mesos; these are some of the popular container orchestrators available. If we take Kubernetes, for instance, there are container application platforms that offer built-in support to set up a Kubernetes cluster in minutes. These platforms, like Microsoft Azure, OpenShift, and AWS, provide a wide range of features and APIs to simplify the DevOps lifecycle of containers. Handling the underlying complexities of provisioning and managing clusters and cluster nodes, and even providing advanced support for container lifecycle management, are some of their unique selling points. Most of these platforms also offer private container registry support and built-in CI/CD pipelines.
The orchestrator typically handles the provisioning complexity of deploying new container images to a cluster: it pulls the container images from the container registry and provisions them onto the cluster nodes.
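With Kubernetes, for instance, rolling out a new image version can be as simple as the following sketch, assuming a Deployment named myapp already exists (the names are hypothetical):

```sh
# Point the existing Deployment at the new image version; Kubernetes
# pulls it from the registry and performs a rolling update.
kubectl set image deployment/myapp myapp=myuser/myapp:1.0.2

# Watch the rollout until the new containers are up and running.
kubectl rollout status deployment/myapp
```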
Note that it's not just one container we are talking about here. It could be tens or even hundreds of different containers running in a single cluster.
So if we take the entire lifecycle described above, it involves a unique set of DevOps operations at each step: building container images, provisioning containers, and maintaining clusters. In the following section, I'll dive into these steps in more detail from the perspective of DevOps.

Using Containers for DevOps

The second way that containers affect DevOps is that some DevOps operations can themselves utilize containers to become more efficient.
Building and Publishing Container Images
Building container images is an essential part of DevOps. In development environments, if the container blueprint (e.g., the Dockerfile) changes, it is mandatory to rebuild the container image. Otherwise, it is possible to have an optimized setup where only the application code is built and pushed into the container running in the development environment. Either way, it is essential to focus on speeding up the container build step through scripts and automation.
For container clusters, you need to build the container images outside the development environment, on a build server. The CI/CD pipeline typically builds the container image: a properly configured pipeline automatically builds the container images and publishes them to the container registry as its initial steps.
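As a minimal sketch, those initial steps could look like the following declarative Jenkinsfile; the image name myuser/myapp and the Jenkins credentials ID dockerhub are assumptions you would adapt to your own setup:

```groovy
pipeline {
  agent any
  stages {
    stage('Build image') {
      steps {
        // BUILD_NUMBER is provided by Jenkins and used here as the image tag.
        sh 'docker build -t myuser/myapp:${BUILD_NUMBER} .'
      }
    }
    stage('Publish image') {
      steps {
        // 'dockerhub' is a hypothetical credentials entry configured in Jenkins.
        withCredentials([usernamePassword(credentialsId: 'dockerhub',
                                          usernameVariable: 'USER',
                                          passwordVariable: 'PASS')]) {
          sh 'echo "$PASS" | docker login -u "$USER" --password-stdin'
          sh 'docker push myuser/myapp:${BUILD_NUMBER}'
        }
      }
    }
  }
}
```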
Some container registries, like DockerHub, simplify the process by providing built-in image build support. For example, DockerHub provides a clickable option to connect a source code repository (e.g., GitHub) as a trigger to rebuild the image on any code modification. Publishing a built container image to a container registry is quite straightforward, since container registries typically provide command-line tools or APIs to support it.
Previously, we discussed building the application container image on a host machine: typically a development machine or a build server where Docker and the application-specific build tools are already installed.
However, it is also possible to build the container image inside another container, which is one of the use cases of containers for DevOps. For instance, using a container to build another container supports cross-platform builds of the application code and the container image, and guarantees identical build environments on development machines and build servers.
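One common pattern for this, sketched below, is to run the Docker CLI in a container and let it talk to the host's Docker daemon through the mounted Docker socket (the image and tag names are assumptions; note that mounting the socket effectively hands the container control of the host daemon, a known security trade-off):

```sh
# Build the application image from inside a 'docker' CLI container,
# using the host daemon via the mounted socket.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":/workspace -w /workspace \
  docker:latest \
  docker build -t myuser/myapp:dev .
```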
Besides building application container images inside containers, we can also use containers to run continuous integration (CI) tools like Jenkins themselves.
A typical CI pipeline also runs automated tests against the code changes. A more effective way of executing these tests is to do so before merging new code changes into the primary source code repository. With Git source control, the best place to perform these tests is when a Pull Request is sent. If any test case fails, the pipeline should automatically report the status to the Pull Request and prevent it from merging.
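In the Jenkinsfile sketch above, that could be an additional stage like the following; blocking the merge itself is typically enforced on the source-control side, for example with GitHub branch protection rules that require the CI status check to pass:

```groovy
stage('Test') {
  steps {
    // Run the test suite inside the freshly built image, so the tests
    // see exactly the environment that will be deployed. 'npm test'
    // assumes the hypothetical Node.js app from the earlier sketch.
    sh 'docker run --rm myuser/myapp:${BUILD_NUMBER} npm test'
  }
}
```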
Deploying the Container Image to a Cluster
As discussed in the first section of this article, deploying containers to a cluster basically requires invoking the relevant APIs of the underlying container platform or orchestrator.
The complex task of scheduling the containers is typically the responsibility of the container orchestrator. Orchestrators let us define the rules that handle this scheduling complexity (a minimal Kubernetes example follows the list). These rules cover:

- How many instances of a particular container image run at a time.
- Internal networking rules for connecting with other containers.
- Volumes mounted to the containers.
- Container scheduling and lifecycle management across the different nodes in the cluster.
- Internal container resource management.
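Here is a hedged sketch of how some of these rules look when expressed declaratively in a Kubernetes Deployment manifest; all names and values are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
spec:
  replicas: 3                  # how many instances of the container image to run
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myuser/myapp:1.0.2
        ports:
        - containerPort: 8080  # networking: port exposed to other containers
        resources:             # internal container resource management
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        volumeMounts:
        - name: app-data
          mountPath: /data     # volume mounted into the container
      volumes:
      - name: app-data
        emptyDir: {}
```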
Though this may seem complicated, from a DevOps perspective it is a luxury that the orchestrator handles these complexities while we only need to trigger the deployment instruction.
Another use case is using containers as hosts for the DevOps tools that coordinate the build and deployment of containerized application changes. For example, Jenkins running inside a Docker container can drive the CI/CD pipeline. Having multiple instances of Jenkins in containers is especially useful when setting up separate CI/CD environments for different software projects.
However, this comes with a cost: you'll need to mount an external volume to persist previous build results.
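As a minimal sketch using the official Jenkins image, a named volume mounted at Jenkins' home directory keeps job configuration and build history across container restarts:

```sh
# Run Jenkins itself in a container. The named volume 'jenkins_home'
# persists job configuration and previous build results.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```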
Although I haven’t delved much into post-deployment DevOps operations, containers could also play a significant role there.
For instance, we could deploy containers as agents that monitor other containers, working as sidecars to perform cross-cutting operations like log streaming, health checks, and resource monitoring.
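In Kubernetes, the sidecar pattern can be sketched roughly like this, with a hypothetical log-streaming agent reading the application's logs from a shared volume (all image and path names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-sidecar
spec:
  containers:
  - name: myapp
    image: myuser/myapp:1.0.2
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp    # the application writes its logs here
  - name: log-agent                # hypothetical sidecar agent
    image: example/log-streamer:latest
    volumeMounts:
    - name: logs
      readOnly: true
      mountPath: /var/log/myapp    # the sidecar reads the same logs
  volumes:
  - name: logs
    emptyDir: {}
```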

Summary

As you can see, containerized applications benefit from DevOps, and vice versa. Since this is an emerging area, in terms of both application architecture and DevOps, new tools and technologies continuously come out to make things more efficient. Therefore, it is essential to keep an eye on how the existing solutions evolve over time.



