How to implement Continuous Integration and Continuous Delivery in your organization

Written by prashantramnyc | Published 2018/01/29
Tech Story Tags: continuous-integration | agile-methodology | git | continuous-delivery | devops


A tool set for transitioning to Continuous Integration and Continuous Delivery (CI/CD), and an understanding of Git, GitFlow and pull requests for code review

This article describes our journey in transitioning from a traditional Agile sprint team to an Agile Continuous Integration and Continuous Delivery (CI/CD) model. The transition took approximately two to three months to fully integrate into the workflow.

The first section describes the initial landscape in terms of the enterprise product and the initial traditional Agile sprint process. The sections that follow describe our journey, experimentation, possible alternatives and lessons learnt from this experience. Towards the end we explore further improvements to this workflow model.

The Initial Landscape

The Product

The product was a large enterprise software system for the education industry. The system was actively used by different departments and roles, from management to teachers, parents, students, staff and faculty. It covered day-to-day tactical management (attendance, tardiness, homework, etc.), scheduling (vacations, sick days, etc.), performance recording (teacher performance, student performance, etc.), and maintained a large database of students, teachers, faculty and staff across different schools; and a whole lot more. Needless to say, it was a huge enterprise system.

The venture was to maintain this enterprise product and to incrementally add new offerings and features as required by the various departments, integrating them seamlessly into this large enterprise software system.

The Team

The team size was approximately twenty team members. This included developers, business analysts, project managers, and QA. The team was a mix of vendors, sub-contractors and in-house developers, based in dispersed geographical regions and in disparate time zones.

The Technology

The product was a large monolithic enterprise system, with a .NET backend and a SQL database, and a front end using various JavaScript libraries including Kendo UI, jQuery and others. The system interacted with some external platforms and systems via APIs.

The Traditional workflow

We used the traditional Agile approach to releases, with a release happening at the end of every sprint cycle. Each sprint averaged 3–4 weeks, and the release would include features required by more than one department. Thus in the same release some subset of new features was for dept A, whereas other features were for dept B, C or others. We used traditional SVN as the code repository and for version control.

The Issues

  • We were able to release in sprints, however it was not quick enough for the users. Also, different departments had different priorities and needed different release dates for their respective features, and one fixed end-of-sprint release date could not suit the disparate deadlines of the different departments.
  • We needed a more efficient way to manage code merges and coordinate between the large team of developers. We wanted to avoid the issues of merge hell, as we sometimes had 3–5 developers working on the same modules.
  • We needed an effective way to improve code quality and identify bugs early in the codebase.
  • We did not have one seamless pipeline for the workflow and used different platforms to coordinate activities: SVN for the code repo, Basecamp for asset sharing, Jira for project tracking, etc. We wanted to integrate everything into a single seamless workflow pipeline.

So while we applied Agile guidelines to the best possible extent, and adhered to the Agile practices of sprints, scrums, backlogs etc., we felt we could do a lot more in terms of improving our workflow, code quality and time to market.

Benefits we wanted out of this transition

  • Quicker time to market for new features
  • Improved code quality
  • Ability to deploy/release at different intervals to coincide with the schedule of a particular department, instead of having to wait till the end of a sprint
  • Improved coordination between team members to maximize output
  • Better workflow and improved visibility into status of new features

The Transition

We began our journey by first understanding the core workflow infrastructure we would need to have in place. We quickly understood that to establish a Continuous Integration pipeline we would need, at a minimum, the following components in our workflow:

  • a source control repository that allowed for both local and remote source control
  • an automated build agent that polled the source repo and triggered a build whenever new code was checked into the repo
  • automated testing tools, for build tests and smoke tests
  • deployment tools for configuration management and automation (IaC, Infrastructure as Code)

We realized that the build agent was central to the CI process, as it was the component that coordinated triggering the build process, checking the build results, and initiating the next steps.
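To make the build agent's role concrete, here is a minimal sketch of the check-and-trigger loop a CI agent performs; the repository path, branch and build step below are placeholders, not our actual configuration.

```shell
#!/bin/sh
# Minimal sketch of what a CI agent does when it polls a source repo:
# compare the remote tip against the last commit it built, and trigger
# a build only when they differ.
set -eu

latest_commit() {
    # Ask the remote for the tip of a branch without cloning the repo.
    git ls-remote "$1" "refs/heads/$2" | cut -f1
}

poll_and_build() {
    repo_url=$1; branch=$2; state_file=$3
    new=$(latest_commit "$repo_url" "$branch")
    old=$(cat "$state_file" 2>/dev/null || true)
    if [ "$new" != "$old" ]; then
        echo "new commit $new detected; triggering build"
        # A real agent would check out $new here and run the build and smoke tests.
        printf '%s' "$new" > "$state_file"
        return 0
    fi
    echo "no change"
    return 1
}
```

An agent such as Bamboo layers scheduling, build history and test reporting on top of this basic loop, but the core idea is the same.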

We then made a list of the different products and tools available and how they related to each other.

  • Versioning Tools: SVN, Mercurial, Git
  • Version Control GUI: SourceTree(by Atlassian), GitHub Desktop(by Github)
  • Version Control Repo Hosting Service: GitHub, BitBucket, Stash (BitBucket Server)
  • Continuous Integration Agents: Jenkins, TeamCity, Circle CI, Bamboo CI
  • JS Build Tools/JS Task Runners: Grunt, Gulp, Webpack
  • Deployment Tools (Infrastructure as Code): Docker, Chef, Puppet, Ansible, SaltStack, AWS Cloudformation
  • Automated Testing: Selenium

SourceTree, BitBucket, Stash and where they all fit in

After a detailed study and analysis of the available options, the plan was as follows:

  • Use Git as the version control tool.
  • Use Stash (by Atlassian) as the version control hosting service, and use SourceTree (by Atlassian) as the Git GUI.
  • Use Bamboo CI as the continuous integration agent. Bamboo CI would continuously poll the Stash repo, and as and when there was an updated master on the Stash repo, it would trigger a new build process and run automated tests.
  • Use Selenium as the automated testing tool.
  • Use Grunt as the task runner for any front-end JavaScript code.
  • Use Chef as the configuration management tool; eventually the plan was to move to AWS CloudFormation.
  • Use the GitFlow workflow for code development.

Practicing the GitFlow Workflow

The most crucial element of the transition to the CI model was to have the team become well versed in using Git, SourceTree, Stash, and the GitFlow workflow.

GitFlow is a branching model for Git, created by Vincent Driessen. More information on the GitFlow workflow can be found in his original post, “A successful Git branching model”.

Training on the GitFlow workflow was the most challenging part of the transition. In the past the team had only used SVN for version control, so this was a significant paradigm shift. Each developer had to first understand how Git does versioning and learn the commands associated with it. Building on this foundation, the team then delved into understanding the GitFlow process model. This included understanding how the different branches were to be used, how to open pull requests for code reviews, how to resolve merge conflicts, how to merge to master, and so on.

The team did about fifteen to twenty mock practice runs with pseudo projects, each ranging from a few hours to a couple of days to help everyone really understand the new paradigm.

This entire process of GitFlow training took about 2–3 weeks; however, once adopted, the GitFlow workflow allowed us to manage code quality, merges, hotfixes, deployments etc. in a timely and efficient manner.

Here is a brief summary of how we used the GitFlow workflow.

GitFlow workflow; courtesy of Atlassian

We had two primary branches of the code: (a) the Master Branch and (b) the Development Branch.

The Development Branch was forked off the Master Branch. The Master Branch was the single source of truth, carrying an abridged version of the commit history of the Development Branch.

The Master Branch was the code in production. If any bugs were found in production, we analysed whether it was a quick fix or needed further investigation. Quick fixes were handled by branching a Hotfix Branch off the Master Branch. Multiple production bugs were handled in a single Hotfix branch; as a protocol, we did not branch feature branches off the Hotfix branch. Hotfix branches were merged to the Master Branch and the Development Branch. If we had a Release Branch, the hotfix was additionally merged onto the Release Branch.
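The hotfix protocol above can be sketched with plain Git commands in a throwaway repository; the branch name, the "login bug" and the commit identities below are illustrative, not from the real project.

```shell
#!/bin/sh
# Demo of the hotfix protocol in a throwaway repo: branch off master,
# fix, then merge the fix into both master and the Development Branch.
set -eu
repo=$(mktemp -d) && cd "$repo"
g() { git -c user.email=dev@example.com -c user.name=demo "$@"; }
g init -q
prod=$(git symbolic-ref --short HEAD)      # "master" or "main", depending on git defaults
g commit -q --allow-empty -m "code in production"
g branch develop                           # Development Branch forked off master

g checkout -q -b hotfix/login-bug "$prod"  # quick fixes branch straight off master
echo "patched" > login.txt
git add login.txt && g commit -q -m "fix login bug"

g checkout -q "$prod" && g merge -q --no-ff -m "hotfix into master" hotfix/login-bug
g checkout -q develop && g merge -q --no-ff -m "hotfix into develop" hotfix/login-bug
g branch -q -d hotfix/login-bug            # hotfix branches are short-lived; nothing forks off them
```

After the two merges, both master and develop contain the fix and the hotfix branch is gone.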

We forked the Release Branch off the Development Branch whenever we were ready to deploy an incremental update to the enterprise system. The User Acceptance testing and related bug fixes were done on this Release Branch. As a protocol we limited the life span of the Release branch from a few hours to not more than a couple of days.
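In the same throwaway-repo style, the release protocol looks roughly like this; the version number and the "UAT fix" are illustrative.

```shell
#!/bin/sh
# Demo of the release protocol: fork a release branch off develop, land
# UAT fixes on it, then merge it to master (ship) and back to develop.
set -eu
repo=$(mktemp -d) && cd "$repo"
g() { git -c user.email=dev@example.com -c user.name=demo "$@"; }
g init -q
prod=$(git symbolic-ref --short HEAD)
g commit -q --allow-empty -m "production baseline"
g checkout -q -b develop
echo "sprint work" > app.txt
git add app.txt && g commit -q -m "sprint features"

g checkout -q -b release/1.4 develop       # release branch forked off develop
echo "uat fix" >> app.txt
git add app.txt && g commit -q -m "UAT bug fix"

g checkout -q "$prod" && g merge -q --no-ff -m "release 1.4" release/1.4
g checkout -q develop && g merge -q --no-ff -m "back-merge 1.4 fixes" release/1.4
g branch -q -d release/1.4                 # kept alive for hours, at most a couple of days
```

The back-merge to develop is what keeps UAT fixes from being lost when the release branch is deleted.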

Pull requests were central to our workflow in terms of code review and managing code quality. Since the team consisted of a mix of senior and mid-level developers, we established protocols for pull requests and merges to master. As a protocol, pull requests could only be approved by one of the senior developers. The senior developers could then merge to master; if there were merge conflicts, they would tap the related developer to fix the merge conflicts and update the same pull request. Once the merge conflicts were resolved and the pull request approved by a senior developer, either the developer or the pull request approver could merge the code to the Development Branch.

Each developer had Git on their local machine, and pulled from the Development Branch repo on Stash (BitBucket Server) onto their local machine.

All new features were branched off as new feature branches from the local machine's copy of the Development Branch. The developers referred to the local copy of the Development Branch as the local master. Thus all new features were branched off the local master, worked on in their respective feature branches on the local copy, and then merged back to the local master.

For two developers John and Mary the GitFlow process looked something like this,

  1. When Mary was ready to begin work on a new feature, she first ensured that her local master was up to date with the remote master (Development Branch), i.e. Mary pulled from the Stash Development Branch to her local master.

$ git pull origin master

2. Mary then created a new feature branch on the local machine.

$ git checkout -b feature1

3. Mary then pushed this new branch to the remote Stash repo.

$ git push -u origin feature1

4. Mary then continued work on feature1, doing regular commits locally and occasionally pushing the local feature1 branch to the Stash repo.

As a protocol, every developer did at least one daily push to the Stash repo; however, developers were encouraged to push to the Stash repo multiple times throughout the day.

Also, Mary only pushed her feature1 branch to the Stash repo, not her local master. In the meantime it was possible that the remote master (Development Branch) was updated by other developers merging new features into it, making Mary’s local master out of date compared to the remote master. However, Mary would continue development on her local feature branch unconcerned, and continue to push to the remote feature1 branch on Stash.

# The remote master is updated by some other developer.
# Mary continues work in her feature1 branch, unconcerned that her local
# master is now out of date compared to the remote master.

$ git add .
$ git commit -m "These are the updates in this commit"

$ git add .
$ git commit -m "These are some additional updates"
# Mary continues work on her local feature1 branch, committing locally multiple times.

$ git push origin feature1

5. When Mary felt that feature1 was ready to be merged into the Development Branch, Mary went to Stash and opened a pull request on the feature1 branch, notifying one or more of the other developers. Let us assume Mary notified John of the pull request. In this case John would do the following,

$ git fetch origin feature1
$ git checkout feature1
# This pulls the feature1 branch from the remote and checks out a local copy
# of feature1 on John's machine. John would then review the code; if everything
# looked good, John would approve the pull request and merge Mary's code into
# the Development Branch.

6. While merging Mary’s feature1 branch into the Development Branch, it is possible that John would encounter a “merge conflict” if Mary’s local master was out of date with the remote master (Development Branch). In this case John would notify Mary, and Mary would take the following steps to resolve the merge conflicts.

$ git checkout master
$ git pull
# Mary updates her local master from the remote master (Development Branch)

$ git merge feature1
# Git shows the conflicts, which Mary resolves in her editor; she then stages
# and commits the resolved files to complete the merge
$ git add .
$ git commit

$ git push
# Mary pushes the local master to the remote master (Development Branch)

Deployment

We explored the following deployment models:

  • Rolling deployment
  • Blue Green Deployment

We also attempted to understand Canary Deployment and how we wanted to implement A/B Testing.

Difference between Canary Testing and A/B Testing

In Canary Deployment a small subset of users is exposed to the new feature. In canary testing the primary focus is to see how the infrastructure responds to the new feature, and the intention is that the new feature will eventually be rolled out to all production users.

In A/B Testing one group of users sees feature A and the other sees a complementary or alternative feature B, and the intention is to ascertain which strategy works best, A or B, based on the way users interact with each. E.g. does version A of a page give a better conversion rate for users visiting the page, or does version B? In A/B Testing the intention is to find which of the two features within the product the customers respond to more favourably.
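A common way to implement the A/B split is deterministic bucketing: hash a stable user id so that the same user always lands in the same variant across visits. A minimal sketch, where the hash (`cksum`) is just a stand-in for a real hashing scheme:

```shell
#!/bin/sh
# Assign a user to variant A or B by hashing a stable id and bucketing by
# parity; the assignment is deterministic, so repeat visits see the same variant.
variant_for() {
    n=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    if [ $((n % 2)) -eq 0 ]; then echo "A"; else echo "B"; fi
}
```

Metrics such as conversion rate are then compared between the two buckets to pick the winner.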

We used the Blue Green Deployment model and ran two full production environments. The Blue Green Deployment model worked well for us since it (a) allowed us to run quick smoke and UAT tests on the updates in a production environment before we switched to it, and (b) allowed for quick switching to the updated and tested deployed code.
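As a toy model of the switch, a symlink can stand in for the router (in our real setup the switch happened at the load balancer level; the paths below are made up):

```shell
#!/bin/sh
# Toy blue-green switch: two complete environments plus a single pointer
# ("live") that decides which one receives traffic.
set -eu
root=$(mktemp -d)
mkdir -p "$root/blue" "$root/green"
echo "v1" > "$root/blue/index.html"     # blue is live, serving the current release
ln -s "$root/blue" "$root/live"         # users hit whatever "live" points to

echo "v2" > "$root/green/index.html"    # deploy the update to the idle environment
grep -q "v2" "$root/green/index.html"   # smoke/UAT tests run against green first

ln -sfn "$root/green" "$root/live"      # flip the pointer; rollback is flipping it back
```

The flip is a single cheap operation, which is exactly what made both the cut-over and the rollback fast for us.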

Concluding Remarks and Next Steps

The transition from traditional Agile to Agile CI/CD needs a paradigm shift and full commitment from all the stakeholders, from the project managers to the scrum masters, the developers and the clients. The transition needs to be planned, and the development team made aware of the reasons and intent of the change. Implementing the GitFlow workflow and pull requests made a huge difference to our output and to the way the team created and managed code.

In terms of next steps, we are now exploring ways to decompose this large monolithic application into manageable microservices. This is a huge endeavour and an exercise in itself, and there are several challenges. However, transitioning to an Agile CI/CD pipeline has given the team the confidence to start tackling this hard problem.

Also down the road we want to migrate to AWS services and employ an AWS CodeDeploy pipeline for our code deployment. However, these ventures are further along the time horizon.

Published by HackerNoon on 2018/01/29