Configuring Multiple Node Pools

Written by chriscooney | Published 2019/01/09

Customise your EC2 instances to your deployment needs

Picture the scene. You’ve got a few small, simple microservices. They all run beautifully on 200MB of RAM and scale instantly. Engineering bliss. But wait… in this blue sky, a scar in the scenery, lies an omen. A storm cloud in the shape of a monolith. You need to get this beast running in your Kubernetes cluster but it’s dramatically larger than your other applications. In the words of the internet, “wut do?”

Is your Microservice EC2 Instance Type enough?

Your microservices can be packed into small boxes, but your monolith needs some juice. There simply isn’t enough horsepower in one of those small instances to run it as a pod. The purists are going to tell you “Well then, refactor it into microservices”. Reality isn’t often so black and white.

So do you deploy everything onto big boxes?

Your first port of call might be to resize everything — drain the existing nodes and redeploy onto bigger boxes. That way you don’t need to run multiple node pools. This will work! But it comes at a cost. There is a greater chance of wasted resources in your cluster. Bigger boxes mean you’re more likely to have very underutilized instances, costing you money.

Okay, so can I have multiple node types?

Yes you can, you lucky devil. It’s straightforward and requires only a few steps. For the sake of this tutorial, I’m going to assume you’re familiar with Kubernetes concepts like pod or node. If you’re not, you should read through the Kubernetes documentation. Additionally, I’ll assume you’re comfortable with AWS terminology like Autoscaling Group (ASG) and EC2. If you’re not, have a read through the AWS documentation first. For the sake of brevity and clarity, I won’t be including changes to IAM roles and security groups. The official documentation includes these details.

Okay! The Cluster Autoscaler

The first thing you’ll need to hook up is the cluster autoscaler. It has an AWS-specific installation mode. It will detect when there is no free space in your EC2 instances to deploy pods, at which point it will increase the number of EC2 instances in your ASG. It works by adjusting the “desired” field in the ASG, up to a predefined maximum that you’ve set. I won’t go over the installation in detail here because the documentation is pretty good. When you’re following those instructions, be sure to set it up for “Autodiscovery”. That way, when you add new ASGs, it will detect them and you won’t need to reconfigure your cluster.
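For reference, the autodiscovery setup boils down to a couple of flags on the cluster-autoscaler container. Here’s a minimal sketch of the relevant fragment of its Deployment pod spec; the image tag and the cluster name “my-cluster” are placeholders, so swap in whatever matches your setup:

containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/cluster-autoscaler:v1.13.1  # pick the release that matches your Kubernetes version
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster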

Next, your Autoscaling Group

Spinning up an ASG is straightforward. For the sake of example, we’ll be using Terraform. In the following code snippets, we will create an autoscaling group and a launch configuration. The launch configuration is the template that your ASG uses when it’s spinning up new EC2 instances. There are some key things to note here.

  1. We’re ignoring changes to desired_capacity. This is because the cluster autoscaler will be using this value to scale the cluster up and down, and we don’t want to reset it back to 2 every time we run terraform apply.
  2. There are two important tag keys. k8s.io/cluster-autoscaler/enabled is mandatory to ensure that the cluster autoscaler can detect your Auto Scaling groups. The second, k8s.io/cluster-autoscaler/cluster_name, isn’t mandatory but is a good idea if you’re running multiple clusters, since it prevents two clusters picking up the same ASG.

resource "aws_autoscaling_group" "worker_node_asg" {desired_capacity = 2max_size = 10min_size = 1name_prefix = "worker-node-"lifecycle {ignore_changes = ["desired_capacity"]}tag {key = "Name"value = "worker-node"propagate_at_launch = true}tag {key = "k8s.io/cluster-autoscaler/enabled"value = "true"propagate_at_launch = true}tag {key = "k8s.io/cluster-autoscaler/cluster_name"value = "true"propagate_at_launch = true}}

And the launch configuration (template) for your ASG. Note the instance_type field.

resource "aws_launch_configuration" "worker_launch_configuration" {image_id = "${data.aws_ami.worker_ami.id}"instance_type = "t2.medium"name_prefix = "worker-node-"security_groups = ["${some_security_group_ID}"]create_before_destroy = true}

Create this infrastructure for each of the EC2 instance types you need. We’ll assume, from here on out, that you’ve done it twice for two different instance types.
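For example, a second launch configuration for the monolith pool could just swap the instance type and the name prefix. The “monolith” names and the m5.2xlarge type below are purely illustrative; pair it with its own aws_autoscaling_group carrying the same cluster-autoscaler tags:

resource "aws_launch_configuration" "monolith_launch_configuration" {
  image_id        = "${data.aws_ami.worker_ami.id}"
  instance_type   = "m5.2xlarge"  # a bigger box for the monolith
  name_prefix     = "monolith-node-"
  security_groups = ["${some_security_group_ID}"]

  lifecycle {
    create_before_destroy = true
  }
}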

Let’s make the Pods more picky

Your pods are pretty laid back at the moment — they sigh and say “oh just put me anywhere” as they’re flung at the k8s API. We need to give them a bit of choice, and we can do that with node selectors. This will allow the pod to specify the type of node it would like to run on.

Find your label

If you run kubectl get node -o yaml you’ll see the full definition of each node backing your EC2 instances. Look for the beta.kubernetes.io/instance-type label; that’ll tell you what the EC2 instance type is. You can use this in your resource yaml to pick which box you want your applications to deploy to. There are other ways to go about this that don’t pollute your yaml with AWS-specific stuff — as always, be mindful. A simple example yaml might look something like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    beta.kubernetes.io/instance-type: t2.medium

Using the nodeSelector field has given this pod a preference — it’s picky. Once you run kubectl apply -f myfile.yaml and push it to the k8s API, Kubernetes will find a node with the correct instance type and schedule the pod onto it. If no such node exists, the cluster autoscaler will scale up the matching autoscaling group to create an instance for you.
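If you want to double-check where the pod landed, kubectl can show you the node it was scheduled onto and the instance type behind each node (the -o wide and -L flags do the heavy lifting here):

kubectl get pod nginx -o wide
kubectl get nodes -L beta.kubernetes.io/instance-type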

And there you have it!

You’ve got a cluster that supports multiple EC2 instance types. You can add as many ASGs as you like. Go and take over the world you little scamp!

If you enjoyed this tutorial, I’m regularly writing and throwing other articles about on Twitter.

