Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster)

Written by ivanfioravanti | Published 2017/08/25
Tech Story Tags: kubernetes


My goal for today is installing a simple Hybrid Kubernetes cluster on Microsoft Azure with:

  • 1 Linux Master
  • 2 Linux Nodes in a LinuxAgentPool
  • 2 Windows Nodes in a WindowsAgentPool

This is something that can’t be accomplished out of the box with the current version of Azure Container Services, but yesterday Brendan Burns confirmed they are working on it!

So how can we install this cluster right now? The answer is ACS-Engine, an open-source project from Microsoft that takes a "cluster definition" as input and returns an ARM template, plus additional artifacts (e.g. kubeconfig files), as output.

Let’s start!

To make things easier, I'll reuse some of the artifacts created in Part 1 and Part 2.

  • Create a folder named acs to store all your working artifacts
  • Clone the ACS Engine repository: git clone https://github.com/Azure/acs-engine
  • Go to the examples folder: cd acs-engine/examples, where you can find many cluster definition examples. The windows folder there contains the definition we need: kubernetes-hybrid.json

Important note: There is an issue with the current release 0.5.0: the examples in the source tree are newer than the release and therefore not compatible with it. Here we have two options:

  • Complex: Compile a new Release locally
  • Easy: Change Examples to make them compatible with old version

We will choose the complex one! Joking… I tried the complex route, but it's not trivial on macOS at the moment, because you need to clone the acs-engine source into $GOPATH/src, and it deserves a separate article of its own. So let's go for the easy solution.

  • Open the kubernetes-hybrid.json file in your favorite editor and change the lines below

from:

"servicePrincipalProfile": {"clientId": "","secret": ""}

to:

"servicePrincipalProfile": {"servicePrincipalClientID": "","servicePrincipalClientSecret": ""}
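If you prefer to script the edit, a sed sketch along these lines should work. The stub line only creates a sample file when none exists, so the snippet can run stand-alone; normally you would run it against the real example file:

```shell
# Rename the keys to the names acs-engine 0.5.0 expects.
# FILE defaults to the current directory; adjust to your checkout.
FILE=${FILE:-kubernetes-hybrid.json}
# Demo stub so the snippet is self-contained; skip if the real file exists.
[ -f "$FILE" ] || echo '"servicePrincipalProfile": {"clientId": "","secret": ""}' > "$FILE"
# Edit in place, keeping a .bak backup (works on both GNU and BSD sed).
sed -i.bak \
  -e 's/"clientId"/"servicePrincipalClientID"/' \
  -e 's/"secret"/"servicePrincipalClientSecret"/' \
  "$FILE"
```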

Wait, don't close the file! We want to use the latest Kubernetes version in our cluster, right? So add the orchestratorVersion entry shown below:

"orchestratorProfile": {"orchestratorType": "Kubernetes","orchestratorVersion": "1.7.2"},

Now we can follow the steps in Deploy a Kubernetes Cluster, which describes a short and a long way to deploy your cluster.

Short way drawbacks

Here we will follow the short way, but before starting I want to highlight some drawbacks:

  • DNS prefix: a random suffix will be appended to the prefix passed as an argument to make it unique
  • A new application (service principal) will be created in your subscription each time you run the deploy command, and it will be left behind when you delete the resource group, because it is not part of it
  • A new SSH key pair will be created for deploy

Solution 1: We can avoid all these drawbacks by creating the artifacts once (application, SSH key pair, DNS) and adding their values to the template JSON file used for deployment. I will do this for SSH.
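For the SSH part, a one-off key pair can be generated like this (a sketch: the acs_id_rsa path is my arbitrary choice, not something acs-engine requires, and the empty passphrase is for brevity only):

```shell
# Generate a reusable key pair once; reuse it across deployments.
KEYFILE=${KEYFILE:-$HOME/.ssh/acs_id_rsa}
mkdir -p "$(dirname "$KEYFILE")"
# -N "" sets an empty passphrase (fine for a demo, not for production).
[ -f "$KEYFILE" ] || ssh-keygen -t rsa -b 2048 -f "$KEYFILE" -N "" -q
# The public key line (starts with "ssh-rsa") is what goes into keyData:
cat "${KEYFILE}.pub"
```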

Solution 2: At the end of the first deploy, acs-engine generates an _output folder with the generated artifacts. It contains a file called azuredeploy.parameters.json with all the parameter values, which can be copied into the cluster definition file you used. That's all.

Deployment

In previous parts of this series I created an SSH key pair, and I want to reuse it instead of generating a new one. This can be accomplished by copying the content of the .pub file into the section of the JSON file shown below. The next steps will then skip SSH key pair generation, because a key is already present in the template.

"linuxProfile": {"adminUsername": "azureuser","ssh": {"publicKeys": [{"keyData": "YOURKEY starts with ssh-rsa"}]}},

For the deploy you need the ID of the subscription you are working on, which can easily be retrieved with az account show:

./acs-engine deploy --subscription-id YOURSUBSCRIPTIONIDHERE --dns-prefix ivank8stest --location westeurope --auto-suffix --api-model ~/acs/acs-engine/examples/windows/kubernetes-hybrid.json

Note: If you want to avoid passing dns-prefix as an argument, you can add it to the template JSON file as we did for keyData; the same trick can be used for any element in the file. acs-engine is smart enough to skip steps when the data is already present, e.g. ServicePrincipalProfile creation.
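For example, the dnsPrefix can live in the masterProfile section of the cluster definition itself. A sketch of that section (the count and vmSize values are illustrative defaults, not prescriptive):

```json
"masterProfile": {
  "count": 1,
  "dnsPrefix": "ivank8stest",
  "vmSize": "Standard_D2_v2"
}
```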

If everything went well you should see something like:

WARN[0002] apimodel: missing masterProfile.dnsPrefix will use "ivank8stest-59a13555"
WARN[0002] apimodel: ServicePrincipalProfile was missing or empty, creating application...
WARN[0004] created application with applicationID (11111111-048e-420f-afad-1fe450036077) and servicePrincipalObjectID (22222222-b0fb-4395-9576-6bb18481f88f).
WARN[0004] apimodel: ServicePrincipalProfile was empty, assigning role to application...
INFO[0030] Starting ARM Deployment (myAcsTest-1222046555). This will take some time...

Coffee break is needed!

INFO[0526] Finished ARM Deployment (myAcsTest2-676730522).

We did it! Our Hybrid Kubernetes Cluster seems up and running! Let’s test it now.

Connect to your Kubernetes cluster

When the deployment is completed you should have an _output folder with a subfolder named after the dns-prefix of your super cluster. Go into it.

It contains the artifacts generated by acs-engine, and the kubeconfig files are the really useful ones for us, as described in step 9 of the deployment guide. I will try to make it super simple here.

From your terminal, run the following commands, pointing at your _output folder:

export KUBECONFIG=~/acs/acs-engine/_output/ivank8stest-59a13ac6/kubeconfig/kubeconfig.westeurope.json
kubectl get nodes

NAME                        STATUS                        AGE   VERSION
23586acs9010                Ready                         1s    v1.7.2-4+b0c9ea2463aba4
23586acs9011                Ready                         3s    v1.7.2-4+b0c9ea2463aba4
k8s-linuxpool1-23586643-0   NotReady                      2s    v1.7.2
k8s-linuxpool1-23586643-1   NotReady                      5s    v1.7.2
k8s-master-23586643-0       NotReady,SchedulingDisabled   7s    v1.7.2

It’s up and running and we can use it with our local kubectl!

Now let's try to deploy some Linux and Windows containers!

Linux Containers

We are going to deploy the same azure vote app deployed in Part 1, but with a small change in the YAML file so that it is scheduled on os: linux nodes.
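The change itself is a nodeSelector on each pod template, which pins the pods to Linux nodes; in Kubernetes 1.7 the relevant label is beta.kubernetes.io/os:

```yaml
# Added under spec.template.spec of each Deployment:
nodeSelector:
  beta.kubernetes.io/os: linux
```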

Here is the content of the azure-vote.yaml file (the nodeSelector entries are the changes):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
      nodeSelector:
        beta.kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
      nodeSelector:
        beta.kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

You can now deploy it using: kubectl create -f azure-vote.yaml

Leave Kubernetes working and let’s move to…

Windows Containers

Here we will use the artifacts from Part 2, condensed into a single iisdeploymentfull.yaml file describing the whole deployment.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: iis
    spec:
      containers:
      - name: iis
        image: microsoft/iis
        ports:
        - containerPort: 80
          name: iis
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  ports:
  - port: 80
  selector:
    app: iis

Again we can deploy it using: kubectl create -f iisdeploymentfull.yaml

Now we need to wait a few minutes for everything to be up and running.

Check status from Kubernetes Dashboard

Connect to your Kubernetes Dashboard through the usual proxy channel (kubectl proxy) and open a browser at: http://127.0.0.1:8001/ui

From here, feel free to navigate the various sections to check the status of your cluster, pods, deployments and services.

From Services you will be able to retrieve the external endpoints that you can use to test both the iis and azure-vote deployments.

Let’s close this article with a quick scale of both deployments to push our cluster:

kubectl scale deployments/azure-vote-front --replicas 100

kubectl scale deployments/iis --replicas 4

Final result:

Kubernetes Hybrid Cluster running on Microsoft Azure

As always you can delete everything with a simple Azure CLI 2 command: az group delete --name myAcsTest --yes --no-wait

We did it again! Now we are Azure Kubernetes Masters and we can easily deploy whatever we want!

My adventure will continue with ingress distributed on Linux and Windows nodes, Monitoring, Autoscale on Azure and much more.

