AKS Engine is a powerful open source tool to deploy and manage a Kubernetes cluster on Microsoft Azure. While it is best known as the foundation of Microsoft’s own managed Azure Kubernetes Service (AKS), it is also used directly by users who want more control over their clusters.

For example, while AKS is restricted to using Ubuntu or Windows, AKS Engine allows you to specify other operating systems at install time. Why would you want to do that? Well, to get the benefits of a minimal, immutable, container-optimized Linux like Flatcar Container Linux:

  • increased security and manageability at scale
  • reduced attack surface area
  • auto-updates with the latest patches
  • optimized for Azure and Kubernetes.

Indeed, the National Institute of Standards and Technology (NIST) advises deploying containerized applications on a container-optimized OS like Flatcar: “Whenever possible, organizations should use these minimalistic OSs to reduce their attack surfaces and mitigate the typical risks and hardening activities associated with general-purpose OSs.”

So you’re convinced and want to get started? In the rest of this blog, we’ll dig into how to deploy a Kubernetes cluster using AKS Engine, with Flatcar Container Linux as the base operating system for its worker nodes.

Authenticate to Azure

The first step to deploying an AKS Engine cluster is to authenticate to Azure. Follow the instructions to install the Azure CLI and sign in to your Azure account.
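For example, once the CLI is installed, signing in and confirming which subscription is selected takes two commands:

az login
az account show -o table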

Download aks-engine binary

Download and extract the latest release of AKS Engine from the GitHub releases page; at the time of writing, that was v0.57.0:

wget https://github.com/Azure/aks-engine/releases/download/v0.57.0/aks-engine-v0.57.0-linux-amd64.tar.gz
tar xvf aks-engine-v0.57.0-linux-amd64.tar.gz
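
As a quick sanity check, the extracted binary can print its version:

aks-engine-v0.57.0-linux-amd64/aks-engine version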

Install Terraform

To make this blog post reproducible, we use Terraform to create the needed Azure resources. Follow the instructions to download and install it.
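
Once installed, verify that Terraform is available on your PATH:

terraform version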

Create service principal and resource group

To deploy an AKS Engine cluster, you need to create a resource group where the cluster resources live and a service principal to access those resources. We do this as follows.

Note: creating a service principal is a privileged operation, so you might have to contact your Azure Active Directory admin to create one.
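
In that case, the admin could create one directly with the Azure CLI instead (placeholder subscription ID shown) and hand you the resulting appId and password to use as client ID and secret, in place of the azuread resources below:

az ad sp create-for-rbac --name test-flatcar-cluster \
    --role Contributor \
    --scopes "/subscriptions/<subscription-id>"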

Create a file named azure-setup.tf with the following contents:

# providers
provider "azurerm" {
  features {}
}

terraform {
  required_version = ">= 0.13"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.38.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "1.1.1"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.0.0"
    }
  }
}

# variables
variable "subscription_id" {}
variable "resource_group_location" {}
variable "cluster_name" {}

# resources
resource "azuread_application" "aks_engine_cluster" {
  name = var.cluster_name
}

resource "azuread_service_principal" "aks_engine" {
  application_id = azuread_application.aks_engine_cluster.application_id
}

resource "azurerm_resource_group" "resource_group" {
  name     = var.cluster_name
  location = var.resource_group_location
}

resource "random_string" "password" {
  length           = 16
  special          = true
  override_special = "/@\""
}

resource "azuread_application_password" "ad_password" {
  application_object_id = azuread_application.aks_engine_cluster.object_id
  value                 = random_string.password.result
  end_date_relative     = "86000h"
}

resource "azurerm_role_assignment" "main" {
  scope                = "/subscriptions/${var.subscription_id}"
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aks_engine.id
}

# outputs
output "client-id" {
  value = azuread_application.aks_engine_cluster.application_id
}

output "client-secret" {
  value = azuread_application_password.ad_password.value
}

Before running Terraform, we need to define three variables:

  • subscription_id: The ID of your Azure subscription. You can see your existing Azure subscriptions and their IDs in the Azure Portal, or query the current one with the CLI as shown after this list.
  • resource_group_location: The region where the cluster is deployed, for example norwayeast. You can get a list of locations available to your account by running az account list-locations -o table.
  • cluster_name: The name of your cluster.
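
If you prefer the CLI to the Portal, the ID of the currently selected subscription can be queried directly (assuming you are signed in with az login):

az account show --query id -o tsv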

Define them as environment variables as they are needed in several commands:

export TF_VAR_subscription_id=11111111-2222-3333-4444-555555555555
export TF_VAR_resource_group_location=norwayeast
export TF_VAR_cluster_name=test-flatcar-cluster

Next, run Terraform:

terraform init
terraform apply

When Terraform is finished, it outputs the client-id value; client-secret is marked sensitive, so reveal it with terraform output client-secret. Both values will be used later, so store them in environment variables too:

CLIENT_ID="<client-id>"
CLIENT_SECRET="<client-secret>"
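
On Terraform 0.15 or later, the -raw flag lets you capture both values without manual copying:

CLIENT_ID=$(terraform output -raw client-id)
CLIENT_SECRET=$(terraform output -raw client-secret)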

Create Kubernetes cluster definition

The next step is to create a Kubernetes cluster definition: a JSON file that defines and configures the AKS Engine cluster.

Create a file named kubernetes-flatcar.json with the following contents:

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPlugin": "kubenet"
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "",
      "vmSize": "Standard_D2_v3",
      "distro": "ubuntu"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v3",
        "availabilityProfile": "AvailabilitySet",
        "distro": "flatcar"
      }
    ],
    "linuxProfile": {
      "adminUsername": "core",
      "ssh": {
        "publicKeys": [
          {
            "keyData": ""
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "",
      "secret": ""
    }
  }
}

Note that this cluster definition uses Flatcar Container Linux for the worker nodes but Ubuntu for the master node. The empty fields (dnsPrefix, keyData, clientId, secret) are filled in by the aks-engine deploy flags below; if keyData is left empty, aks-engine generates an SSH key pair for you.

Currently, Flatcar Container Linux is only supported on worker nodes. There are some cloud-init directives used in control plane provisioning that are not supported by coreos-cloudinit, Flatcar’s minimal cloud-init implementation. This Azure/aks-engine issue tracks Flatcar support for the control plane.

Deploy AKS Engine cluster

Deploy the AKS Engine cluster using the aks-engine CLI tool:

aks-engine-v0.57.0-linux-amd64/aks-engine deploy --api-model kubernetes-flatcar.json \
    --dns-prefix "$TF_VAR_cluster_name" \
    --resource-group "$TF_VAR_cluster_name" \
    --location "$TF_VAR_resource_group_location" \
    --client-id "$CLIENT_ID" \
    --client-secret "$CLIENT_SECRET" \
    --set servicePrincipalProfile.clientId="$CLIENT_ID" \
    --set servicePrincipalProfile.secret="$CLIENT_SECRET"

After approximately 10 minutes the cluster should be deployed.

To access the cluster, you can find its kubeconfig in ./_output/<cluster-name>/kubeconfig/kubeconfig.<location>.json

Let’s use the kubeconfig to check that the cluster is accessible and that worker nodes are running Flatcar Container Linux:

export KUBECONFIG="$PWD/_output/$TF_VAR_cluster_name/kubeconfig/kubeconfig.$TF_VAR_resource_group_location.json"
kubectl get nodes -o wide

The output should show all nodes as Ready:

NAME                        STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-agentpool1-37781349-0   Ready    agent    3m9s   v1.18.9   10.240.0.4     <none>        Flatcar Container Linux by Kinvolk 2605.8.0 (Oklo)   5.4.77-flatcar      docker://19.3.12
k8s-agentpool1-37781349-1   Ready    agent    3m9s   v1.18.9   10.240.0.6     <none>        Flatcar Container Linux by Kinvolk 2605.8.0 (Oklo)   5.4.77-flatcar      docker://19.3.12
k8s-agentpool1-37781349-2   Ready    agent    3m9s   v1.18.9   10.240.0.5     <none>        Flatcar Container Linux by Kinvolk 2605.8.0 (Oklo)   5.4.77-flatcar      docker://19.3.12
k8s-master-37781349-0       Ready    master   3m9s   v1.18.9   10.240.255.5   <none>        Ubuntu 16.04.7 LTS                                   4.15.0-1100-azure   docker://19.3.12

From this point on you can start using your Kubernetes AKS Engine cluster running on Flatcar Container Linux!
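
As a quick smoke test (an example workload, not part of the deployment itself), you can schedule a pod and confirm it lands on one of the Flatcar workers:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide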

Cleanup

To clean up the cluster and all its associated Azure resources, run the command below. Destroying the resource group also removes the resources aks-engine created inside it:

terraform destroy
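
Terraform only manages the Azure resources; the files aks-engine generated locally under _output (including the kubeconfig used above) can be deleted separately:

rm -rf _output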

Conclusion

In this blog post we showed how to deploy a Kubernetes cluster with AKS Engine on Flatcar Container Linux. In a future post, we will show how to deploy clusters using the new Cluster API support for Azure (CAPZ), just as soon as Flatcar support is integrated there: watch this space.

Thank you to all our friends at Microsoft for their help in enabling Flatcar Container Linux in Azure and AKS Engine.

If you encounter problems, please let us know by filing an issue. See our contact page for inquiries about support.
