How does a Kubernetes setup look the Hab(itat) way? In this blog post we will explore how to use Habitat’s application automation to set up and run a Kubernetes cluster from scratch, based on the well-known “Kubernetes The Hard Way” guide by Kelsey Hightower.

A detailed README with step-by-step instructions and a Vagrant environment can be found in the Kubernetes The Hab Way repository.

Kubernetes Core Components

To recap, let’s have a brief look at the building blocks of a Kubernetes cluster and their purpose:

  • etcd, the distributed key-value store used by Kubernetes for persistent storage,
  • the API server, the API frontend to the cluster’s shared state,
  • the controller manager, responsible for ensuring the cluster reflects the configured state,
  • the scheduler, responsible for distributing workloads on the cluster,
  • the network proxy, for service network configuration on cluster nodes, and
  • the kubelet, the “primary node agent”.

For each of the components above, you can now find core packages on bldr.habitat.sh. Alternatively, you can fork the upstream core plans or build your own packages from scratch; the Habitat Studio makes this process easy.
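For example, installing a prebuilt package from Builder or building one from a plan in the Studio takes only a couple of commands (core/etcd is used here purely as an illustration):

$ sudo hab pkg install core/etcd
$ hab studio enter   # run from a plan directory, then execute `build` inside the Studio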

By packaging Kubernetes components with Habitat, we can use Habitat’s application delivery pipeline and service automation, and benefit from them just as with any other Habitat-managed application.

Deploying services

Deployment of all services follows the same pattern: first loading the service and then applying custom configuration. Let’s have a look at the setup of etcd to understand how this works in detail:

To load the service with default configuration, we use the hab sup load subcommand:

$ sudo hab sup load core/etcd --topology leader
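This assumes a Habitat supervisor is already running on the node; the Vagrant environment takes care of that, but started by hand it would look roughly like this:

$ sudo hab sup run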

Then we apply custom configuration. For Kubernetes we want to use client and peer certificate authentication instead of autogenerated SSL certificates. We have to upload the certificate files and change the corresponding etcd configuration parameters:

$ for f in /vagrant/certificates/{etcd.pem,etcd-key.pem,ca.pem}; do sudo hab file upload etcd.default 1 "${f}"; done

$ cat /vagrant/config/svc-etcd.toml
etcd-auto-tls = "false"
etcd-http-proto = "https"

etcd-client-cert-auth = "true"
etcd-cert-file = "files/etcd.pem"
etcd-key-file = "files/etcd-key.pem"
etcd-trusted-ca-file = "files/ca.pem"

etcd-peer-client-cert-auth = "true"
etcd-peer-cert-file = "files/etcd.pem"
etcd-peer-key-file = "files/etcd-key.pem"
etcd-peer-trusted-ca-file = "files/ca.pem"

$ sudo hab config apply etcd.default 1 /vagrant/config/svc-etcd.toml

Since service configuration with Habitat is per service group, we don’t have to do this for each member instance of etcd. The Habitat supervisor will distribute the configuration and files to all instances and reload the service automatically.
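To verify that the new configuration has been picked up, you can ask the supervisor for the status of its loaded services on any node (a quick sanity check, not part of the original walkthrough):

$ sudo hab sup status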

If you follow the step-by-step setup on GitHub, you will notice that the same pattern applies to all the other components.
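For example, loading and configuring the API server follows the same two steps; the package name, bind name, and config file shown here follow the repository’s pattern and may differ in detail:

$ sudo hab sup load core/kubernetes-apiserver --bind etcd:etcd.default
$ sudo hab config apply kubernetes-apiserver.default 1 /vagrant/config/svc-apiserver.toml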

Per-instance configuration

Sometimes, though, each instance of a service requires custom configuration parameters or files. With Habitat, all configuration is shared within the service group and it is not possible to provide configuration to a single instance only. In this case we have to fall back to “traditional infrastructure provisioning”. Also, uploaded files are limited to 4096 bytes, which is sometimes not enough.

In the Kubernetes setup, each kubelet needs a custom kubeconfig, CNI configuration, and a per-node certificate. For this we create a directory (/var/lib/kubelet-config/) and place the files there before loading the service. The Habitat service configuration then points to the files in that directory:

$ cat config/svc-kubelet.toml
kubeconfig = "/var/lib/kubelet-config/kubeconfig"

client-ca-file = "/var/lib/kubelet-config/ca.pem"

tls-cert-file = "/var/lib/kubelet-config/node.pem"
tls-private-key-file = "/var/lib/kubelet-config/node-key.pem"

cni-conf-dir = "/var/lib/kubelet-config/cni/"
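Putting it together, provisioning a single node might look roughly like this (the paths match the config above; the core/kubelet package name and file names are assumed for illustration, see the repository for the exact steps):

$ sudo mkdir -p /var/lib/kubelet-config/cni
$ sudo cp kubeconfig ca.pem node.pem node-key.pem /var/lib/kubelet-config/
$ sudo hab sup load core/kubelet
$ sudo hab config apply kubelet.default 1 /vagrant/config/svc-kubelet.toml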

Automatic service updates

If desired, Habitat services can be updated automatically by the supervisor once a new version is published on a channel. To enable this, load a service with --strategy set to either at-once or rolling; by default, automatic updates are disabled. With this, Kubernetes components can be made self-updating, an interesting topic that could be explored in the future.
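For example, loading etcd with a rolling update strategy against the stable channel would look like this (--strategy and --channel are standard hab sup load flags):

$ sudo hab sup load core/etcd --topology leader --strategy rolling --channel stable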

Conclusion

We have demonstrated how Habitat can be used to build, set up, and run Kubernetes cluster components in the same way as any other application.

If you are interested in using Habitat to manage your Kubernetes cluster, keep an eye on this blog and the “Kubernetes The Hab Way” repository for future updates and improvements. Also, have a look at the Habitat operator, a Kubernetes operator for Habitat services, which allows you to run Habitat services on Kubernetes.
