# Lokomotive Equinix Metal quickstart guide

## Introduction
This guide shows how to create a Lokomotive cluster on Equinix Metal (formerly Packet). By the end of this guide, you’ll have a basic Lokomotive cluster running on Equinix Metal with a demo application deployed.
The guide uses `c3.small.x86` as the Equinix Metal device type for all created nodes. This is also the default device type.
> NOTE: Visit the Equinix Metal website to see all available device types as well as their pricing.
Lokomotive runs on top of Flatcar Container Linux. This guide uses the `stable` channel.
The guide uses Amazon Route 53 as a DNS provider. For more information on how Lokomotive handles DNS, refer to this document.
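If you have the AWS CLI installed, you can verify that the Route 53 zone you plan to use exists before proceeding. The zone name below is a placeholder for your own domain:

```bash
# List hosted zones whose name matches your domain (replace example.com).
aws route53 list-hosted-zones-by-name --dns-name example.com
```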
Lokomotive can store Terraform state locally or remotely in an AWS S3 bucket. By default, Lokomotive stores Terraform state locally.
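If you prefer remote state, a `backend` block can be added to the cluster configuration file created in step 2 below. The following is a minimal sketch, assuming a pre-existing S3 bucket named `my-lokomotive-state`; check the Lokomotive configuration reference for the exact supported fields:

```hcl
# Sketch only - the bucket name, key and region are placeholders.
backend "s3" {
  bucket = "my-lokomotive-state" # Pre-existing S3 bucket for Terraform state.
  key    = "lokomotive-demo"     # Object key under which the state is stored.
  region = "eu-central-1"        # Region the bucket lives in.
}
```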
Lokomotive components complement the “stock” Kubernetes functionality by adding features such as load balancing, persistent storage and monitoring to a cluster. To keep this guide short, you will deploy a single component - `httpbin` - which serves as a demo application to verify the cluster behaves as expected.
## Requirements
- Equinix Metal account with a project created and local BGP enabled.
- Equinix Metal project ID.
- Equinix Metal user level API key with access to the relevant project.
- An AWS account.
- An AWS access key ID and secret of a user with permissions to edit Route 53 records.
- An AWS Route 53 zone (can be a subdomain).
- An SSH key pair for accessing the cluster nodes (see the example after this list).
- Terraform v0.13.x installed.
- `kubectl` installed.
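If you don’t have an SSH key pair yet, you can generate one. The key type and file path below are just common defaults:

```bash
# Generate a new RSA key pair (accept the default path or pick your own).
ssh-keygen -t rsa -b 4096 -C "lokomotive-demo"
# Print the public key; its contents go into the ssh_pubkeys list in step 2.
cat ~/.ssh/id_rsa.pub
```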
> NOTE: The `kubectl` version used to interact with a Kubernetes cluster needs to be compatible with the version of the Kubernetes control plane. Ideally you should install a `kubectl` binary whose version is identical to the Kubernetes control plane included with a Lokomotive release. However, some degree of version “skew” is tolerated - see the Kubernetes version skew policy document for more information. You can determine the version of the Kubernetes control plane included with a Lokomotive release by looking at the release notes.
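To check which `kubectl` version is installed locally and compare it against the release notes:

```bash
# Print the version of the local kubectl binary.
kubectl version --client
```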
## Steps

### Step 1: Install lokoctl
`lokoctl` is the command-line interface for managing Lokomotive clusters.

Download the latest `lokoctl` binary for your platform:
```bash
export os=linux  # For macOS, use `os=darwin`.
export release=$(curl -s https://api.github.com/repos/kinvolk/lokomotive/releases | jq -r '.[0].name')
curl -LO "https://github.com/kinvolk/lokomotive/releases/download/${release}/lokoctl_${release}_${os}_amd64.tar.gz"
```
Extract the binary and copy it to a place under your `$PATH`:
```bash
tar zxvf lokoctl_${release}_${os}_amd64.tar.gz
sudo cp lokoctl_${release}_${os}_amd64/lokoctl /usr/local/bin
rm -rf lokoctl_${release}_${os}_amd64*
```
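To confirm the binary is installed and on your `$PATH`, print its version:

```bash
lokoctl version
```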
### Step 2: Create a cluster configuration
Create a directory for the cluster-related files and navigate to it:
```bash
mkdir lokomotive-demo && cd lokomotive-demo
```
Create a file named `cluster.lokocfg` with the following contents:
cluster "equinixmetal" {
asset_dir = "./assets"
cluster_name = "lokomotive-demo"
dns {
zone = "example.com"
provider = "route53"
}
facility = "ams1"
project_id = "89273817-4f44-4b41-9f0c-cb00bf538542"
controller_type = "c3.small.x86"
ssh_pubkeys = ["ssh-rsa AAAA..."]
management_cidrs = ["0.0.0.0/0"]
node_private_cidrs = ["10.0.0.0/8"]
controller_count = 1
worker_pool "pool-1" {
count = 2
node_type = "c3.small.x86"
}
}
# A demo application.
component "httpbin" {
ingress_host = "httpbin.example.com"
}
Replace the parameters above using the following information:
- `dns.zone` - a Route 53 zone name. A subdomain will be created under this zone in the following format: `<cluster_name>.<zone>`.
- `project_id` - the Equinix Metal project ID to deploy the cluster in.
- `ssh_pubkeys` - a list of strings representing the contents of the public SSH keys which should be authorized on cluster nodes.
The rest of the parameters may be left as-is. For more information about the configuration options, see the configuration reference.
### Step 3: Deploy the cluster
> NOTE: If you have the AWS CLI installed and configured for an AWS account, you can skip setting the `AWS_*` variables below. `lokoctl` follows the standard AWS authentication methods, which means it will use the `default` AWS CLI profile if no explicit credentials are specified. Similarly, environment variables such as `AWS_PROFILE` can be used to instruct `lokoctl` to use a specific AWS CLI profile for AWS authentication.
Set up your Equinix Metal and AWS credentials in your shell:
```bash
export PACKET_AUTH_TOKEN=k84jfL83kJF849B776Nle4L3980fake
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7FAKE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYFAKE
```
Add a private key corresponding to one of the public keys specified in `ssh_pubkeys` to your `ssh-agent`:
```bash
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
Deploy the cluster:
```bash
lokoctl cluster apply -v
```
The deployment process typically takes about 15 minutes. Upon successful completion, an output similar to the following is shown:
```
Your configurations are stored in ./assets

Now checking health and readiness of the cluster nodes ...

Node                              Ready    Reason          Message

lokomotive-demo-controller-0      True     KubeletReady    kubelet is posting ready status
lokomotive-demo-pool-1-worker-0   True     KubeletReady    kubelet is posting ready status
lokomotive-demo-pool-1-worker-1   True     KubeletReady    kubelet is posting ready status

Success - cluster is healthy and nodes are ready!
```
## Verification
Use the generated `kubeconfig` file to access the cluster:
```bash
export KUBECONFIG=$(pwd)/assets/cluster-assets/auth/kubeconfig
kubectl get nodes
```
Sample output:
```
NAME                              STATUS   ROLES    AGE   VERSION
lokomotive-demo-controller-0      Ready    <none>   33m   v1.17.4
lokomotive-demo-pool-1-worker-0   Ready    <none>   33m   v1.17.4
lokomotive-demo-pool-1-worker-1   Ready    <none>   33m   v1.17.4
```
Verify all pods are ready:
```bash
kubectl get pods -A
```
Verify you can access httpbin:
```bash
kubectl -n httpbin port-forward svc/httpbin 8080

# In a new terminal.
curl http://localhost:8080/get
```
Sample output:
```json
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.70.0"
  },
  "origin": "127.0.0.1",
  "url": "http://localhost:8080/get"
}
```
## Using the cluster
At this point you should have access to a Lokomotive cluster and can use it to deploy applications.
If you don’t have any Kubernetes experience, you can check out the Kubernetes Basics tutorial.
> NOTE: Lokomotive uses a relatively restrictive Pod Security Policy by default. This policy disallows running containers as root. Refer to the Pod Security Policy documentation for more details.
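As an illustration of what such a policy expects, here is a minimal sketch of a pod that runs as a non-root user. The image is an arbitrary placeholder that happens to run unprivileged, not something the policy prescribes:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo
spec:
  securityContext:
    runAsNonRoot: true   # Reject the pod if its containers try to run as root.
  containers:
  - name: demo
    image: nginxinc/nginx-unprivileged  # Placeholder image with a non-root default user.
    ports:
    - containerPort: 8080
EOF
```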
## Cleanup
To destroy the cluster, execute the following command:
```bash
lokoctl cluster destroy -v
```
Confirm you want to destroy the cluster by typing `yes` and hitting Enter.
You can now safely delete the directory created for this guide if you no longer need it.
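For example:

```bash
cd .. && rm -rf lokomotive-demo
```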
## Troubleshooting

### Stuck at “copy controller secrets”
```
...
module.equinixmetal-lokomotive-demo.null_resource.copy-controller-secrets: Still creating... (8m30s elapsed)
module.equinixmetal-lokomotive-demo.null_resource.copy-controller-secrets: Still creating... (8m40s elapsed)
...
```
If the deployment process seems to hang at the `copy-controller-secrets` phase for a long time, check the following:

- Verify the correct private SSH key was added to `ssh-agent`.
- Verify that you can SSH into the created controller node from the machine running `lokoctl` (see the example below).
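A quick way to test SSH connectivity; the IP address below is a placeholder for your controller node’s public IP (visible in the Equinix Metal console), and `core` is the default user on Flatcar Container Linux:

```bash
# Replace the IP address with your controller node's public address.
ssh core@203.0.113.10 true && echo "SSH works"
```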
### Equinix Metal provisioning failed
Sometimes the provisioning of servers on Equinix Metal may fail, in which case the following error is shown:
```
Error: provisioning time limit exceeded; the Equinix Metal team will investigate
```
In this case, retrying the deployment by re-running `lokoctl cluster apply -v` may help.
### Insufficient capacity on Equinix Metal
Sometimes there may not be enough hardware available at a given Equinix Metal facility for a given machine type, in which case the following error is shown:
```
The facility ams1 has no provisionable c3.small.x86 servers matching your criteria
```
In this case, either select a different node type and/or Equinix Metal facility, or wait for a while until more capacity becomes available. You can check the current capacity status on the Equinix Metal API.
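For example, a sketch using `curl` and `jq`, assuming the `/capacity` endpoint of the Equinix Metal API and the API token exported earlier:

```bash
# Query current capacity; requires a valid token in PACKET_AUTH_TOKEN.
curl -s -H "X-Auth-Token: ${PACKET_AUTH_TOKEN}" \
  https://api.equinix.com/metal/v1/capacity | jq .
```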
### Permission issues
If the deployment fails due to insufficient permissions on Equinix Metal, verify your Equinix Metal API key has permissions to the right Equinix Metal project.
If the deployment fails due to insufficient permissions on AWS, ensure the IAM user associated with the AWS API credentials has permissions to create records on Route 53.
## Next steps
In this guide you used port forwarding to communicate with a sample application on the cluster. However, in real-world cases you may want to expose your applications to the internet. This guide explains how to use MetalLB and Contour to expose applications on Equinix Metal clusters to the internet.