# Backup Rook Ceph volume on S3 using Velero

## Introduction
Rook is a component of Lokomotive which provides storage on Equinix Metal. Taking regular backups of the data to a remote server is an essential disaster recovery strategy.

Velero is another component of Lokomotive which helps you back up entire namespaces, including the volume data in them.
## Learning objectives

This document will walk you through the process of backing up a namespace, including the volume in it.
## Prerequisites

- A Lokomotive cluster deployed on Equinix Metal and accessible via `kubectl`.
- Rook Ceph installed by following this guide.
- `aws` CLI tool installed.
- S3 bucket created by following these instructions.
- Velero user in AWS created by following these instructions.
- Velero CLI tool downloaded and installed in the `PATH`.
## Steps

### Step 1: Deploy Velero

#### Config

Create a file named `velero.lokocfg` with the following contents:
```hcl
component "velero" {
  provider = "restic"

  restic {
    credentials               = file("./credentials-velero")
    require_volume_annotation = true

    backup_storage_location {
      provider = "aws"
      bucket   = "rook-ceph-backup"
      region   = "us-west-1"
    }
  }
}
```
In the above config, `region` should match the region of the bucket created previously using the `aws` CLI. Replace `./credentials-velero` with the path to the AWS credentials file for the `velero` user.
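If you are unsure which region the bucket was created in, you can query it with the `aws` CLI before filling in the config. This is an optional check, assuming the bucket name `rook-ceph-backup` used above:

```shell
# Query the bucket's region so the `region` field in velero.lokocfg
# matches. Note: for buckets in us-east-1 the API returns a null
# LocationConstraint.
aws s3api get-bucket-location --bucket rook-ceph-backup
```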
#### Deploy

Execute the following command to deploy the `velero` component:

```bash
lokoctl component apply velero
```
Verify the pods in the `velero` namespace are in the `Running` state (this may take a few minutes):

```console
$ kubectl -n velero get pods
NAME                     READY   STATUS    RESTARTS   AGE
restic-c27rq             1/1     Running   0          2m
velero-66d5d67b5-g54x7   1/1     Running   0          2m
```
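Optionally, you can also confirm that Velero can reach the S3 bucket by checking the status of the backup storage location with the Velero CLI:

```shell
# List configured backup storage locations; the PHASE/LAST VALIDATED
# columns indicate whether Velero can access the bucket.
velero backup-location get
```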
### Step 2: Deploy sample workload

If you already have an application you want to back up, then skip this step.

Let us deploy a stateful application and save some demo data in it. Save the following YAML config in a file named `stateful.yaml`:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: demo-ns
spec:
  replicas: 1
  serviceName: "demo-app"
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        runAsGroup: 65534
      containers:
      - image: busybox:1
        name: app
        command: ["/bin/sh"]
        args:
        - -c
        - "echo 'sleeping' && sleep infinity"
        volumeMounts:
        - mountPath: "/data"
          name: data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
```
Execute the following command to deploy the application:

```bash
kubectl apply -f stateful.yaml
```
Verify the application is running fine:

```console
$ kubectl -n demo-ns get pods
NAME         READY   STATUS    RESTARTS   AGE
demo-app-0   1/1     Running   0          16s
```
Execute the following command to generate some dummy data:

```bash
kubectl -n demo-ns exec -it demo-app-0 -- /bin/sh -c \
  'dd if=/dev/zero of=/data/file.txt count=40 bs=1048576'
```
Verify that the data is generated:

```console
$ kubectl -n demo-ns exec -it demo-app-0 -- /bin/sh -c 'du -s /data'
40960	/data
```
### Step 3: Annotate pods

NOTE: To back up all pod volumes without having to individually annotate every pod, set the parameter `require_volume_annotation` to `false` or remove it from the configuration.
Annotate the pods that have volumes attached with their volume names so that Velero backs up the volume data. Replace the pod name and the volume name as needed in the following command:

```bash
kubectl -n demo-ns annotate pod demo-app-0 backup.velero.io/backup-volumes=data
```
NOTE: Modify the pod template in the Deployment `spec` or StatefulSet `spec` to always back up the persistent volumes attached to them. This permanent setting renders this step unnecessary.
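As a sketch of the permanent approach from the note above, the annotation can be added to the pod template of the StatefulSet from Step 2, so every pod it creates carries it automatically. The names below match the demo manifest (`demo-app`, volume `data`):

```yaml
# Excerpt of the StatefulSet spec from stateful.yaml with the backup
# annotation added to the pod template; Velero then backs up the
# "data" volume without per-pod annotation.
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        backup.velero.io/backup-volumes: data
```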
### Step 4: Backup entire namespace

Execute the following command to start the backup of the concerned namespace. In our demo application's case it is `demo-ns`:

```bash
velero backup create backup-demo-app-ns --include-namespaces demo-ns --wait
```

The above operation may take some time, depending on the size of the data.
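You can inspect the result of the backup with the Velero CLI; `--details` also lists the per-volume restic backups:

```shell
# Show status, warnings, errors and per-volume details of the backup
# created above.
velero backup describe backup-demo-app-ns --details
```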
### Step 5: Restore Volumes

#### Same Cluster

If you plan to restore in the same cluster, then delete the namespace first. In the case of our demo application, run the following command:

```bash
kubectl delete ns demo-ns
```
NOTE: If you are restoring a stateful component of Lokomotive like `prometheus-operator`, then delete the component namespace by running `kubectl delete ns monitoring`.
#### Different Cluster

If you plan to restore in another cluster, deploy the `rook`, `rook-ceph` and `velero` components there with the same configuration for a successful restore.
#### Restore

Execute the following command to start the restore:

```bash
velero restore create --from-backup backup-demo-app-ns
```
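Restores created with `--from-backup` get an auto-generated name derived from the backup name plus a timestamp. You can list them and inspect progress with the Velero CLI:

```shell
# List restores and their status (Completed, InProgress, ...).
velero restore get

# Inspect a specific restore in detail; replace <restore-name> with
# the name shown by `velero restore get`.
velero restore describe <restore-name> --details
```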
Verify if Velero restored the application successfully:

```console
$ kubectl -n demo-ns get pods
NAME         READY   STATUS    RESTARTS   AGE
demo-app-0   1/1     Running   0          51s
```
NOTE: If you are restoring a stateful component of Lokomotive like `prometheus-operator`, then once the pods in the `monitoring` namespace are in the `Running` state, run `lokoctl component apply prometheus-operator` to ensure the latest configs are applied.
Verify that the data is restored correctly:

```console
$ kubectl -n demo-ns exec -it demo-app-0 -- /bin/sh -c 'du -s /data'
40960	/data
```
## Additional resources

- Velero Restic docs.
- Lokomotive `velero` component configuration reference document.
- Lokomotive `rook` component configuration reference document.
- Lokomotive `rook-ceph` component configuration reference document.