Back up a cluster with Velero

Velero is a popular open source backup solution for Kubernetes. Its core implementation is a controller running in the cluster that oversees backup and restore operations. Administrators are given a CLI tool to schedule operations or perform on-demand backups and restores. This CLI tool creates Kubernetes resources that the in-cluster Velero controller acts upon. During installation the controller needs to be configured with a repository (called a ‘provider’) where the backup files are stored.
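
For example, once Velero is installed (as done later in this guide), a backup created through the CLI shows up as a custom resource that can be listed directly; the velero namespace here matches the installation performed below:

sudo microk8s kubectl get backups.velero.io -n velero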

This document describes how to set up Velero with the MinIO provider acting as an S3-compatible object store.

Prerequisites

Enabling required components

DNS and helm are needed for this setup:

sudo microk8s enable dns
sudo microk8s enable helm3
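
To confirm the add-ons are enabled, the status of MicroK8s and its add-ons can be checked with:

sudo microk8s status --wait-ready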

Install MinIO

MinIO provides an S3-compatible interface over storage provisioned by Kubernetes. For the purposes of this guide, the hostpath storage add-on is used to satisfy the persistent volume claims:

sudo microk8s enable hostpath-storage

Helm is used to set up MinIO under the velero namespace:

sudo microk8s kubectl create namespace velero
sudo microk8s helm3 repo add minio https://helm.min.io
sudo microk8s helm3 install -n velero --set buckets[0].name=velero,buckets[0].policy=none,buckets[0].purge=false minio minio/minio
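
Before moving on, it is worth waiting for the MinIO pod and service to come up; a simple check that makes no assumptions about the chart's labels is:

sudo microk8s kubectl get pods,svc -n velero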

Create a demo workload

The workload we will demonstrate the backup with is an NGINX deployment and a corresponding service under the workloads namespace. Create this setup with:

sudo microk8s kubectl create namespace workloads
sudo microk8s kubectl create deployment nginx -n workloads --image nginx
sudo microk8s kubectl expose deployment nginx -n workloads --port 80
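
You can confirm the deployment and service are up with:

sudo microk8s kubectl get deployment,pod,svc -n workloads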

Installing Velero

To install Velero we get the binary from the releases page on GitHub and place it in our PATH. In this case we install the v1.7.1 Linux binary for AMD64 under /usr/local/bin:

wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.1/velero-v1.7.1-linux-amd64.tar.gz 
tar -xzf velero-v1.7.1-linux-amd64.tar.gz
chmod +x velero-v1.7.1-linux-amd64/velero
sudo chown root:root velero-v1.7.1-linux-amd64/velero
sudo mv velero-v1.7.1-linux-amd64/velero /usr/local/bin/velero
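
A quick sanity check that the binary is on the PATH (the --client-only flag avoids contacting a cluster):

velero version --client-only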

Before installing Velero, we export the kubeconfig file from MicroK8s.

mkdir -p $HOME/.kube
sudo microk8s config > $HOME/.kube/config

We also export the MinIO credentials so we can feed them to Velero.

ACCESS_KEY=$(sudo microk8s kubectl -n velero get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode)
SECRET_KEY=$(sudo microk8s kubectl -n velero get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)
cat <<EOF > credentials-velero
[default]
    aws_access_key_id=${ACCESS_KEY}
    aws_secret_access_key=${SECRET_KEY}
EOF

We are now ready to install Velero:

velero install \
--use-restic \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 \
--snapshot-location-config region=minio
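
Once the install command returns, we can verify that the Velero pods are running and that the backup storage location is reachable, for example with:

sudo microk8s kubectl get pods -n velero
velero backup-location get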

Velero uses Restic for backing up Kubernetes volumes. To let Restic know of the kubelet directory in the MicroK8s context we need to patch its daemonset manifest:

sudo microk8s kubectl -n velero patch daemonset.apps/restic --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value":"/var/snap/microk8s/common/var/lib/kubelet/pods"}]'
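
The patch triggers a rollout of the restic pods; waiting for it to complete can be done with:

sudo microk8s kubectl -n velero rollout status daemonset/restic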

Backup workloads

To backup the workloads namespace we use the --include-namespaces argument:

 velero backup create workloads-backup --include-namespaces=workloads

Note: Please consult the official Velero documentation on how to back up persistent volumes, the supported volume types, and the limitations of hostpath.
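
To list all backups and their status at a glance, we can also use:

velero backup get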

To check the progress of a backup operation we use describe, providing the backup name:

 velero backup describe workloads-backup 

In the output you should see this operation completed:

Name:         workloads-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.23.3-2+3cea96839f0d64
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=23+

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  workloads
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2022-02-08 10:44:08 +0200 EET
Completed:  2022-02-08 10:44:10 +0200 EET

Expiration:  2022-03-10 10:44:08 +0200 EET

Total items to be backed up:  17
Items backed up:              17

Velero-Native Snapshots: <none included>

Restore workloads

Before restoring the workloads namespace, let’s delete it first:

sudo microk8s kubectl delete namespace workloads

We can now create a restore operation specifying the backup we want to use:

velero restore create --from-backup workloads-backup
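
The restore name is generated from the backup name plus a timestamp; listing the restore operations shows it:

velero restore get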

We can then monitor the restore using the describe command, providing the generated name:

velero restore describe workloads-backup-20220208105156

The describe output should eventually report a “Completed” phase:

Name:         workloads-backup-20220208105156
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:                       Completed
Total items to be restored:  10
Items restored:              10

Started:    2022-02-08 10:51:56 +0200 EET
Completed:  2022-02-08 10:51:57 +0200 EET

Backup:  workloads-backup

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

Preserve Service NodePorts:  auto

Listing the resources of the workloads namespace confirms that the restore was successful:

sudo microk8s kubectl get all -n workloads

Summing up

Although Velero is a powerful tool with a large set of configuration options, it is also very easy to use. You only need to define a backup strategy: the backend that will hold the backups and the schedule on which they are taken. The rest is taken care of by the tool itself.
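
As a sketch of what scheduling could look like, the CLI can create a periodic backup from a cron expression; the schedule name and cron spec below are only examples:

velero schedule create workloads-daily --schedule="0 2 * * *" --include-namespaces=workloads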

I think one important option is missing in the velero install arguments: "--default-volumes-to-restic". This enables the use of Restic for backing up all pod volumes by default.
More info: Velero Docs - Use Tencent Cloud Object Storage as Velero's storage destination.
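
For reference, a sketch of the install command from this guide with that flag added (the flag is available in the Velero version used here; note that newer Velero releases rename the Restic-related flags):

velero install \
--use-restic \
--default-volumes-to-restic \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 \
--snapshot-location-config region=minio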

@JellevdK thanks for that, I’ll take a look at how we can add that to this page

When MicroK8s is installed from the 1.24+ channel using snap, the following velero install command

/usr/local/bin/velero install   \
--secret-file=./credentials-velero \
--provider=aws \
--bucket=velero \
--backup-location-config region=minio-default,s3ForcePathStyle=true,s3Url=http://192.168.2.210:9000 region=minio-default \
--plugins=velero/velero-plugin-for-aws:v1.4.0 \
--use-volume-snapshots=true \
--use-restic=true \
--snapshot-location-config region=minio-default \
--wait

returns
An error occurred: unable to load root certificates: unable to parse bytes as PEM block

The error "An error occurred: unable to load root certificates: unable to parse bytes as PEM block" happens only when we are using microk8s. however this error isn’t seen when using google kubernetes or k3s.

This happens because the default config file is a stripped-down version of the actual kubeconfig file, with the contents shown below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED

The certificate-authority-data: DATA+OMITTED and token: REDACTED placeholders don't let Velero work.

However, this approach works:

Step 1 - microk8s config > microk8s.config
Step 2 - export KUBECONFIG=./microk8s.config
Step 3 -

velero install \
--secret-file=./credentials-velero \
--provider=aws \
--bucket=velero \
--backup-location-config region=minio-default,s3ForcePathStyle=true,s3Url=http://192.168.2.210:9000 region=minio-default \
--plugins=velero/velero-plugin-for-aws:v1.4.0 \
--use-volume-snapshots=true \
--use-restic=true \
--snapshot-location-config region=minio-default \
--wait


CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: attempting to create resource client
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource client
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: attempting to create resource client
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: attempting to create resource client
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: attempting to create resource client
Namespace/velero: already exists, proceeding
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: attempting to create resource client
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: attempting to create resource client
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: attempting to create resource client
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
VolumeSnapshotLocation/default: attempting to create resource
VolumeSnapshotLocation/default: attempting to create resource client
VolumeSnapshotLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
DaemonSet/restic: attempting to create resource
DaemonSet/restic: attempting to create resource client
DaemonSet/restic: created
Waiting for Velero deployment to be ready.
Waiting for Velero restic daemonset to be ready.
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.

For Velero v1.11 it seems that the velero install command should use

velero install --use-node-agent

instead of the currently documented

velero install \
--use-restic \

and therefore the following step becomes


sudo microk8s kubectl -n velero patch daemonset.apps/node-agent --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value":"/var/snap/microk8s/common/var/lib/kubelet/pods"}]'
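
As with the restic daemonset earlier, the rollout of the patched node-agent daemonset can be checked with:

sudo microk8s kubectl -n velero rollout status daemonset/node-agent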

thanks! I will take a look and update

Looks like the minio repo has changed to https://charts.min.io/.
Source: minio/helm/minio at master · minio/minio · GitHub
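
If following this guide against the new repository, the repo add step would look roughly like this; note that the chart hosted at charts.min.io is a newer chart generation, so the values used above (for example the buckets settings and the names of the generated secret keys) may differ:

sudo microk8s helm3 repo add minio https://charts.min.io/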