As part of my virtual VMUG tour, I submitted a session to the VMUG call for papers covering the subject of Data Protection for Tanzu Kubernetes workloads. (Most of this will apply to any Kubernetes environment.)
This was picked up by Erik at the Belgium VMUG for their UserCon in June 2021. After the event, the session videos remained available on demand for a short time, but there were no plans to upload this one for everyone. So thank you to Michael Cade, who offered to host this session for all on the Cloud Native Data Management YouTube channel.
In the session below I cover the following areas:
What kind of data protection do you need?
The open source data protection project from VMware
Tanzu Mission Control
The Kubernetes fleet management platform from VMware that utilizes Velero.
3rd Party Options
A nod to the 3rd party ecosystem that offers enterprise Data Protection and Management software such as;
There is even a quick technical demo in there, with a little technical hiccup I had to style out!
Tanzu Mission Control implements data protection through the inclusion of Project Velero; this tool is not enabled by default. This blog post will take you through the setup.
Data is stored externally in an AWS location, while volume backups remain part of the cluster you've connected to TMC.
Currently there is no ability to back up and restore data between Kubernetes clusters managed by TMC.
Create an AWS Data Protection Credential
First we need to create an AWS data protection credential, so that TMC can configure Velero within your cluster to save the data externally to AWS.
If you are looking for supported options for protecting data to other locations, I recommend either deploying Project Velero manually outside of TMC (losing access to the data protection features in the UI) or looking at another enterprise service such as Kasten.io.
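For reference, a manual Velero deployment outside of TMC typically looks something like the sketch below. The bucket name, region, and credentials file are placeholders, and the AWS plugin version should be matched to your Velero CLI version:

```shell
# Install Velero into the current cluster with the AWS object storage plugin.
# "my-backup-bucket", "eu-west-1" and ./credentials-velero are placeholders.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.0 \
  --bucket my-backup-bucket \
  --backup-location-config region=eu-west-1 \
  --secret-file ./credentials-velero \
  --use-restic
```

The `--use-restic` flag deploys the restic daemonset for file-level volume backups, matching what TMC enables.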
On the Administration screen, click Accounts, and Create Account Credential.
Select > AWS data protection credential
Set your account name for easy identification, click to generate the template, and save this file to your machine.
The next steps will require configuration in the AWS console to create resources using CloudFormation so that Project Velero can export data to AWS. Here is the official VMware documentation on this configuration.
In the AWS Console, go to the CloudFormation service
Click to create a new stack
Click “Template is ready” as we will provide our template file from earlier.
Click to upload a template file
Select the file from your machine
Provide a stack name and click next
Ignore all the items on this page and click next
Review your configuration and click finish.
Once you've reviewed and clicked create/finish, you will be taken into the stack itself.
You can click the Events tab and the refresh button to see the progress.
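If you prefer, the console steps above can also be scripted with the AWS CLI (a sketch; the stack name and template file path are placeholders, and `CAPABILITY_NAMED_IAM` is required because the template creates IAM resources):

```shell
# Create the CloudFormation stack from the template generated by TMC.
aws cloudformation create-stack \
  --stack-name tmc-data-protection \
  --template-body file://tmc-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM

# Watch the stack events instead of refreshing the console.
aws cloudformation describe-stack-events --stack-name tmc-data-protection
```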
Just a quick blog post on how to get the Data Protection feature of Tanzu Mission Control working on Red Hat OpenShift. By default you will find that once the data protection feature is enabled, the pods for the Restic component of Velero error.
Enable the Data Protection feature on your OpenShift cluster
You will see the UI change to show it’s enabling the feature.
You will see the Velero namespace created in your cluster.
However, the "Data Protection is being enabled" message in the TMC UI will continue to show indefinitely without user intervention. If you list the pods in the Velero namespace, you will see that they error.
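You can check this from the command line (assuming the namespace is called velero, and the default labels on the restic daemonset):

```shell
# List the Velero pods; on OpenShift the restic pods fail to start.
kubectl get pods -n velero

# Inspect a failing restic pod for the security context error.
kubectl describe pod -n velero -l name=restic
```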
This is because OpenShift has a higher security context out of the box for containers than a vanilla Kubernetes environment.
The steps to resolve this are the same as for a native install of the open source Project Velero on your cluster.
First we need to add the velero service account to the privileged SCC.
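Assuming the service account and namespace are both named velero (as created when TMC enables the feature), this is done with:

```shell
# Grant the velero service account in the velero namespace
# access to the privileged SCC so the restic pods can run.
oc adm policy add-scc-to-user privileged -z velero -n velero
```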