This blog post covers using Kasten by Veeam to create backup policies for data protection, and how to restore your data. It follows on from the two installation guides.
Deploying a PacMan browser game as a test application
To provide a demo mission-critical application for this blog post, I’ve deployed PacMan into my OpenShift cluster, which can be played via a web browser. You can find the files to deploy it into your own environment in this GitHub repo.
This application uses MongoDB to store the game scores, giving me persistent data stored on a PVC.
You can see all of the PacMan resources below by running:
kubectl get all -n pacman
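Note that kubectl get all does not list PersistentVolumeClaims, so to confirm the MongoDB data volume is bound you can also run the following (assuming the PVC lives in the same pacman namespace):
kubectl get pvc -n pacman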
Creating a Policy to protect your deployment and data
This blog post will take you through the full steps of installing and configuring Kasten, the container-based enterprise backup software now owned by Veeam Software.
This deployment will be for VMware Tanzu Kubernetes Grid, which is running on top of VMware vSphere.
You can read how to create backup policies and restore your data in this blog post.
For the data protection demo, I’ll be using my trusty Pac-Man application that has data persistence using MongoDB.
This sets the storage class to be used for the PVs/PVCs created by the Kasten install. (In a TKG guest cluster there may not be a default storage class.)
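For reference, the storage class is passed to the Helm chart at install time; a minimal install along these lines should work, matching the release name and namespace shown in the output below (the global.persistence.storageClass value and the vsphere-storage-policy class name are assumptions for this sketch, so substitute your own storage class):
kubectl get storageclass
helm repo add kasten https://charts.kasten.io/
helm install k10 kasten/k10 --namespace kasten-io --create-namespace --set global.persistence.storageClass=vsphere-storage-policy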
You will be presented with output similar to the below.
NAME: k10
LAST DEPLOYED: Fri Feb 26 01:17:55 2021
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten’s K10 Data Management Platform!
Documentation can be found at https://docs.kasten.io/.
How to access the K10 Dashboard:
The K10 dashboard is not exposed externally. To establish a connection to it use the following
`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`
The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
The K10 Dashboard is accessible via a LoadBalancer. Find the service's EXTERNAL IP using:
`kubectl get svc gateway-ext --namespace kasten-io -o wide`
And use it in following URL
`http://SERVICE_EXTERNAL_IP/k10/#/`
It will take a few minutes for your pods to be running; you can check their status with the following command:
kubectl get pods -n kasten-io
Next, we need to get the LoadBalancer IP address for the external web front end, so that we can connect to the Kasten K10 Dashboard.
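Assuming the external gateway was enabled at install time, the EXTERNAL-IP column of the gateway-ext service gives us that address (this is the same command the chart notes reference above):
kubectl get svc gateway-ext --namespace kasten-io -o wide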
This creates the correct Security Context Constraints for the users created by the install. This is needed in OpenShift, as the out-of-the-box security stance is stricter than that of a vanilla Kubernetes install.
--set route.enabled=true
This creates a route in OpenShift using the default ingress, so that the Kasten dashboard is accessible externally. This will use the default cluster ID domain name.
--set route.path="/k10"
This sets the route path for the redirection of the dashboard. Without this, your users will need to go to http://{FQDN}/ and append the path to the end (k10).
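Pulling these OpenShift-specific settings together, a Helm upgrade along the following lines would apply them to the existing release; note that the scc.create flag and the use of --reuse-values are assumptions in this sketch rather than values shown earlier:
helm upgrade k10 kasten/k10 --namespace kasten-io --reuse-values \
  --set scc.create=true \
  --set route.enabled=true \
  --set route.path="/k10"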
If I now look at the pods in my "kasten-io" namespace, I can see they are being recreated, as the deployment artifacts have been updated with the new configuration, including container images.
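To watch the rollout while the pods cycle, the -w flag streams changes as they happen rather than printing a one-off list:
kubectl get pods -n kasten-io -w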
And finally, looking back at my Kasten Dashboard for the cluster information, I can see I am now running the latest version.