Tag Archives: backup


How to backup and restore your container workloads using Kasten by Veeam

This blog post covers using Kasten by Veeam to create backup policies for data protection, and how to restore your data. It follows on from the two installation guides linked later in this post.

Deploying a PacMan browser game as a test application

To provide a demo mission-critical application for this blog post, I’ve deployed PacMan into my OpenShift cluster, where it is accessible to play via a web browser. You can find the files in this GitHub repo to deploy into your own environment.


This application uses MongoDB to store the scores from the games to give me persistent data stored on a PVC.


You can see all of the PacMan resources below by running:

kubectl get all -n pacman

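Since the scores are the data we actually care about protecting, it is also worth confirming that MongoDB's storage really is backed by a PersistentVolumeClaim. A quick check, assuming the PVC lives in the same pacman namespace:

kubectl get pvc -n pacman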

Creating a Policy to protect your deployment and data

Log into your Kasten Dashboard.

If you have not yet deployed and configured Kasten, please see these earlier blog posts.

- Installing Kasten for Red Hat OpenShift
- Installing Kasten for VMware Tanzu Kubernetes

On the Kasten dashboard, click the Policy tile (or new policy link within the tile).
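As a side note, policies created through the dashboard are stored as custom resources in the cluster, so once a policy is saved you can confirm it from the CLI as well. A minimal check, assuming K10's usual CRD group:

kubectl get policies.config.kio.kasten.io -n kasten-io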

Continue reading How to backup and restore your container workloads using Kasten by Veeam

Installing and configuring Kasten to protect container workloads on VMware Tanzu Kubernetes Grid

This blog post will take you through the full steps of installing and configuring Kasten, the container-based enterprise backup software now owned by Veeam Software.

This deployment will be for VMware Tanzu Kubernetes Grid which is running on top of VMware vSphere.

You can read how to create backup policies and restore your data in this blog post.

For the data protection demo, I’ll be using my trusty Pac-Man application that has data persistence using MongoDB.

Installing Kasten on Tanzu Kubernetes Grid

In this guide, I am going to use Helm; you can learn how to install it here.

Add the Kasten Helm charts repo.

helm repo add kasten https://charts.kasten.io/
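If the repo was already present from an earlier install, it is worth refreshing the local chart index so Helm picks up the latest K10 release:

helm repo update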

Create a Kubernetes namespace called “kasten-io”

kubectl create namespace kasten-io


Next we are going to use Helm to install the Kasten software into our Tanzu Kubernetes Grid cluster.

helm install k10 kasten/k10 --namespace=kasten-io \
--set externalGateway.create=true \
--set auth.tokenAuth.enabled=true \
--set global.persistence.storageClass=<storage-class-name>

Breaking down the command arguments:

  • --set externalGateway.create=true
    • This creates an external service with ServiceType=LoadBalancer to allow access to the Kasten K10 Dashboard from outside of your cluster.
  • --set auth.tokenAuth.enabled=true
    • This enables token-based authentication for access to the K10 dashboard.
  • --set global.persistence.storageClass=<storage-class-name>
    • This sets the storage class to be used for the PVs/PVCs created by the Kasten install. (In a TKG guest cluster there may not be a default storage class; you can list the available classes as shown below.)
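If you are not sure which value to substitute for <storage-class-name>, you can list the storage classes available in your guest cluster first:

kubectl get storageclass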

You will be presented with output similar to the below.

NAME: k10
LAST DEPLOYED: Fri Feb 26 01:17:55 2021
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten’s K10 Data Management Platform!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

The K10 dashboard is not exposed externally. To establish a connection to it use the following

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`


The K10 Dashboard is accessible via a LoadBalancer. Find the service's EXTERNAL IP using:
`kubectl get svc gateway-ext --namespace kasten-io -o wide`
And use it in following URL
`http://SERVICE_EXTERNAL_IP/k10/#/`
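Because we enabled token authentication, you will also need a bearer token to sign in to the dashboard. A rough way to pull one out, assuming the chart's default k10-k10 service account (on Kubernetes 1.24 and later you may need kubectl create token instead, as service account token secrets are no longer created automatically):

sa_secret=$(kubectl get serviceaccount k10-k10 --namespace kasten-io -o jsonpath="{.secrets[0].name}")
kubectl get secret $sa_secret --namespace kasten-io -o jsonpath="{.data.token}" | base64 --decode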

It will take a few minutes for your pods to be running; you can review progress with the following command:

kubectl get pods -n kasten-io


Next we need to get our LoadBalancer IP address for the External Web Front End, so that we can connect to the Kasten K10 Dashboard.

kubectl get svc -n kasten-io
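If you just want the address on its own (to build the dashboard URL, for example), a jsonpath query against the gateway-ext service does the trick; depending on your load balancer it may populate an IP or a hostname (swap .ip for .hostname if so):

kubectl get svc gateway-ext -n kasten-io -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

The dashboard should then be reachable at http://<EXTERNAL-IP>/k10/#/.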

Continue reading Installing and configuring Kasten to protect container workloads on VMware Tanzu Kubernetes Grid

How to install and configure Kasten to protect container workloads on Red Hat OpenShift and VMware vSphere

In this blog post I’m going to cover deploying and configuring Kasten, the container-based enterprise backup software now owned by Veeam Software.

This deployment will be inside my Red Hat OpenShift Environment which is running on top of VMware vSphere.

I’ll be protecting a cool gaming application that has data persistence using MongoDB.

Installing Kasten on Red Hat OpenShift

In this guide, I am going to use Helm; you can learn how to install it here.

Create an OpenShift project (Kubernetes namespace) called “kasten-io”

oc new-project kasten-io


Next we are going to use Helm to install the Kasten software into our OpenShift cluster.

helm install k10 kasten/k10 --namespace=kasten-io --set scc.create=true --set route.enabled=true --set route.path="/k10" --set auth.tokenAuth.enabled=true

Breaking down the command arguments:

  • --set scc.create=true
    • This creates the correct Security Context Constraints (SCCs) for the users created by the install. This is needed in OpenShift, as the out-of-the-box security context stance is stricter than that of a vanilla Kubernetes install.
  • --set route.enabled=true
    • This creates a route in OpenShift using the default ingress, so that the Kasten dashboard is accessible externally. It will use the default cluster ID domain name (the resulting route can be confirmed as shown below).
  • --set route.path="/k10"
    • This sets the route path for the redirection of the dashboard. Without this, your users will need to go to http://{FQDN}/ and append the path (k10) to the end.
  • --set auth.tokenAuth.enabled=true
    • This enables token-based authentication for access to the K10 dashboard.
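Once the install finishes, you can confirm the route that was created for the dashboard and grab its hostname (the exact route name can vary by install):

oc get routes -n kasten-io

The dashboard should then be available at https://<route-host>/k10/#/.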

Continue reading How to install and configure Kasten to protect container workloads on Red Hat OpenShift and VMware vSphere


VMware Tanzu Mission Control – Using the Data Protection feature for backups and restores

In this blog post we will cover the following topics:

- Data Protection Overview
- Create an AWS Data Protection Credential
- Enable Data Protection on a Cluster
- Running a backup manually or via an automatic schedule
- Restoring your data

The follow-up blog posts are:

- Tanzu Mission Control
  - Getting Started with TMC
    - What is Tanzu Mission Control?
    - Creating a Cluster Group
    - Attaching a cluster to Tanzu Mission Control
    - Viewing your Cluster Objects
    - Where can I demo/test/trial this myself?
  - Cluster Inspections
    - What Inspections are available
    - Performing Inspections
    - Viewing Inspections
  - Workspaces and Policies
    - Creating a workspace
    - Creating a managed Namespace
    - Policy Driven Cluster Management
    - Creating Policies

TMC Data Protection Overview

Tanzu Mission Control implements data protection through the inclusion of Project Velero; this tool is not enabled by default. This blog post will take you through the setup.

Data is stored externally in an AWS location, with volume backups remaining as part of the cluster where you’ve connected TMC.

Currently there is no ability to back up and restore data between Kubernetes clusters managed by TMC.
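Because TMC drives Project Velero under the covers, once data protection has been enabled you can also sanity-check things from the cluster itself. A minimal look, assuming the TMC extension installs Velero into its default velero namespace:

kubectl get pods -n velero
kubectl get backups.velero.io -n velero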

Create an AWS Data Protection Credential

First we need to create an AWS data protection credential, so that TMC can configure Velero within your cluster to save the data externally to AWS.

If you are looking for supported options for protecting data to other locations, I recommend you either look at deploying Project Velero manually outside of TMC (losing access to the data protection features in the UI) or look at another enterprise service such as Kasten.io.

  • On the Administration screen, click Accounts, and Create Account Credential.
  • Select > AWS data protection credential

  • Set your account name for easy identification, click to generate the template, and save the file to your machine.

The next steps will require configuration in the AWS console to create resources using CloudFormation so that Project Velero can export data to AWS. Here is the official VMware documentation on this configuration.

  • In the AWS Console, go to the CloudFormation service

  • Click to create a new stack
  1. Click “Template is ready” as we will provide our template file from earlier.
  2. Click to upload a template file
  3. Select the file from your machine
  4. Click next

  • Provide a stack name and click next

  • Ignore all the items on this page and click next
  • Review your configuration and click finish.

  • Once you’ve reviewed and clicked create/finish, you will be taken into the Stack itself.
  • You can click the Events tab and the refresh button to see the progress.
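If you would rather drive this from the AWS CLI than the console, a stack can be created from the same template along these lines (the stack name and file path are placeholders, and the IAM capability flag is needed because the template creates IAM resources):

aws cloudformation create-stack \
  --stack-name tmc-data-protection \
  --template-body file://tmc-dp-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM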

Continue reading VMware Tanzu Mission Control – Using the Data Protection feature for backups and restores

How to backup vRealize Automation 8.x using Veeam

In this blog post I am going to dissect backing up vRealize Automation 8.x using Veeam Backup and Replication.

- Understanding the backup methods
- Performing an online backup
- Performing an offline backup

Understanding the Backup Methods

Reading the VMware documentation on this subject can be somewhat confusing at times, and if you pay attention there are subtle differences between the documents as well. Let’s break this down.

  • vRealize Automation 8.0
    • As part of the backup job, you need to run a script to stop the services (a sketch of the script commands follows this list).
    • This is known as an offline backup.
    • Depending on your backup software, you can do this either by running a script located on the vRealize Automation appliance or by triggering it with the pre-freeze/post-freeze scripts when a snapshot is taken of the VM.
    • The snapshot must not include the virtual machine’s memory.
    • If your environment is a cluster, you only need to run the script on a single node.
    • All nodes in the cluster must be backed up at the same time.
  • vRealize Automation 8.0.1 and 8.1 (and higher)
    • It is supported to run an online backup.
      • No script is needed to shut down the services.
    • The snapshot taken as part of the backup must quiesce the virtual machine.
    • The snapshot must not include the virtual machine’s memory.
    • It is recommended to run the script to stop all services and perform an offline backup.
      • You may also find your backup runs faster, as the virtual machine will be less busy.
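For reference, the stop-services script mentioned above lives on the appliance itself. As a rough sketch of the offline pre-backup routine (the paths below are how I recall VMware documenting it; verify them against the docs for your exact version):

# stop the vRealize Automation services before the snapshot is taken (verify paths for your version)
/opt/scripts/svc-stop.sh
sleep 120
/opt/scripts/deploy.sh --onlyClean

# once the backup has completed, bring the services back up
/opt/scripts/deploy.sh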

Performing an Online Backup

Let’s start with the easier of the two options. Again, this will be supported for vRealize Automation 8.0.1 and higher. Continue reading How to backup vRealize Automation 8.x using Veeam