Tag Archives: TKG

Tanzu Mission Control – Deploying TKG Clusters to AWS

This blog post will cover a technical walk-through on using Tanzu Mission Control to deploy Tanzu Kubernetes clusters to AWS.

The follow-up blog posts in this series are:

Tanzu Mission Control
  • Getting Started with TMC
    • What is Tanzu Mission Control?
    • Creating a Cluster Group
    • Attaching a cluster to Tanzu Mission Control
    • Viewing your Cluster Objects
    • Where can I demo/test/trial this myself?
  • Cluster Inspections
    • What Inspections are available
    • Performing Inspections
    • Viewing Inspections
  • Workspaces and Policies
    • Creating a workspace
    • Creating a managed Namespace
    • Policy Driven Cluster Management
    • Creating Policies
  • Using the Data Protection feature for backups and restores
    • Data Protection Overview
    • Create an AWS Data Protection Credential
    • Enable Data Protection on a Cluster
    • Running a backup manually or via an automatic schedule
    • Restoring your data

Using the AWS Hosted Management Cluster

In this example, we will use the AWS hosted management cluster provided by default.

Alternatively, you can use the Tanzu CLI to provision a TKG Management cluster into AWS and attach this to Tanzu Mission Control.
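
As a rough sketch of that alternative, assuming you have already prepared an AWS cluster configuration file (the file name here is illustrative):

tanzu management-cluster create --file ~/.tanzu/tkg/clusterconfigs/aws-mgmt-config.yaml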

Currently, a management cluster cannot manage workload clusters on a different platform.

  • For example, a management cluster in AWS cannot manage workload clusters in Azure.

To get started:

  1. Go to Administration
  2. Click the Management Clusters Tab
  3. Click on the “aws-hosted” cluster object name

[Screenshot: TMC - Administration - Management Clusters]

Create a provisioner

The default tab when selecting the “aws-hosted” management cluster object is the provisioner tab.

  • Click the “Create Provisioner” button

[Screenshot: TMC - aws-hosted - provisioners - create provisioner]

  • Provide a name for the provisioner
  • Click confirm

[Screenshot: TMC - aws-hosted - provisioners - create provisioner - provide name]

You will be taken back to the provisioners list, where your newly created provisioner object appears. Selecting the object via its radio button allows you to delete it; no other action is available.

[Screenshot: TMC - aws-hosted - provisioners - provisioner created]

Create the AWS account
  1. Click on the Accounts tab
  2. Click the “Create Account Credential” button

[Screenshot: TMC - aws-hosted - accounts - create account credential]

Deploying Tanzu Kubernetes Grid Workload Cluster to Microsoft Azure

Following on from my previous blog post, we will now continue and deploy our first workload (guest) cluster into Azure, for our developers to deploy their applications into.

For this technical walkthrough, I am assuming you have followed the previous blog post and have the Tanzu CLI and Kubectl CLI installed, and a working management cluster.

As a reminder of the terminology:

  • Tanzu Kubernetes Workload Clusters

Once you have deployed your management cluster, you can deploy additional CNCF conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads, managed via your management cluster. These clusters can run different Kubernetes versions as required. These clusters use Antrea networking by default.

These types of clusters are also referred to as “workload” clusters, or “guest” clusters, with the latter typically referring to the Tanzu Kubernetes Grid Service running in vSphere.

Deploying a Guest Cluster

Log in to your Tanzu environment management cluster with the following:

tanzu login

[Screenshot: Deploy Management cluster to Azure - Tanzu Login]
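
Once logged in, you can confirm which management cluster the CLI is targeting; the command below is part of the Tanzu CLI 1.3 command set:

tanzu management-cluster get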

First, we need to create a cluster configuration YAML file. You can find a template for Azure here, or view the full list of available variables here.

Alternatively, we can use the existing YAML file in our ~/.tanzu/tkg/clusterconfigs folder used for the management cluster deployment and change a few settings to make it ready for our workload guest cluster.

This was my preferred method as it contained all my Azure settings already.

# Find the existing cluster config file

ls -lh ~/.tanzu/tkg/clusterconfigs/

# Copy the file to a new config

cp ~/.tanzu/tkg/clusterconfigs/6x4hl1wy8o.yaml tanzu-veducate-guest-azure.yaml

# Edit the file and set CLUSTER_NAME
# Workload cluster names must be 42 characters or less.
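
As a sketch of the required edits, only a handful of values need changing in the copied file; the values below are illustrative, and all the existing Azure variables can be left as they are:

CLUSTER_NAME: tanzu-veducate-guest-azure
CLUSTER_PLAN: dev
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 2

Once edited, the workload cluster can be created from the file, and its kubeconfig retrieved when ready:

tanzu cluster create --file tanzu-veducate-guest-azure.yaml
tanzu cluster kubeconfig get tanzu-veducate-guest-azure --admin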

[Screenshot: Deploy Tanzu Kubernetes Guest cluster to Azure - create cluster configuration YAML file]

Deploying Tanzu Kubernetes Grid Management Cluster to Microsoft Azure

In this blog post, we will detail a full technical run-through on how to deploy Tanzu Kubernetes Grid (TKG) into Microsoft Azure.

We will be using the new Tanzu CLI (version 1.3, previously the TKG CLI), released in March 2021, to deploy both a new management cluster and a guest cluster.

Tanzu Kubernetes Grid Cluster Types

TKG has two types of clusters; for full information on TKG concepts, please read this post.

  • Management Cluster

This is the first architectural component to be deployed when creating a TKG instance. The management cluster is a dedicated cluster for the management and operation of your whole TKG instance infrastructure. A management cluster has Antrea networking enabled by default. It runs Cluster API to create the additional clusters for your workloads to run in, as well as the shared and in-cluster services for all clusters within the instance to use.

It is not recommended that the management cluster be used as a general-purpose compute environment for your application workloads.

  • Tanzu Kubernetes (Guest) Clusters

Once you have deployed your management cluster, you can deploy additional CNCF conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads, managed via your management cluster. These clusters can run different Kubernetes versions as required. These clusters use Antrea networking by default.

These clusters are referred to as Workload Clusters when working with the Tanzu CLI.

I sometimes use the term “Guest” for these clusters, as a cross-over with the vSphere with Tanzu architecture, which has similar concepts as above however uses the terms “Supervisor Cluster” and “Guest Cluster”.

Pre-Requisites

For this blog post, I’ll be deploying everything from my local macOS machine. You will need the following:

  • Docker installed with Kubernetes enabled
    • For Windows and macOS Docker clients, you must allocate at least 6 GB of memory in Docker Desktop to accommodate the kind container. See Settings for Docker Desktop in the kind documentation.
  • Install the Tanzu CLI and the Kubectl tool > Instructions here.
    • If you have used the TKG CLI before, note that it is now deprecated.
    • You can find a full command line reference for the Tanzu CLI, and a comparison with the TKG CLI commands, in this documentation link.
  • Install the Azure CLI.
  • Register a Tanzu Kubernetes Grid App on Azure
    • The full details in the VMware docs for deploying TKG to Azure can be found here.

Log in to the Azure CLI and accept the VM EULA

Before we get started, we need to log into the Azure CLI and accept the EULA for the images used for TKG in Azure. These images are updated with each release of the Tanzu CLI (TKG CLI).

az login

az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}
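
If you are unsure of the subscription ID to pass in, the Azure CLI can return it for the currently active subscription:

az account show --query id --output tsv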
Deploying a Management Cluster using the UI

From your terminal, run the following command:

tanzu management-cluster create --ui

[Screenshot: tanzu management-cluster create --ui]
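
If you prefer a non-interactive deployment, the same command also accepts a configuration file instead of launching the UI (the file path here is illustrative):

tanzu management-cluster create --file ~/.tanzu/tkg/clusterconfigs/azure-mgmt-config.yaml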

Installing and configuring Kasten to protect container workloads on VMware Tanzu Kubernetes Grid

This blog post will take you through the full steps of installing and configuring Kasten, the container-based enterprise backup software now owned by Veeam Software.

This deployment will be for VMware Tanzu Kubernetes Grid which is running on top of VMware vSphere.

You can read how to create backup policies and restore your data in this blog post.

For the data protection demo, I’ll be using my trusty Pac-Man application that has data persistence using MongoDB.

Installing Kasten on Tanzu Kubernetes Grid

In this guide, I am going to use Helm; you can learn how to install it here.

Add the Kasten Helm charts repo.

helm repo add kasten https://charts.kasten.io/
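
It is worth refreshing your local chart cache afterwards so the latest K10 chart version is picked up:

helm repo update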

Create a Kubernetes namespace called “kasten-io”

kubectl create namespace kasten-io


Next we are going to use Helm to install the Kasten software into our Tanzu Kubernetes Grid cluster.

helm install k10 kasten/k10 --namespace=kasten-io \
--set externalGateway.create=true \
--set auth.tokenAuth.enabled=true \
--set global.persistence.storageClass=<storage-class-name>

Breaking down the command arguments:

  • --set externalGateway.create=true
    • This creates an external service with ServiceType=LoadBalancer to allow access to the Kasten K10 dashboard from outside of your cluster.
  • --set auth.tokenAuth.enabled=true
    • This enables token-based authentication for logging in to the dashboard.
  • --set global.persistence.storageClass=<storage-class-name>
    • This sets the storage class to be used for the PVs/PVCs created for the Kasten install. (In a TKG guest cluster there may not be a default storage class; see the command after this list to find the available names.)
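
To find a value for <storage-class-name>, list the storage classes available in your cluster:

kubectl get storageclass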

You will be presented with an output similar to the below.

NAME: k10
LAST DEPLOYED: Fri Feb 26 01:17:55 2021
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten’s K10 Data Management Platform!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

The K10 dashboard is not exposed externally. To establish a connection to it use the following

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`


The K10 Dashboard is accessible via a LoadBalancer. Find the service's EXTERNAL IP using:
`kubectl get svc gateway-ext --namespace kasten-io -o wide`
And use it in following URL
`http://SERVICE_EXTERNAL_IP/k10/#/`

It will take a few minutes for your pods to be running; you can review their status with the following command:

kubectl get pods -n kasten-io


Next, we need to get the LoadBalancer IP address for the external web front end, so that we can connect to the Kasten K10 dashboard.

kubectl get svc -n kasten-io
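
With token authentication enabled, logging in to the dashboard also requires a bearer token. A minimal sketch of retrieving one, assuming the chart's default “k10-k10” service account and a Kubernetes version of this era, where the token is stored in the service account's secret:

# Find the secret attached to the k10-k10 service account
sa_secret=$(kubectl get serviceaccount k10-k10 --namespace kasten-io -o jsonpath="{.secrets[0].name}")

# Decode the bearer token to paste into the dashboard login page
kubectl get secret $sa_secret --namespace kasten-io -o jsonpath="{.data.token}" | base64 --decode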
