Category Archives: VMware


vRealize Operations integration with Tanzu Mission Control for auto cluster discovery

A while ago I wrote about the vRealize Operations Kubernetes Management Pack, which works with all CNCF-conformant Kubernetes platforms.

One of the best features of this management pack is the Tanzu Mission Control (TMC) integration it offers with vRealize Operations (vROps).

This means that when you use TMC to provision Tanzu Kubernetes Grid (TKG) clusters, currently on AWS or on vSphere, they will be automatically registered within vROps as well.

Install the Management Pack
  1. Download the management pack pak file.
  2. Within vROps, go to Administration.
  3. Click on Repository
  4. Scroll to the bottom of the page, and select “Add/Upgrade”
  5. Select the pak file for installation and follow the wizard.
Create a CSP API Token

For the vROps management pack adapter to communicate with TMC, we need a CSP API token.

  1. Log into https://console.cloud.vmware.com
  2. Change to the correct organisation that contains your TMC instance.
  3. Click your name in the top right-hand corner and select “My Account”.
  4. Select the “API Tokens” tab, and then the “Generate a new API Token” button.
  5. Set your API Token name, expiry, and access control as required. Then click the generate button.
  6. You will be shown a dialog box with your generated token. Save this in a safe place; we will use it later on. (A quick way to verify the token is shown below.)
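
Before plugging the token into vROps, you can sanity-check it by exchanging it for an access token against the CSP authorization endpoint. This is just an optional verification step; replace {api_token} with the token you generated above.

curl -s -X POST https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "refresh_token={api_token}"

If the token and organisation are valid, the JSON response contains an access_token.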
Connect vRealize Operations management pack adapter to Tanzu Mission Control
  1. In the vROps UI, go to Administration > under Solutions, choose “Other Accounts” and click the “Add Account” button.
  2. From the account type list, choose Tanzu Mission Control.
  3. Fill out the necessary details on the New Account screen.
    1. For the credential, click the + symbol, add in a name for the credential and the CSP token you created earlier.
    2. Select your newly created credential.
  4. Select the Validate button.
  5. Hopefully you get a successful test connection message.
  6. You will see the account object in the Other Accounts view.
Auto-Discovering Tanzu Kubernetes Grid Clusters

Now that you have your account added, whenever you provision a new cluster using Tanzu Mission Control, cAdvisor will be configured in the Kubernetes cluster and a Kubernetes account type will be created in vROps automatically for you.
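
If you want to confirm the monitoring components were configured in a newly provisioned cluster, a quick check (my own addition, not an official step) is to look for the cAdvisor DaemonSet from the cluster's context; the exact namespace depends on the management pack version.

kubectl get daemonsets --all-namespaces | grep -i cadvisor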

Below I’ve created a cluster in AWS, and we can see the object has been created in vROps.

vROps TMC Integration - Provisioned cluster auto discovered

And finally, here is my cluster showing in one of the Kubernetes dashboards.

vROps TMC Integration - Kubernetes Dashboard

This is a simple feature to implement, but it can make a massive difference to your ability to monitor your TKG clusters from the infrastructure view that vROps provides. As your users create clusters via TMC, they don’t need to interact with the monitoring platform to ensure visibility.

Regards

 


Deploying Tanzu Kubernetes Grid Management Cluster to Microsoft Azure

In this blog post, we will detail a full technical run-through of how to deploy Tanzu Kubernetes Grid (TKG) into Microsoft Azure.

We will be using the new Tanzu CLI (version 1.3, previously the TKG CLI), released in March 2021, to deploy both a new Management Cluster and a Guest Cluster.

Tanzu Kubernetes Grid Cluster Types

TKG has two types of clusters; for full information on TKG concepts, please read this post.

  • Management Cluster

This is the first architectural component to be deployed when creating a TKG instance. The management cluster is a dedicated cluster for management and operation of your whole TKG instance infrastructure, and it has Antrea networking enabled by default. It runs Cluster API to create the additional clusters that your workloads run on, as well as the shared and in-cluster services for all clusters within the instance to use.

It is not recommended that the management cluster be used as a general-purpose compute environment for your application workloads.

  • Tanzu Kubernetes (Guest) Clusters

Once you have deployed your management cluster, you can deploy additional CNCF conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads, managed via your management cluster. These clusters can run different Kubernetes versions as required. These clusters use Antrea networking by default.

These clusters are referred to as Workload Clusters when working with the Tanzu CLI.

I sometimes use the term “Guest” for these clusters, as a cross-over with the vSphere with Tanzu architecture, which has similar concepts to the above but uses the terms “Supervisor Cluster” and “Guest Cluster”.
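
Once your management cluster is up (deployed below), workload cluster lifecycle is driven from the Tanzu CLI. As a minimal sketch, assuming a cluster name of my-workload-cluster and the default dev plan:

tanzu cluster create my-workload-cluster --plan dev

# Retrieve the kubeconfig and switch to the new cluster's context
tanzu cluster kubeconfig get my-workload-cluster --admin
kubectl config use-context my-workload-cluster-admin@my-workload-cluster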

Pre-Requisites

For this blog post, I’ll be deploying everything from my local Mac OS X machine. You will need the following:

  • Docker installed with Kubernetes enabled
    • For Windows and macOS Docker clients, you must allocate at least 6 GB of memory in Docker Desktop to accommodate the kind container. See Settings for Docker Desktop in the kind documentation.
  • Install the Tanzu CLI and the kubectl tool > Instructions here.
    • If you have used the TKG CLI before, note that it is now deprecated.
    • You can find a full command line reference for the Tanzu CLI, and a comparison with the TKG CLI commands, in this documentation link.
  • Install the Azure CLI.
  • Register a Tanzu Kubernetes Grid App on Azure (a quick sketch of this follows the list).
    • The full details in the VMware docs for deploying TKG to Azure can be found here.
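
For the app registration itself, the VMware docs linked above walk through the Azure Portal; as a rough equivalent sketch using the Azure CLI (the app name tkg-app is just an example, and your role/scope requirements may differ):

# Create a service principal for TKG with Contributor rights on the subscription
az ad sp create-for-rbac --name tkg-app --role Contributor --scopes "/subscriptions/{subscription_id}"

# Note the appId (client ID), password (client secret) and tenant from the output;
# the management cluster wizard will ask for these along with your subscription ID:
az account show --query id -o tsv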
Log in to the Azure CLI and accept the VM EULA

Before we get started, we need to log into the Azure CLI and accept the EULA for the images used for TKG in Azure. These images are updated with each release of the Tanzu CLI (TKG CLI).

az login

az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}
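If you want to double-check that the terms were accepted (or verify before re-running), the same plan can be queried; look for "accepted": true in the output.

az vm image terms show --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}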
Deploying a Management Cluster using the UI

From your terminal, run the following command:

tanzu management-cluster create --ui

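This launches the installer interface in a browser, where you supply the Azure credentials, region and cluster settings. The same deployment can also be driven from a configuration file instead of the UI; below is a minimal sketch only, with example values, and the full list of Azure variables is in the VMware documentation linked earlier.

# azure-mgmt-config.yaml (example values only)
INFRASTRUCTURE_PROVIDER: azure
CLUSTER_NAME: tkg-mgmt-azure
CLUSTER_PLAN: dev
AZURE_LOCATION: uksouth
AZURE_SUBSCRIPTION_ID: {subscription_id}
AZURE_TENANT_ID: {tenant_id}
AZURE_CLIENT_ID: {client_id}
AZURE_CLIENT_SECRET: {client_secret}
AZURE_SSH_PUBLIC_KEY_B64: {base64_encoded_ssh_public_key}

tanzu management-cluster create --file azure-mgmt-config.yaml -v 6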


VMUG Recording – Protecting your Tanzu Kubernetes Workload with Kasten by Veeam

Below is the recording from my London VMUG session with Michael Cade.

  • Title: Protecting your Tanzu Kubernetes Workload with Kasten by Veeam
  • Recorded: 4th February 2021
  • Abstract:
    • This technical, demo-led session will take you through how to deploy Kasten in your Tanzu Kubernetes environment to protect your container workloads.

Supporting blog posts:

Regards

 


VCF – SDDC Manager – How to delete bundles

Just a quick blog post on deleting unnecessary or unneeded bundles from VCF – SDDC Manager.

There are two parts to this:

  • Getting your Bundle ID you want to delete from the API
  • Deleting the Bundle using a script on the SDDC Manager appliance.

In your SDDC Manager:

  1. Click Development Center
  2. Click API Explorer
  3. Expand “APIs for managing bundles”
  4. Expand the first “GET” command

VCF SDDC Manager - API - Get Bundles

  5. Click Execute; no need to fill anything in.

VCF SDDC Manager - API - Get Bundles - Execute

  6. Download or copy the response output.

VCF SDDC Manager - API - Get Bundles - Response

  7. Find your Bundle ID within the output; you need to look for the top-level ID of the JSON block, and ensure that the bundle shows as successfully downloaded. (If you prefer, you can query the same API from the command line; see the sketch below.)

VCF SDDC Manager API Get Bundles Response JSON Find Bundle ID
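
If you prefer the command line over the API Explorer, the same bundles API can be called directly against SDDC Manager. A rough sketch, assuming jq is available and using example credentials/FQDN; you first obtain an access token from the /v1/tokens endpoint:

# Request an API access token (replace the FQDN and credentials with your own)
TOKEN=$(curl -sk -X POST https://sddc-manager.fqdn/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username":"administrator@vsphere.local","password":"{password}"}' | jq -r '.accessToken')

# List each bundle ID alongside its download status
curl -sk -H "Authorization: Bearer $TOKEN" https://sddc-manager.fqdn/v1/bundles \
  | jq -r '.elements[] | "\(.id)  \(.downloadStatus)"'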

SSH to your SDDC Manager and elevate to root.

# su
{provide password to elevate to root}
# /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py {Bundle_id}

Example:
# /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py f004390e-26be-4690-9d7b-d447860e3169

VCF SDDC Manager bundle cleanup.py script

You will see the following output when the script has run.

-----------------------------------------------------
LOG FILE : /var/log/vmware/vcf/lcm/bundle_cleanup.log
-----------------------------------------------------
2021-03-08 12:18:31,809 [INFO] root: Performing cleanup for bundle with IDs : ['f004390e-26be-4690-9d7b-d447860e3169']
2021-03-08 12:18:31,809 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select count(*) from upgrade where upgrade_status in ('INPROGRESS','CANCELLING');"
2021-03-08 12:18:31,848 [INFO] root: b' 0\n'
2021-03-08 12:18:31,848 [INFO] root: b'\n'
2021-03-08 12:18:31,848 [INFO] root: RC: 0
2021-03-08 12:18:31,849 [INFO] root: Out: 0

2021-03-08 12:18:31,849 [INFO] root: Stopping LCM service.
2021-03-08 12:18:31,849 [INFO] root: Execute cmd: systemctl stop lcm
2021-03-08 12:18:32,290 [INFO] root: RC: 0
2021-03-08 12:18:32,290 [INFO] root: Out:
2021-03-08 12:18:32,291 [INFO] root: Removing LCM NFS mount.
2021-03-08 12:18:32,291 [INFO] root: Execute cmd: rm -rf /nfs/vmware/vcf/nfs-mount/bundle/f004390e-26be-4690-9d7b-d447860e3169
2021-03-08 12:18:32,683 [INFO] root: RC: 0
2021-03-08 12:18:32,684 [INFO] root: Out:
2021-03-08 12:18:32,684 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select upload_id from bundle_upload where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,704 [INFO] root: b'\n'
2021-03-08 12:18:32,705 [INFO] root: RC: 0
2021-03-08 12:18:32,705 [INFO] root: Out:

2021-03-08 12:18:32,705 [INFO] root: Bundle with ID : f004390e-26be-4690-9d7b-d447860e3169 not found in bundle upload table
2021-03-08 12:18:32,706 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select download_id from bundledownload_by_id where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,724 [INFO] root: b' 0fb2e30e-d991-4b63-8686-42fab98a1c9e\n'
2021-03-08 12:18:32,724 [INFO] root: b'\n'
2021-03-08 12:18:32,725 [INFO] root: RC: 0
2021-03-08 12:18:32,725 [INFO] root: Out: 0fb2e30e-d991-4b63-8686-42fab98a1c9e

2021-03-08 12:18:32,725 [INFO] root: Execute cmd: curl -s -X DELETE localhost/tasks/registrations/0fb2e30e-d991-4b63-8686-42fab98a1c9e
2021-03-08 12:18:32,830 [INFO] root: RC: 0
2021-03-08 12:18:32,830 [INFO] root: Out:
2021-03-08 12:18:32,830 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select upgrade_id from upgrade where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,852 [INFO] root: b'\n'
2021-03-08 12:18:32,853 [INFO] root: RC: 0
2021-03-08 12:18:32,853 [INFO] root: Out:

2021-03-08 12:18:32,853 [INFO] root: Bundle with ID : f004390e-26be-4690-9d7b-d447860e3169 not found in upgrade table
2021-03-08 12:18:32,854 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select upgrade_id from upgrade where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,873 [INFO] root: b'\n'
2021-03-08 12:18:32,874 [INFO] root: RC: 0
2021-03-08 12:18:32,874 [INFO] root: Out:

2021-03-08 12:18:32,874 [INFO] root: Bundle with ID : f004390e-26be-4690-9d7b-d447860e3169 not found in upgrade table
2021-03-08 12:18:32,875 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select count(*) from bundle where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,894 [INFO] root: b' 1\n'
2021-03-08 12:18:32,895 [INFO] root: b'\n'
2021-03-08 12:18:32,895 [INFO] root: RC: 0
2021-03-08 12:18:32,895 [INFO] root: Out: 1

2021-03-08 12:18:32,896 [INFO] root: Deleting bundle & upgrade info for bundle ID : f004390e-26be-4690-9d7b-d447860e3169
2021-03-08 12:18:32,896 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -c "delete from bundle where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,923 [INFO] root: b'DELETE 1\n'
2021-03-08 12:18:32,924 [INFO] root: RC: 0
2021-03-08 12:18:32,924 [INFO] root: Out: DELETE 1

2021-03-08 12:18:32,924 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select count(*) from image where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,943 [INFO] root: b' 1\n'
2021-03-08 12:18:32,943 [INFO] root: b'\n'
2021-03-08 12:18:32,943 [INFO] root: RC: 0
2021-03-08 12:18:32,944 [INFO] root: Out: 1

2021-03-08 12:18:32,944 [INFO] root: Deleting bundle f004390e-26be-4690-9d7b-d447860e3169 in image table
2021-03-08 12:18:32,944 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -c "delete from image where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,967 [INFO] root: b'DELETE 1\n'
2021-03-08 12:18:32,967 [INFO] root: RC: 0
2021-03-08 12:18:32,967 [INFO] root: Out: DELETE 1

2021-03-08 12:18:32,968 [INFO] root: Execute cmd: psql --host=localhost -U postgres -d lcm -tc "select count(*) from partner_bundle_metadata where bundle_id = 'f004390e-26be-4690-9d7b-d447860e3169';"
2021-03-08 12:18:32,990 [INFO] root: b' 0\n'
2021-03-08 12:18:32,990 [INFO] root: b'\n'
2021-03-08 12:18:32,990 [INFO] root: RC: 0
2021-03-08 12:18:32,990 [INFO] root: Out: 0

2021-03-08 12:18:32,990 [INFO] root: Bundle with ID : f004390e-26be-4690-9d7b-d447860e3169 not found in partner_bundle_metadata table
2021-03-08 12:18:32,991 [INFO] root: Starting LCM service.
2021-03-08 12:18:32,991 [INFO] root: Execute cmd: systemctl start lcm
2021-03-08 12:18:33,135 [INFO] root: RC: 0
2021-03-08 12:18:33,136 [INFO] root: Out:

Going back into your SDDC Manager UI and clicking the Bundle Management page, you will see that your bundle has now been deleted.

It will take a few minutes for the Bundle services to restart, and you may see the message “Depot still initializing”.

Regards

 


Installing and configuring Kasten to protect container workloads on VMware Tanzu Kubernetes Grid

This blog post will take you through the full steps to install and configure Kasten, the container-based enterprise backup software now owned by Veeam Software.

This deployment will be for VMware Tanzu Kubernetes Grid, running on top of VMware vSphere.

You can read how to create backup policies and restore your data in this blog post.

For the data protection demo, I’ll be using my trusty Pac-Man application that has data persistence using MongoDB.

Installing Kasten on Tanzu Kubernetes Grid

In this guide, I am going to use Helm; you can learn how to install it here.

Add the Kasten Helm charts repo.

helm repo add kasten https://charts.kasten.io/
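
If the Kasten repo was already present, refreshing the local chart index ensures you pull the latest K10 chart (an optional but useful step):

helm repo update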

Create a Kubernetes namespace called “kasten-io”

kubectl create namespace kasten-io

Next we are going to use Helm to install the Kasten software into our Tanzu Kubernetes Grid cluster.

helm install k10 kasten/k10 --namespace=kasten-io \
--set externalGateway.create=true \
--set auth.tokenAuth.enabled=true \
--set global.persistence.storageClass=<storage-class-name>

Breaking down the command arguments:

  • --set externalGateway.create=true
    • This creates an external service of type LoadBalancer to allow access to the Kasten K10 dashboard from outside of your cluster.
  • --set auth.tokenAuth.enabled=true
    • This enables token-based authentication for the K10 dashboard, so you log in with a Kubernetes service account token (see the sketch after this list).
  • --set global.persistence.storageClass=<storage-class-name>
    • This sets the storage class to be used for the PVs/PVCs created by the Kasten install. (In a TKG guest cluster there may not be a default storage class.)
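
With token authentication enabled, the dashboard login prompts for a bearer token. A minimal sketch of retrieving one, assuming the release name k10 used above (which creates a k10-k10 service account) and a pre-1.24 Kubernetes cluster; on newer clusters kubectl create token can be used instead:

sa_secret=$(kubectl get serviceaccount k10-k10 --namespace kasten-io -o jsonpath="{.secrets[0].name}")
kubectl get secret $sa_secret --namespace kasten-io -o jsonpath="{.data.token}" | base64 --decode

Paste the decoded token into the K10 dashboard login screen.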

You will be presented with output similar to the below.

NAME: k10
LAST DEPLOYED: Fri Feb 26 01:17:55 2021
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten’s K10 Data Management Platform!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

The K10 dashboard is not exposed externally. To establish a connection to it use the following

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`


The K10 Dashboard is accessible via a LoadBalancer. Find the service's EXTERNAL IP using:
`kubectl get svc gateway-ext --namespace kasten-io -o wide`
And use it in following URL
`http://SERVICE_EXTERNAL_IP/k10/#/`

It will take a few minutes for your pods to be running; you can review them with the following command:

kubectl get pods -n kasten-io

Next, we need to get our LoadBalancer IP address for the external web front end so that we can connect to the Kasten K10 dashboard.

kubectl get svc -n kasten-io
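
The EXTERNAL-IP column of the gateway-ext service is the address to browse to. As a small convenience sketch (not part of the original steps), jsonpath can pull the address straight out of the service object:

kubectl get svc gateway-ext --namespace kasten-io -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Append /k10/#/ to that address to reach the dashboard.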
