Category Archives: VMware

Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

The Issue

A while ago I was chatting to Michael Cade, and we pondered the question “How do we ensure Kasten is protecting a newly deployed application in our Kubernetes environment?”

We agreed that one of the best ways to keep a Kasten protection policy flexible is to use metadata labels.

We came up with a simple idea: “What if something enforces a known label on the metadata of any application our developers deploy in the future?”

This blog post covers this use case using Tanzu Mission Control with custom policies.

The Solution

One of the products we can use to enforce labels on Kubernetes resources is Open Policy Agent (OPA) Gatekeeper, an external admission controller that lets you create policies governing the admission of resource creations, changes, and updates based on defined criteria.

  • OPA policies are expressed in a high-level declarative language called Rego. (Pronounced “ray-go”.)

Tanzu Mission Control, the fleet-management SaaS tool for your Kubernetes platforms, gives you the ability to create policies of various types to manage the operation and security posture of your Kubernetes clusters and other organisational objects. Under the hood, these policies are implemented using OPA Gatekeeper.
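
To make this concrete, below is a minimal sketch of the kind of policy Gatekeeper enforces: the canonical “required labels” ConstraintTemplate, plus a constraint applying it to Deployments. The object names and the “protect” label key are illustrative choices for this post, not the exact objects TMC generates.

    # A minimal sketch, not the exact objects TMC generates: the canonical
    # Gatekeeper "required labels" ConstraintTemplate, plus a constraint
    # that applies it to Deployments. The "protect" label key is an
    # illustrative choice for this post.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: k8srequiredlabels
    spec:
      crd:
        spec:
          names:
            kind: K8sRequiredLabels
          validation:
            openAPIV3Schema:
              properties:
                labels:
                  type: array
                  items:
                    type: string
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8srequiredlabels
            violation[{"msg": msg}] {
              required := {label | label := input.parameters.labels[_]}
              provided := {label | input.review.object.metadata.labels[label]}
              missing := required - provided
              count(missing) > 0
              msg := sprintf("you must provide labels: %v", [missing])
            }
    ---
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sRequiredLabels
    metadata:
      name: deployments-must-carry-protect-label
    spec:
      match:
        kinds:
          - apiGroups: ["apps"]
            kinds: ["Deployment"]
      parameters:
        labels: ["protect"]
    EOF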

Implementing The Solution

For this “art of the possible” blog post, we are going to keep it really simple and implement a policy that enforces the known label our Kasten protection policy matches on.

Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool, openshift-install, provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
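
As a sketch of what that looks like in practice, here is a minimal install-config.yaml for a vSphere IPI install. Every value below is a placeholder, and field names can vary between openshift-install releases, so check the documentation for your version.

    # Sketch of the install-config.yaml the pipeline templates out.
    # All values are placeholders; field names may vary slightly by
    # openshift-install version.
    mkdir -p ocp1 && cat > ocp1/install-config.yaml <<'EOF'
    apiVersion: v1
    baseDomain: example.com
    metadata:
      name: ocp1                      # becomes {cluster_id}
    platform:
      vsphere:
        vCenter: vcenter.example.com
        username: openshift@vsphere.local
        password: 'REDACTED'
        datacenter: DC1
        defaultDatastore: vsanDatastore
        cluster: Cluster-1
        network: vm-network-1
        apiVIP: 192.168.10.5          # must match api.{cluster_id}.{base_domain}
        ingressVIP: 192.168.10.6      # must match *.apps.{cluster_id}.{base_domain}
    pullSecret: '<contents of your Red Hat pull secret>'
    sshKey: 'ssh-rsa AAAA... bastion-access-key'
    EOF

    # openshift-install consumes the file from the asset directory
    openshift-install create cluster --dir=ocp1 --log-level=info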

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get CSP API access token for vRA Cloud or on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • A container on this Docker host is used to run the pipeline’s tasks
    • You can find the Dockerfile here; alter it to your needs, including which versions of OpenShift you want to deploy.
  • SSH Key for a bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for the OpenShift cluster (a quick way to validate them is shown after this list)
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
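
Before running the pipeline, it’s worth sanity-checking the DNS pre-requisite. The hostnames below are placeholders following the record patterns above.

    # Quick sanity check of the two DNS records before running the pipeline.
    # "ocp1" / "example.com" stand in for {cluster_id} / {base_domain}.
    dig +short api.ocp1.example.com            # should return the API VIP
    dig +short anything.apps.ocp1.example.com  # wildcard: should return the ingress VIP
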
High Level Steps of this Pipeline
  • Create an OpenShift cluster
    • Build an install-config.yaml file to be used by the openshift-install command-line tool
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register the OpenShift cluster with vRA
    • Create a service account on the cluster (see the sketch after this list)
    • Collect details of the cluster
    • Register the cluster as a Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register the OpenShift cluster with Tanzu Mission Control
    • Using the TMC API
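
As a sketch of the service-account task: standard kubectl, with illustrative names. Binding cluster-admin is a lab-grade shortcut you would scope down in production.

    # Illustrative sketch of the "create a service account" step. The names
    # are arbitrary; cluster-admin is a lab-grade shortcut.
    kubectl create serviceaccount vra-sa -n kube-system
    kubectl create clusterrolebinding vra-sa-binding \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:vra-sa

    # Collect the token vRA needs to register the endpoint. On Kubernetes
    # 1.24+ you would use "kubectl create token vra-sa" instead, as token
    # secrets are no longer auto-created.
    TOKEN_SECRET=$(kubectl -n kube-system get serviceaccount vra-sa \
      -o jsonpath='{.secrets[0].name}')
    kubectl -n kube-system get secret "$TOKEN_SECRET" \
      -o jsonpath='{.data.token}' | base64 -d
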
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Postman Collection for Tanzu Mission Control REST APIs

Whilst working with vRA to deploy various Kubernetes clusters and then register them with Tanzu Mission Control (TMC), I decided to use Postman (a great API Explorer tool) to catalogue my work and build out several use cases.

I’ve posted this here:

This collection was created from the TMC API Documentation. This API is version “v1alpha1” and should be treated as such.

So far, I’ve created the following areas/use cases:

  • Login
  • Audits
  • Attach Cluster
  • List Cluster/s
  • Cluster Group/s management
  • Data Protection management
  • Cluster Inspections
Variables inside the collection

I have opted to create the variables inside the collection itself, rather than a separate environment.

Some of the API requests have tests associated, which will populate your variables for you.

You will need your TMC URL and a VMware CSP API Token as your starting point.

TMC API - Postman Collection - Collection Variables

Documentation

Where a request requires changes in the body that are best not kept as a variable, such as naming a backup, I’ve also tried to add that information to the request’s documentation.

TMC API - Postman Collection - Collection Documentation

Getting Started

Under the Login folder, run “Get Access Token”, which will connect to your TMC URL and use the CSP refresh token to generate an access token. This access token is saved to a variable called “accessToken” for use with the other requests.

TMC API - Postman Collection - Login
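
If you want to perform the same exchange outside Postman, the underlying CSP call looks like this; your CSP API token acts as the refresh token, and jq is used here just to pull out the field.

    # Exchange a CSP API (refresh) token for a short-lived access token --
    # the same call the "Get Access Token" request wraps.
    curl -s -X POST \
      'https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize' \
      -H 'Content-Type: application/x-www-form-urlencoded' \
      -d "refresh_token=${CSP_API_TOKEN}" | jq -r '.access_token'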

You will also probably want to run “Get Organisation ID”, as some of the requests require your Org ID; this request saves it to a variable. The ID is gathered by looking at the details of your given CSP token.

Attach Cluster

If you are using the API to attach a new cluster, you will want to run the second request, “Get TMC Agent Installer information”, which will give you the installer link to run in your Kubernetes environment. This data will be written to a variable.

List Clusters

For most of the requests that list information, you can use the query “?searchScope.name=” with the API call to filter for the necessary objects, or you can use the wildcard value *. I’ve added most of the search filters and value formatters to the requests.

To get the full details for a particular named cluster, I have written queries for specified clusters; these require you to provide the management cluster and provisioner of the specified cluster in the query. Essentially, this returns the same information as “Get Clusters List” combined with the searchScope filter.
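
For reference, here is roughly what those calls look like outside Postman. The fullName.* parameter names for the specific-cluster query are my reading of the v1alpha1 spec, so verify them against the API documentation.

    # List clusters, filtered with searchScope (the wildcard "*" also works).
    curl -s "https://${TMC_HOST}/v1alpha1/clusters?searchScope.name=*" \
      -H "Authorization: Bearer ${ACCESS_TOKEN}"

    # Full details of one named cluster. The fullName.* parameter names are
    # my reading of the v1alpha1 spec -- verify against the API docs.
    curl -s "https://${TMC_HOST}/v1alpha1/clusters/my-cluster?fullName.managementClusterName=attached&fullName.provisionerName=attached" \
      -H "Authorization: Bearer ${ACCESS_TOKEN}"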

Wrap-up

I won’t describe every set of requests I’ve created. I’ve tried to build them with the bare minimum of information you need, especially for the POST methods.

If you want to explore the APIs more, you can download the Swagger/OpenAPI spec from VMware yourself and import it into Postman, but personally I found this hard to work with, and the example bodies give you everything, including the response fields you won’t need for a POST.

If you’d like to contribute, please do this via the GitHub link!

Looking for more resources around TMC? Then you can check out my other blogs!

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

Regards

Tanzu Mission Control – TKG Management support and provisioning new clusters

In this blog post, I am going to cover the new support for Tanzu Kubernetes Grid Management clusters on both VMware Cloud on AWS (VMC) and Azure VMware Solution (AVS). This functionality also allows the provisioning of new Tanzu Kubernetes workload clusters (TKC) to the relevant platform, provisioned by the lifecycle management controls within Tanzu Mission Control.

Below are the other blog posts I’ve written covering Tanzu Mission Control.

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application
Release Notes

Below are the relevant release notes for the features I’ll cover. In this blog post, I’ll just be showing screenshots for a VMC environment, however the same applies to AVS as well.

What's New May 26, 2021

New Features and Improvements

    (New Feature update): Tanzu Mission Control now supports the ability to register Tanzu Kubernetes Grid (1.3 & later) management clusters running in vSphere on Azure VMware Solution.

What's New April 30, 2021

New Features and Improvements

    (New Feature update): Tanzu Mission Control now supports the ability to register Tanzu Kubernetes Grid (1.2 & later) management clusters running in vSphere on VMware Cloud on AWS. For a list of supported environments, see Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in VMware Tanzu Mission Control Concepts.
Prerequisites

Deploying the first management cluster itself is not supported by TMC, nor is it supported for a management cluster to deploy workload clusters across platforms. For example, a management cluster running in AWS cannot deploy workload clusters to VMC, AVS, or Azure.

The following requirements are from the product documentation.

  • The management cluster must be deployed as a production cluster with multiple control plane nodes
    • However, in my demo lab I was able to successfully run this using a development deployment.
  • Tanzu Kubernetes Grid workload clusters need at least 4 CPUs and 8 GB of memory
    • Again, I deployed a small instance type (2 vCPU, 4GB RAM) and this didn’t seem to be an issue.
  • Tanzu Kubernetes Grid management clusters (version 1.3 or later) running in vSphere on Azure VMware Solution (AVS).
  • Tanzu Kubernetes Grid management clusters (version 1.2 or later) running in vSphere, including vSphere on VMware Cloud on AWS (version 1.12 or 1.14).
  • Do not attempt to register any other kind of management cluster with Tanzu Mission Control.
  • Tanzu Mission Control does not support the registration of Tanzu Kubernetes Grid management clusters prior to version 1.2.
Registering our Tanzu Kubernetes Grid Management Cluster
  • Go to Administration > Management Clusters > Register Management Cluster > Tanzu Kubernetes Grid

Tanzu Mission Control - Administration - Register Management Cluster - Tanzu Kubernetes Grid
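
The registration itself boils down to applying the manifest TMC generates while your kubeconfig points at the TKG management cluster. The URL below is a placeholder; copy the real link from the TMC console.

    # Apply the registration manifest TMC generates for the management
    # cluster. The URL is a placeholder -- copy the real link from the
    # TMC console after starting the registration.
    kubectl apply -f 'https://<your-org>.tmc.cloud.vmware.com/installer?id=<generated-id>&source=registration'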

Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk-through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside the Supervisor Namespace using CI tasks and the kubectl command-line tool (see the sketch after this list)
    • Tasks to create a service account inside of the Tanzu Guest Cluster
    • Tasks to create Kubernetes endpoint for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
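
To ground the guest-cluster task above, below is roughly the kind of manifest the CI task applies with kubectl: a minimal TanzuKubernetesCluster, with names, counts, VM class, and storage class as placeholders for your environment.

    # Roughly what the guest-cluster CI task applies -- a minimal
    # TanzuKubernetesCluster. Names, counts, VM class and storage class
    # are placeholders for your environment.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TanzuKubernetesCluster
    metadata:
      name: tkg-guest-01
      namespace: demo-namespace      # the supervisor namespace created earlier
    spec:
      distribution:
        version: v1.18               # resolved against the available releases
      topology:
        controlPlane:
          count: 1
          class: best-effort-small
          storageClass: vsan-default-storage-policy
        workers:
          count: 3
          class: best-effort-small
          storageClass: vsan-default-storage-policy
    EOF
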
Pre-Requisites

In my lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

For the various bits of code, I have placed them in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC manager under Infrastructure Tab > Integrations
