Tag Archives: Tanzu Mission Control

Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

The Issue

A while ago I was chatting to Michael Cade, and we pondered the question “How do we ensure Kasten is protecting a newly deployed application in our Kubernetes environment?”

We chatted about how one of the best ways to make your Kasten protection policy flexible is by using metadata labels.

We came up with the simple idea: “What if something forces a known label on the metadata of any applications deployed by our developers in the future?”

This blog post covers that use case using Tanzu Mission Control custom policies.

The Solution

One of the products we can use to enforce labels on a Kubernetes resource is Open Policy Agent (OPA) Gatekeeper, an external admission controller that lets you create policies governing the admission of resource creation, changes, and updates based on defined criteria.

  • OPA policies are expressed in a high-level declarative language called Rego (pronounced “ray-go”).

Tanzu Mission Control, the fleet-management SaaS tool for your Kubernetes platforms, provides the ability to create policies of various types to manage the operation and security posture of your Kubernetes clusters and other organizational objects. These policies are implemented using OPA Gatekeeper.
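
To make this concrete, below is a minimal sketch of the kind of Gatekeeper policy involved: a ConstraintTemplate whose Rego logic reports a violation when required labels are missing, plus a Constraint applying it to Deployments. The template is the well-known required-labels example from the Gatekeeper project; the label name kasten-policy is purely an illustrative assumption, not the exact policy from this post, and TMC's custom policies wrap this same machinery for you.

cat <<'EOF' | kubectl apply -f -
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        # Violation when the object is missing any of the required labels
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: deployments-must-carry-backup-label
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels: ["kasten-policy"]   # hypothetical label a Kasten backup policy could select on
EOF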

Implementing The Solution

For this “art of the possible” blog post, we are going to keep it really simple and implement a policy which covers the following:

Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
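
As a rough illustration of the IPI flow, here is a hypothetical, minimal install-config.yaml for vSphere, followed by the command that consumes it. Every name, address and credential below is a placeholder, and the exact platform fields vary by OpenShift version, so treat this as a sketch rather than a template:

mkdir -p install-dir
cat <<'EOF' > install-dir/install-config.yaml
apiVersion: v1
baseDomain: example.com            # {base_domain}
metadata:
  name: ocp1                       # {cluster_id}
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: svc-openshift@vsphere.local
    password: 'REPLACE_ME'
    datacenter: DC1
    defaultDatastore: Datastore1
    cluster: Cluster1
    network: VM-Network
    apiVIP: 192.168.10.5           # backs api.{cluster_id}.{base_domain}
    ingressVIP: 192.168.10.6       # backs *.apps.{cluster_id}.{base_domain}
pullSecret: '<your Red Hat pull secret>'
sshKey: '<public key for bastion access to the nodes>'
EOF
# openshift-install reads (and consumes) install-config.yaml from the asset directory
openshift-install create cluster --dir install-dir --log-level=info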

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker host runs the container used by the pipeline
    • You can find the Dockerfile here, and alter per your needs, including which versions of OpenShift you want to deploy.
  • SSH key for bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the openshift-install command-line tool
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register OpenShift Cluster with vRA (see the sketch after this list)
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register the cluster as a Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
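The vRA registration steps above boil down to standard kubectl work. A minimal sketch, assuming a cluster-admin service account I've arbitrarily called vra, and a cluster old enough to auto-create service account token secrets:

# Create a service account for vRA and grant it cluster-admin (name is hypothetical)
kubectl -n kube-system create serviceaccount vra
kubectl create clusterrolebinding vra-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:vra
# Collect the details vRA needs: the API server URL and the bearer token
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
SECRET=$(kubectl -n kube-system get serviceaccount vra -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d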
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.
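
Once a variable exists, a pipeline task references it with Code Stream's ${var.<name>} binding syntax, for example (the variable name here is illustrative):

# Inside a Code Stream task script, expand a variable named vcenter-password
export VCENTER_PASSWORD='${var.vcenter-password}'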

Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native Azure AKS clusters, although TMC can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing the additional functions that make these AKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (see the sketch after this list)
    • Register the AKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the Docker host
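Under the covers, those AKS steps map onto plain Azure CLI calls. A minimal sketch of what the pipeline's container runs, with a hypothetical resource group, cluster name and region:

# Names and region are placeholders; the pipeline takes them from vRA variables
az group create --name rg-aks-demo --location uksouth
az aks create --resource-group rg-aks-demo --name aks-demo \
  --node-count 3 --generate-ssh-keys
# Fetch a kubeconfig so the endpoint and TMC registration steps can talk to the cluster
az aks get-credentials --resource-group rg-aks-demo --name aks-demo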
Pre-Requisites
Creating a Code Stream Pipeline to deploy an Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Using vRA to deploy AWS EKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy AWS EKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native AWS EKS clusters, although TMC can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing the additional functions that make these EKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an AWS EKS Cluster (see the sketch after this list)
    • Register the EKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the EKS cluster in Tanzu Mission Control
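For orientation, the same flow expressed as plain CLI calls might look like the sketch below. The cluster name and region are placeholders, the pipeline itself drives the TMC REST API rather than the tmc CLI, and tmc CLI flags vary by version:

# Create the EKS cluster (hypothetical name and region)
eksctl create cluster --name eks-demo --region eu-west-2 --nodes 3
# Attach the cluster to Tanzu Mission Control; 'tmc cluster attach' writes an
# agent manifest which is then applied to the cluster
tmc cluster attach --name eks-demo
kubectl apply -f k8s-attach-manifest.yaml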
Pre-Requisites
  • vRA Cloud access
    • The pipeline can be changed easily for use with vRA on-prem
  • AWS Account that can provision EKS clusters
  • A Docker host to be used by Code Stream
  • Tanzu Mission Control account that can register new clusters
  • VMware Cloud Console Tokens for vRA Cloud and Tanzu Mission Control API access
  • The configuration files for the pipeline can be found in this GitHub repository
Creating a Code Stream Pipeline to deploy an AWS EKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Tanzu Mission Control – Delete a provisioned cluster

In this blog post we are going to cover off how to delete a Tanzu Kubernetes Grid cluster that has been provisioned by Tanzu Mission Control. We will cover the following areas:

Below are the other blog posts in the series.

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection

We are going to use the cluster I created in my last blog post.

Below are the EC2 instances that make up my TMC-provisioned cluster; here I have filtered my view using the tag “tmc.cloud.vmware.com/cluster” plus the cluster name.

Tanzu Mission Control - AWS Consoles - Instances - Filtered tmc.cloud.vmware.com
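
The same filter works from the AWS CLI if you prefer, for example (the cluster name is a placeholder):

# List the instance IDs carrying the TMC cluster tag
aws ec2 describe-instances \
  --filters "Name=tag:tmc.cloud.vmware.com/cluster,Values=my-tkg-cluster" \
  --query "Reservations[].Instances[].InstanceId" --output text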

Deleting a Provisioned cluster in the TMC UI

In the TMC UI, go to the Clusters view, click the three dots next to the cluster you want to remove, and select Delete.

Tanzu Mission Control - Clusters - Delete cluster

Alternatively, within the cluster object view, click Actions, then Delete.

Tanzu Mission Control - Cluster Object - Delete cluster

Both options will bring up the confirmation dialog box shown below.

Select one of the following options:

  • Delete and remove agent (recommended)
    • Remove from TMC and delete agent extensions
  • Manually delete agent extensions
    • A secondary option whereby you remove the agent extensions manually, needed if a cluster delete fails

Enter the name of the cluster to confirm the deletion.
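
If you prefer the command line, the tmc CLI offers an equivalent of the first option. A minimal sketch, with a placeholder cluster name and the caveat that flags vary by CLI version:

# Delete the TMC-provisioned cluster and its agent extensions
tmc cluster delete my-tkg-cluster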

Tanzu Mission Control - Cluster Object - Delete cluster - Confirm