Tag Archives: Tanzu Mission Control

vRealize Automation – Deploying a GKE Cluster with Code Stream, add to Tanzu Mission Control & Tanzu Service Mesh

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Google Kubernetes Clusters (GKE), register them as:

  • Kubernetes endpoints in vRA Cloud Assembly and Code Stream
  • An attached cluster in Tanzu Mission Control
  • An onboarded cluster in Tanzu Service Mesh

This post mirrors my other blog posts covering similar concepts:

Requirement

After covering EKS and AKS, I thought it was worthwhile to finish off the gang and deploy GKE clusters using Code Stream.

Building on my previous work, I also added the extra steps to onboard this cluster into Tanzu Service Mesh.

High Level Steps
  • Create a Code Stream Pipeline
    • Create a Google GKE Cluster (a rough gcloud sketch follows this list)
    • Register the GKE cluster as an endpoint in both vRA Code Stream and Cloud Assembly
    • Register GKE cluster in Tanzu Mission Control
    • Onboard the cluster to Tanzu Service Mesh
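
As a rough illustration of the cluster-creation step above, the pipeline task ultimately runs something along the lines of the following gcloud commands inside the Docker host. The project, zone, cluster name and node count here are placeholders rather than values taken from the pipeline itself:

# Hypothetical values - swap in your own project, zone, cluster name and node count
gcloud container clusters create "codestream-gke-01" \
  --project "my-gcp-project" \
  --zone "europe-west2-a" \
  --num-nodes 3

# Fetch a kubeconfig for the new cluster so the later registration tasks can use kubectl
gcloud container clusters get-credentials "codestream-gke-01" \
  --project "my-gcp-project" \
  --zone "europe-west2-a"
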
Pre-Requisites
Creating a Code Stream Pipeline to deploy a Google GKE Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

Continue reading vRealize Automation – Deploying a GKE Cluster with Code Stream, add to Tanzu Mission Control & Tanzu Service Mesh

VMware Cloud on AWS – Managed Tanzu Kubernetes Grid with Tanzu Mission Control

In my previous blog post, I detailed a full end-to-end guide to deploying and configuring the managed Tanzu Kubernetes Service offering as part of VMware Cloud on AWS (VMC), finishing with some example application deployments and configurations.

In this blog post, I am moving on to show you how to integrate this environment with Tanzu Mission Control, which will provide fleet management for your Kubernetes instances. I’ve written several blog posts on TMC previously, which you can find below:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application
Management with Tanzu Mission Control

The first step is to connect the Supervisor cluster running in VMC to our Tanzu Mission Control environment.

Connecting the Supervisor Cluster to TMC

Within the TMC console, go to:

  • Administration
  • Management Clusters
  • Register Management Cluster
    • Select “vSphere with Tanzu”

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster

On the Register Management Cluster page:

  • Set the friendly name for the cluster in TMC
  • Select the default cluster group for managed workload clusters to be added into
  • Set any description and labels as necessary

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Name and Assign

  • Proxy settings for a Supervisor Cluster running in VMC are not supported, so ignore Step 2.

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Proxy Configuration

  • Copy the registration URL.

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Register

  • Log into your vSphere with Tanzu Supervisor cluster.
  • Find the namespace that identifies your cluster and is used for TMC configurations by running “kubectl get ns”
    • It will start with “svc-tmc-xx”
    • Copy this namespace name

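As a minimal sketch of those steps run on the Supervisor Cluster: the namespace, organisation name and registration URL below are placeholders, and the AgentInstall resource follows VMware's documented vSphere with Tanzu registration method, so verify the fields against the current documentation before using it.

# Find the TMC configuration namespace on the Supervisor Cluster
kubectl get ns | grep svc-tmc

# Apply the copied registration URL into that namespace
cat <<'EOF' | kubectl apply -f -
apiVersion: installers.tmc.cloud.vmware.com/v1alpha1
kind: AgentInstall
metadata:
  name: tmc-agent-installer-config
  namespace: svc-tmc-c8                    # replace with your svc-tmc-xx namespace
spec:
  operation: INSTALL
  registrationLink: https://myorg.tmc.cloud.vmware.com/installer?id=...   # replace with the copied URL
EOF
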
Managed Tanzu Kubernetes Service - VMC - TMC - Supervisor Cluster - Kubectl get namespace

Continue reading VMware Cloud on AWS – Managed Tanzu Kubernetes Grid with Tanzu Mission Control

Tanzu Mission Control – Upgrading attached Tanzu Kubernetes Grid Clusters fails with error “updates to immutable fields are not allowed”

The Issue

When trying to upgrade an attached Tanzu Kubernetes Grid cluster via Tanzu Mission Control (TMC), whether it was created by a Tanzu management cluster or via the Tanzu Kubernetes Grid Service (vSphere with Tanzu), the console gives you an error message similar to:

API Error: Failed to upgrade cluster: (target=mc:01G4BGAVKHHB6C3JJ5R0WA44NM, intentId=01G4CMP025ZHEBQ000E4SM996H): admission webhook "default.validating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: updates to immutable fields are not allowed (invalid argument)
I’ve captured some screenshots below of the process.
Tanzu Mission Control - Upgrade vSphere with Tanzu Cluster - Tanzu Kubernetes Grid Service
Tanzu Mission Control - Upgrade vSphere with Tanzu Cluster - Tanzu Kubernetes Grid Service - Upgrade Cluster
Tanzu Mission Control - Upgrade Cluster - Error Message - admission webhook default.validating.tanzukubernetescluster.run.tanzu.vmware.com denied the request

The Cause

Tanzu Mission Control doesn’t keep information about the Tanzu cluster’s CNI configuration. Today, TMC doesn’t support upgrading clusters that are provisioned using Calico, and this is not documented in the TMC documentation.

If you provision a cluster using TMC, it will use the Antrea CNI, and you cannot change this.

Below you can see that my cluster was provisioned using the Calico CNI.

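If you prefer to check this from the command line, something like the following should show the CNI the cluster was built with; the jsonpath assumes the v1alpha1/v1alpha2 TanzuKubernetesCluster schema, and the cluster name and namespace are placeholders:

# Print the CNI recorded in the cluster spec
kubectl get tanzukubernetescluster my-cluster -n my-namespace \
  -o jsonpath='{.spec.settings.network.cni.name}'
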
Tanzu Mission Control - Upgrade Cluster Fails - kubectl get tanzukubernetescluster

The Fix

Upgrade the Tanzu Cluster outside of Tanzu Mission Control.
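
For a cluster built by the Tanzu Kubernetes Grid Service, that means driving the upgrade through the TanzuKubernetesCluster object on the Supervisor Cluster. A minimal sketch of the documented patch-based approach follows; the cluster name, namespace and target version are placeholders, and the exact spec fields depend on the TanzuKubernetesCluster API version you are running:

# Null out fullVersion and set the new version; the Supervisor then reconciles the upgrade
kubectl patch tanzukubernetescluster my-cluster -n my-namespace --type merge \
  -p '{"spec":{"distribution":{"fullVersion":null,"version":"v1.21"}}}'

# Watch the rollout
kubectl get tanzukubernetescluster my-cluster -n my-namespace -w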

Regards

Dean Lewis

Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

The Issue

A while ago I was chatting to Michael Cade, and we pondered the question “How do we ensure Kasten is protecting a newly deployed application in our Kubernetes environment”.

We chatted about how one of the best ways to make your Kasten protection policy flexible is by using metadata labels.

We came up with the simple idea: “What if something forces a known label on the metadata of any applications deployed by our developers in the future?”

This blog post covers this use case using Tanzu Mission Control with custom policies.

The Solution

One of the products we can use to enforce labels on a Kubernetes resource is Open Policy Agent (OPA) Gatekeeper, an external admission controller that lets you create policies governing the admission of resource creations, changes, and updates based on defined criteria.

  • OPA policies are expressed in a high-level declarative language called Rego. (Pronounced “ray-go”.)

Tanzu Mission Control, the fleet management SaaS tool for managing your Kubernetes platforms, provides the ability to create policies of various types to manage the operation and security posture of your Kubernetes clusters and other organizational objects; these policies are implemented using OPA Gatekeeper.

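Outside of TMC, the raw Gatekeeper equivalent of forcing a label looks roughly like the constraint below. It assumes the K8sRequiredLabels ConstraintTemplate from the Gatekeeper documentation is already installed (parameter schemas differ slightly between that demo template and the gatekeeper-library version), and the label key is a hypothetical one that a Kasten policy could select on; within TMC the same intent is expressed through its custom policy templates:

cat <<'EOF' | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-backup-label
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels: ["backup-policy"]   # hypothetical label key for a Kasten policy to match on
EOF
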
Implementing The Solution

For this “art of the possible” blog post, we are going to keep it really simple and implement a policy which covers the following:

Continue reading Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here, and alter per your needs, including which versions of OpenShift you want to deploy.
  • SSH Key for a bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the OpenShift-Install command line tool (a trimmed example follows this list)
    • Create cluster based on number of user provided inputs and vRA Variables
  • Register OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register cluster as Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
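
As context for that first stage, the pipeline essentially templates out an install-config.yaml before calling openshift-install. A heavily trimmed sketch for vSphere IPI is below; every value is a placeholder, and the platform.vsphere schema changes between OpenShift releases, so check the documentation for the version you deploy:

cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com                  # the base_domain used in the DNS records above
metadata:
  name: ocp-cluster-01                   # the cluster_id used in the DNS records above
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: openshift@vsphere.local
    password: "REPLACE_ME"
    datacenter: Datacenter
    defaultDatastore: Datastore01
    cluster: Cluster01
    network: VM-Network
    apiVIP: 192.168.10.10                # resolves to api.{cluster_id}.{base_domain}
    ingressVIP: 192.168.10.11            # resolves to *.apps.{cluster_id}.{base_domain}
pullSecret: '<your pull secret JSON>'
sshKey: 'ssh-rsa AAAA... bastion-key'
EOF

# openshift-install then consumes the file and provisions the cluster
openshift-install create cluster --dir . --log-level=info
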
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream