Category Archives: VMware


Tanzu Mission Control – TKG Management support and provisioning new clusters

In this blog post, I am going to cover the new support for Tanzu Kubernetes Grid management clusters on both VMware Cloud on AWS (VMC) and Azure VMware Solution (AVS). This functionality also allows new Tanzu Kubernetes workload clusters (TKC) to be provisioned to the relevant platform using the lifecycle management controls within Tanzu Mission Control.

Below are the other blog posts I’ve written covering Tanzu Mission Control.

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
Release Notes

Below are the relevant release notes for the features I’ll cover. In this blog post, I’ll just be showing screenshots from a VMC environment; however, the same applies to AVS as well.

What's New May 26, 2021

New Features and Improvements

    (New Feature update): Tanzu Mission Control now supports the ability to register Tanzu Kubernetes Grid (1.3 & later) management clusters running in vSphere on Azure VMware Solution.

What's New April 30, 2021

New Features and Improvements

    (New Feature update): Tanzu Mission Control now supports the ability to register Tanzu Kubernetes Grid (1.2 & later) management clusters running in vSphere on VMware Cloud on AWS. For a list of supported environments, see Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in VMware Tanzu Mission Control Concepts.
Prerequisites

Note that the initial deployment of the management cluster itself is not performed by TMC; the management cluster must already exist before it can be registered. It is also not supported for a management cluster to deploy workload clusters across platforms. For example, a management cluster running in AWS cannot deploy workload clusters to VMC, AVS, or Azure.

The following requirements are from the product documentation; a quick sanity check of the management cluster before registering it is sketched after the list.

  • The management cluster must be deployed as a production cluster with multiple control plane nodes
    • However, in my demo lab I was able to successfully run this using a development deployment.
  • Tanzu Kubernetes Grid workload clusters need at least 4 CPUs and 8 GB of memory
    • Again, I deployed a small instance type (2 vCPU, 4GB RAM) and this didn’t seem to be an issue.
  • Tanzu Kubernetes Grid management clusters (version 1.3 or later) running in vSphere on Azure VMware Solution (AVS).
  • Tanzu Kubernetes Grid management clusters (version 1.2 or later) running in vSphere, including vSphere on VMware Cloud on AWS (version 1.12 or 1.14).
  • Do not attempt to register any other kind of management cluster with Tanzu Mission Control.
  • Tanzu Mission Control does not support the registration of Tanzu Kubernetes Grid management clusters prior to version 1.2.
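
Before moving on to registration, it can be worth confirming that your kubectl context is pointed at the management cluster and that the cluster looks healthy. Below is a minimal sketch, assuming the TKG 1.3+ tanzu CLI (TKG 1.2 ships the older tkg CLI with an equivalent command):

# Confirm the current kubectl context is the management cluster's admin context
kubectl config current-context

# Show the management cluster, its nodes and provider status (TKG 1.3+ tanzu CLI)
tanzu management-cluster get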
Registering our Tanzu Kubernetes Grid Management Cluster
  • Go to Administration > Management Clusters > Register Management Cluster > Tanzu Kubernetes Grid

Tanzu Mission Control - Administration - Register Management Cluster - Tanzu Kubernetes Grid

Continue reading Tanzu Mission Control – TKG Management support and provisioning new clusters


Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside of the Supervisor namespace using CI Tasks and the kubectl command line tool
    • Tasks to create a service account inside the Tanzu Guest Cluster (a kubectl sketch of this step follows the list)
    • Tasks to create Kubernetes endpoint for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
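
As a rough idea of what the service account task boils down to, here is a minimal kubectl sketch. The vra-sa name, the kube-system namespace and the cluster-admin binding are placeholders of my own; the token retrieved at the end is what the Cloud Assembly and Code Stream Kubernetes endpoints would consume:

# Create a service account in the guest cluster for vRA / Code Stream to use
kubectl create serviceaccount vra-sa -n kube-system

# Bind it to cluster-admin (placeholder scope - tighten to suit your environment)
kubectl create clusterrolebinding vra-sa-crb \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:vra-sa

# Pull the bearer token from the service account's secret for use in the Kubernetes endpoints
kubectl get secret -n kube-system \
  $(kubectl get serviceaccount vra-sa -n kube-system -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d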
Pre-Requisites

In my Lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

For the various bits of code, I have placed them in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC manager under Infrastructure Tab > Integrations

Continue reading Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters


vSphere with Tanzu – cidrBlocks intersects with the network range of the external ip pools

The Issue

When deploying a vSphere with Tanzu guest cluster via the command line, I hit the following error:

kubectl apply -f cluster.yaml

Error from server (spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools in network provider's configuration, spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools LB in network provider's configuration): 

error when creating "cluster.yaml": admission webhook "default.validating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools in network provider's configuration, spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools LB in network provider's configuration

The Cause

The default CIDR block used by vSphere with Tanzu for pod networking is 192.168.0.0/16. Therefore, if anything in your Workload Management setup overlaps with this range, such as (in my case) the load balancing configuration when integrating with NSX-T, you will end up with a failure.

Cluster - Namespace - Network - workload configuration

This will happen if you use a deployment YAML for your cluster such as the one below, where no pod networking settings are specified, so the default is chosen.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: veducate-cluster
  namespace: deanl
spec:
  distribution:
    version: v1.18.15
  topology:
    controlPlane:
      class: best-effort-small
      count: 1
      storageClass: management-storage-policy-thin
    workers:
      class: best-effort-small
      count: 3
      storageClass: management-storage-policy-thin
  settings:
    network:
      cni:
        name: calico
    storage:
      defaultClass: management-storage-policy-thin
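
For reference, the same settings.network section can also carry explicit pod and service CIDRs. Below is a minimal sketch of that part of the spec with placeholder ranges; you would choose ranges that do not overlap the external IP pools or load balancer networks configured in NSX-T:

  settings:
    network:
      cni:
        name: calico
      pods:
        cidrBlocks:
          - "172.20.0.0/16"   # placeholder - pick a range that does not overlap your NSX-T pools
      services:
        cidrBlocks:
          - "172.30.0.0/16"   # placeholder - the default services range is 10.96.0.0/12
    storage:
      defaultClass: management-storage-policy-thin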
The Fix

Continue reading vSphere with Tanzu – cidrBlocks intersects with the network range of the external ip pools


Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’

The issue

When deploying Tanzu Kubernetes Grid to AWS, the deployment was failing with the following output:

unable to set up management cluster, : unable to wait for cluster and get the cluster kubeconfig: error waiting for cluster to be provisioned (this may take a few minutes): cluster creation failed, reason:'InstanceProvisionFailed @ Machine/tkg-aws-mgmt-control-plane-dqb4v', message:'1 of 2 completed'
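
To dig into a failure like this, a good first step is the Cluster API Provider AWS (CAPA) controller logs on the temporary kind bootstrap cluster that the installer spins up. A minimal sketch, assuming the provider's default capa-system namespace and controller deployment name:

# List contexts to find the temporary kind bootstrap cluster created by the installer,
# then switch to it (the context name here is a placeholder)
kubectl config get-contexts
kubectl config use-context kind-tkg-kind-bootstrap

# Tail the CAPA controller logs (default namespace/deployment for Cluster API Provider AWS)
kubectl logs -n capa-system deployment/capa-controller-manager -f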
The Cause

When we reviewed the CAPA (Cluster API Provider AWS) logs, we found the following errors logged:

Continue reading Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’


Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native Azure AKS clusters, although it can manage most Kubernetes distributions.

Therefore, when I was asked whether VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing the additional steps needed to make these AKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (a CLI sketch of this step follows the list)
    • Create AKS cluster as endpoint in both Code Stream and Cloud Assembly
    • Register AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the docker host.
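
As a rough sketch of what the cluster-creation task wraps, assuming the Azure CLI is available in the Code Stream container image (the resource group, cluster name and node count are placeholders), the kubeconfig retrieved at the end is what the later endpoint and Tanzu Mission Control steps consume:

# Create the AKS cluster (names and sizing are placeholders)
az aks create \
  --resource-group veducate-rg \
  --name veducate-aks \
  --node-count 3 \
  --generate-ssh-keys

# Fetch the kubeconfig for the new cluster; this is what the vRA / Code Stream
# Kubernetes endpoints and the Tanzu Mission Control registration step use afterwards
az aks get-credentials --resource-group veducate-rg --name veducate-aks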
Pre-Requisites
Creating a Code Stream Pipeline to deploy a Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control