Category Archives: VMware


Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk-through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as a self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside the Supervisor Namespace using CI tasks and the kubectl command-line tool (a minimal sketch of this step follows the list)
    • Tasks to create a service account inside of the Tanzu Guest Cluster
    • Tasks to create Kubernetes endpoint for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
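
To give a flavour of the kubectl-based CI tasks, below is a minimal sketch of the guest cluster provisioning step. The server address, namespace and file names are placeholders from my lab, and the manifest is a standard TanzuKubernetesCluster definition:

# Log in to the Supervisor Cluster (server address and credentials are placeholders)
kubectl vsphere login --server=192.168.200.2 --insecure-skip-tls-verify \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace veducate-namespace

# Apply the TanzuKubernetesCluster manifest to create the guest cluster
kubectl apply -f guest-cluster.yaml

# Check provisioning status until the cluster reports running
kubectl get tanzukubernetescluster -n veducate-namespace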

Pre-Requisites

In my Lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

The various bits of code are available in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC manager under Infrastructure Tab > Integrations
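
For reference, the cloud template for a supervisor namespace can be as small as the sketch below. This is from memory of vRA 8.3, where the resource type is Cloud.Tanzu.Namespace; verify the type and properties against your own Cloud Assembly version:

formatVersion: 1
inputs:
  namespace_name:
    type: string
    title: Namespace Name
resources:
  Cloud_Tanzu_Namespace_1:
    type: Cloud.Tanzu.Namespace
    properties:
      name: '${input.namespace_name}'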

Continue reading Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters


vSphere with Tanzu – cidrBlocks intersects with the network range of the external ip pools

The Issue

When deploying a vSphere with Tanzu guest cluster via the command line, I hit the following error:

kubectl apply -f cluster.yaml

Error from server (spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools in network provider's configuration, spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools LB in network provider's configuration): 

error when creating "cluster.yaml": admission webhook "default.validating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools in network provider's configuration, spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools LB in network provider's configuration

The Cause

The default CIDR block used by vSphere with Tanzu for pod networking is 192.168.0.0/16, and for services networking it is 10.96.0.0/12. Therefore, if anything in your Workload Management setup overlaps with these ranges, such as (in my case) the load balancing configuration when integrating with NSX-T, the deployment will fail.

(Screenshot: Cluster > Namespace > Network – workload configuration)

This will happen if you use a deployment YAML for your cluster such as the one below: no pod/service networking settings are specified, so the defaults are used.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: veducate-cluster
  namespace: deanl
spec:
  distribution:
    version: v1.18.15
  topology:
    controlPlane:
      class: best-effort-small
      count: 1
      storageClass: management-storage-policy-thin
    workers:
      class: best-effort-small
      count: 3
      storageClass: management-storage-policy-thin
  settings:
    network:
      cni:
        name: calico
    storage:
      defaultClass: management-storage-policy-thin

The Fix
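
In short, you need to declare the pod (and, if necessary, service) CIDR ranges explicitly in the cluster spec so they no longer overlap with the ip pools in your network provider's configuration. A minimal sketch of the relevant settings block, using placeholder ranges that you should swap for ranges free in your environment:

  settings:
    network:
      cni:
        name: calico
      pods:
        cidrBlocks:
          - 172.20.0.0/16
      services:
        cidrBlocks:
          - 172.30.0.0/16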

Continue reading vSphere with Tanzu – cidrBlocks intersects with the network range of the external ip pools


Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’

The Issue

When deploying Tanzu Kubernetes Grid to AWS, the deployment was failing with the following output:

unable to set up management cluster, : unable to wait for cluster and get the cluster kubeconfig: error waiting for cluster to be provisioned (this may take a few minutes): cluster creation failed, reason:'InstanceProvisionFailed @ Machine/tkg-aws-mgmt-control-plane-dqb4v', message:'1 of 2 completed'
The Cause

When we reviewed the CAPA (Cluster API Provider AWS) logs, we found the following errors logged: Continue reading Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’


Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native Azure AKS clusters, although it can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and adding the extra steps needed to make these AKS clusters usable.

High Level Steps

  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (the main CLI calls are sketched after this list)
    • Register the AKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the Docker host
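
Under the hood, the pipeline's CI tasks essentially wrap the Azure and Tanzu Mission Control CLIs. A rough sketch of the calls involved, with placeholder names and with flags from memory (check az aks create --help and tmc cluster attach --help in your environment):

# Create the AKS cluster (resource group and cluster name are placeholders)
az aks create --resource-group veducate-rg --name veducate-aks --node-count 3 --generate-ssh-keys

# Fetch the kubeconfig so later tasks can talk to the new cluster
az aks get-credentials --resource-group veducate-rg --name veducate-aks

# Generate the Tanzu Mission Control attach manifest and apply it to the cluster
tmc cluster attach --name veducate-aks --cluster-group default
kubectl apply -f k8s-attach-manifest.yaml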

Pre-Requisites

Creating a Code Stream Pipeline to deploy an Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control

Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted. Continue reading Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control


vRA Code Stream – Preserving files and artifacts created in a CI Task

Whilst creating a pipeline and using CI tasks to run some CLI tools, I needed to save the files output by the container used for the CI task so that I could use them once the pipeline had completed.

Code Stream has a feature for CI tasks called “Preserve Artifacts” to enable this, whereby files in your working directory are saved to the “/sharedPath” folder location on the Docker host where your container runs.
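
As a quick illustration (the tool and file names here are made up), a CI task script only has to drop its output into the configured working directory:

# Runs inside the CI task container; /workspace is the pipeline's working directory
cd /workspace
my-cli-tool generate > output.json   # hypothetical tool producing a file

# With "Preserve Artifacts" enabled on this task, output.json is copied to the
# /sharedPath folder on the Docker host once the task finishes.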

Below I’m going to show you how to use this feature.

  • First, on your pipeline, configure a Working Directory

(Screenshot: Code Stream pipeline Workspace – Working Directory configuration)

Continue reading vRA Code Stream – Preserving files and artifacts created in a CI Task