Tag Archives: Deploy


Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
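For context, a minimal sketch of the kind of install-config.yaml that openshift-install consumes for a vSphere IPI deployment is shown below; all values (domain, vCenter details, VIPs, credentials) are illustrative placeholders rather than the exact inputs this pipeline builds:

cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: veducate.local          # base DNS domain for the cluster
metadata:
  name: ocp1                        # cluster name / ID
platform:
  vsphere:
    vCenter: vcenter.veducate.local
    username: administrator@vsphere.local
    password: 'VMware1!'
    datacenter: Datacenter
    defaultDatastore: Datastore
    cluster: Cluster
    network: VM-Network
    apiVIP: 192.168.200.10          # internally load-balanced API address
    ingressVIP: 192.168.200.11      # address used by *.apps ingress
pullSecret: '<your Red Hat pull secret>'
sshKey: '<your bastion SSH public key>'
EOF

openshift-install create cluster --dir=. --log-level=info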

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here; alter it per your needs, including which versions of OpenShift you want to deploy.
  • SSH key for bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the openshift-install command line tool
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register the OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register the cluster as a Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register the OpenShift Cluster with Tanzu Mission Control
    • Using the API (see the token-exchange sketch below)
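Both registration steps call their respective APIs with a CSP access token. As a rough illustration (assuming vRA Cloud and Tanzu Mission Control on VMware Cloud Services; the environment variable name is mine), the API token from the pre-requisites is exchanged for an access token like so:

# Exchange a CSP API (refresh) token for a short-lived access token
curl -s -X POST \
  "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "refresh_token=${CSP_API_TOKEN}"

# The "access_token" field in the JSON response is then used as a Bearer token
# in the subsequent vRA and TMC API calls.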
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.


Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk-through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside of the Supervisor Namespace using CI tasks and the kubectl command line tool (see the manifest sketch after this list)
    • Tasks to create a service account inside of the Tanzu Guest Cluster
    • Tasks to create Kubernetes endpoint for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
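For reference, the guest-cluster CI task noted above ultimately applies a TanzuKubernetesCluster manifest against the Supervisor Namespace with kubectl. A minimal sketch, in which the cluster name, namespace, VM class, and storage class are illustrative assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-guest-01              # guest cluster name
  namespace: demo-namespace       # the Supervisor Namespace created by Cloud Assembly
spec:
  distribution:
    version: v1.18                # resolves to the latest matching Tanzu Kubernetes release
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
EOF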
Pre-Requisites

In my Lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

I have placed the various bits of code in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC manager under Infrastructure Tab > Integrations



Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native Azure AKS clusters, although it can manage most Kubernetes distributions.

Therefore, when I was asked where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing additional functions to make these AKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (a CLI sketch follows this list)
    • Create the AKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the Docker host.
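As a rough illustration of the kind of Azure CLI calls such a pipeline task wraps (resource group, cluster name, and sizes are placeholders of mine, not the pipeline’s actual values):

# Create the AKS cluster
az aks create \
  --resource-group veducate-rg \
  --name aks-cluster-01 \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys

# Pull the kubeconfig so later tasks can register the cluster as an endpoint
az aks get-credentials --resource-group veducate-rg --name aks-cluster-01 --file ./aks-cluster-01.kubeconfig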
Pre-Requisites
Creating a Code Stream Pipeline to deploy an Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.


Deploying Tanzu Kubernetes Grid Management Cluster to Microsoft Azure

In this blog post, we will detail a full technical run-through on how to deploy Tanzu Kubernetes Grid (TKG) into Microsoft Azure.

This will be using the new Tanzu CLI (version 1.3, previously the TKG CLI), released in March 2021, to deploy both a new Management Cluster and a Guest Cluster.

Tanzu Kubernetes Grid Cluster Types

TKG has two types of clusters; for full information on TKG concepts, please read this post.

  • Management Cluster

This is the first architectural component to be deployed when creating a TKG instance. The management cluster is a dedicated cluster for the management and operation of your whole TKG instance infrastructure. A management cluster will have Antrea networking enabled by default. It runs Cluster API to create the additional clusters for your workloads to run on, as well as the shared and in-cluster services for all clusters within the instance to use.

It is not recommended that the management cluster be used as a general-purpose compute environment for your application workloads.

  • Tanzu Kubernetes (Guest) Clusters

Once you have deployed your management cluster, you can deploy additional CNCF conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads, managed via your management cluster. These clusters can run different Kubernetes versions as required. These clusters use Antrea networking by default.

These clusters are referred to as Workload Clusters when working with the Tanzu CLI.

I sometimes use the term “Guest” for these clusters, as a cross-over with the vSphere with Tanzu architecture, which has similar concepts to the above but uses the terms “Supervisor Cluster” and “Guest Cluster”.
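As a quick taste of how these two cluster types are driven from the Tanzu CLI once everything is up (the cluster name below is purely illustrative):

# Inspect the management cluster and list all clusters in the instance
tanzu management-cluster get
tanzu cluster list --include-management-cluster

# Create a workload ("guest") cluster and retrieve its admin kubeconfig
tanzu cluster create tkg-workload-01 --plan dev
tanzu cluster kubeconfig get tkg-workload-01 --admin

This post focuses on deploying the management cluster; workload clusters follow the same CLI pattern afterwards.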

Pre-Requisites

For this blog post, I’ll be deploying everything from my local Mac OS X machine. You will need the following:

  • Docker installed with Kubernetes enabled
    • For Windows and macOS Docker clients, you must allocate at least 6 GB of memory in Docker Desktop to accommodate the kind container. See Settings for Docker Desktop in the kind documentation.
  • Install the Tanzu CLI and the Kubectl tool > Instructions here.
    • If you have used the TKG CLI before, note that it is now deprecated.
    • You can find a full command line reference for Tanzu CLI and a comparison of the TKG CLI commands in this documentation link.
  • Install the Azure CLI.
  • Register a Tanzu Kubernetes Grid App on Azure (a CLI sketch follows this list)
    • The full details in the VMware docs for deploying TKG to Azure can be found here.
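One way to handle that app registration from the command line is to create a service principal with the Contributor role; a sketch, assuming subscription-level scope is acceptable for your environment (the linked VMware docs cover the full requirements):

# Create an app registration / service principal for TKG and note the
# appId, password, and tenant values it returns
az ad sp create-for-rbac \
  --name "tkg-app" \
  --role "Contributor" \
  --scopes "/subscriptions/<subscription_id>"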
Log in to the Azure CLI and accept the VM EULA

Before we get started, we need to log into the Azure CLI and accept the EULA for the images used for TKG in Azure. These images are updated with each release of the Tanzu CLI (TKG CLI).

az login

az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}
Deploying a Management Cluster using the UI

From your terminal, run the following command:

tanzu management-cluster create --ui
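This launches the installer UI in your browser. As an aside, the same deployment can also be driven non-interactively from a configuration file; a sketch, assuming the default path where the UI wizard saves its generated configuration:

# The UI wizard saves its settings under ~/.tanzu/tkg/clusterconfigs/
tanzu management-cluster create --file ~/.tanzu/tkg/clusterconfigs/<generated-config>.yaml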



Deploying the VMware Appliance for Folding@Home using Terraform

To simplify the deployment of Folding@Home appliances to vSphere environments, I have written a set of Terraform configuration files (scripts).

You will need two packages downloaded to your jump host: Terraform and Git.

You will also need the VMware Folding@Home appliance OVA, either downloaded locally or hosted at a remote location.

Use git to clone my Terraform repository, which contains the folder called Deploy-FAH.

git clone https://github.com/saintdle/Terraform.git

Move into the “Deploy-FAH” folder, and edit the terraform.tfvars file as needed;

cd Deploy-FAH
vi terraform.tfvars

Below is an example;

// Name of the vSphere server. E.g "vcsa.vmware.local"
vsphere_server = "vcenter.veducate.local"

// User on the vSphere server. E.g "administrator@vsphere.local"
vsphere_user = "administrator@vsphere.local"

// Password of the user on the vSphere server. E.g "password"
vsphere_password = "Password1234!"

// Name of the vSphere data center. E.g "datacenter"
vsphere_datacenter = "Datacenter"

// Name of the vSphere cluster. E.g "Cluster"
vsphere_cluster = "Cluster"

// Name or IP of the vSphere host in the cluster to deploy your VM to. E.g "esxi-01" or "192.168.1.20"
vsphere_host = "10.10.2.4"

// Name of the vSphere data store to use for the VMs. E.g "VSAN"
vsphere_datastore = "Datastore"

// Network to connect virtual machine
vm_network = "Freale_NW1"

// Number of instances to deploy
instance_count = 2

// VM Machine Name (an index will be appended i.e FAH-1, FAH-2,)
vm_name = "dean-test"

// Number of CPUs to set on deployed Virtual Machines
num_cpu = 2

// Memory to set on deployed Virtual Machines (in MB)
memory = 4096

// Name of vSphere Resource Pool to be created. E.g "FAH-VMs"
vsphere_resource_pool = "dean-test"

// Name of VM folder to be created. E.g "FAH-VMs"
vsphere_vm_folder = "dean-test"

// Location of OVA file if using a local location - if using remote location, leave this as null
local_ovf_path = "/home/dean/Deploy-FAH-3/VMware-Appliance-FaH_1.0.4.ova"

// Location of OVA file if using a remote location - if using local location, leave this as null
remote_ovf_path = ""

// Enable SSH in FAH Appliance (True or False)
ssh_enable = "True"

// FAH appliance root password
root_password = "VMware1!"

// FAH Username you wish to be associated with in the statistics tables
fah_user = ""

// FAH Team you wish to be associated with in the statistics tables
fah_team = "52737"

// FAH Passkey to verify your user in the statistical tables (this is optional from FAH project)
fah_passkey = "unique_id"

That’s it, no more changes are needed; it’s as simple as running the following to deploy your appliances:

#This will download the terraform providers as needed

terraform init

#This will show you the planned changes and make sure they are possible

terraform plan

#This will run the configuration to run the deployment

terraform apply

You can use the latest version of Terraform, which is 0.13.5 as of the publishing of this post.

Quick notes

This Terraform configuration uses some advanced configuration in the “FAH-Appliance” folder, under the main.tf file. Here it reads the “remote_ovf_path” variable and acts based on whether it is set. If it is set, the provider deploys the OVA from the remote location; if it is empty, it falls back to “local_ovf_path” and deploys the OVF/OVA from the local location.

  dynamic "ovf_deploy" {
  for_each = "${var.local_ovf_path}" != "" || "${var.remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.local_ovf_path}" != "" ? "${var.local_ovf_path}" : null
  remote_ovf_url = "${var.remote_ovf_path}" != "" ? "${var.remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "VM Network" = data.vsphere_network.network.id
    }
   }
  }

Thanks to Grant Orchard from HashiCorp for helping me with this part of the config.

Interested in where you can take this further? Check out this post from Robert Jenson, using VMware Code Stream for an Infrastructure as Code deployment with GitHub as a source repository and Terraform for the deployment.

Regards