Tag Archives: Deploy


Deploying Tanzu Kubernetes Grid Management Cluster to Microsoft Azure

In this blog post, we will detail a full technical run-through of how to deploy Tanzu Kubernetes Grid (TKG) to Microsoft Azure.

We will use the new Tanzu CLI (version 1.3, previously the TKG CLI), released in March 2021, to deploy both a new Management Cluster and a Guest Cluster.

Tanzu Kubernetes Grid Cluster Types

TKG has two types of clusters; for full information on TKG concepts, please read this post.

  • Management Cluster

This is the first architectural component to be deployed when creating a TKG instance. The management cluster is dedicated to the management and operation of your whole TKG instance infrastructure, and has Antrea networking enabled by default. It runs Cluster API to create the additional clusters for your workloads, as well as the shared and in-cluster services used by all clusters within the instance.

It is not recommended that the management cluster be used as a general-purpose compute environment for your application workloads.

  • Tanzu Kubernetes (Guest) Clusters

Once you have deployed your management cluster, you can deploy additional CNCF-conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads and are managed via your management cluster. They can run different Kubernetes versions as required and use Antrea networking by default.

These clusters are referred to as Workload Clusters when working with the Tanzu CLI.
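
Once the management cluster is running, creating a workload cluster is a single command. A minimal sketch (the cluster name is an assumption; "dev" is one of the default plans):

#Create a workload cluster using the default "dev" plan
tanzu cluster create my-workload-cluster --plan dev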

I sometimes use the term “Guest” for these clusters as a cross-over with the vSphere with Tanzu architecture, which has similar concepts but uses the terms “Supervisor Cluster” and “Guest Cluster”.

Pre-Requisites

For this blog post, I’ll be deploying everything from my local macOS machine. You will need the following:

  • Docker installed and running (the installer bootstraps TKG using a local kind cluster)
    • For Windows and macOS Docker clients, you must allocate at least 6 GB of memory in Docker Desktop to accommodate the kind container. See Settings for Docker Desktop in the kind documentation.
  • Install the Tanzu CLI and the kubectl tool > Instructions here.
    • If you have used the TKG CLI before, note that it is now deprecated.
    • You can find a full command line reference for Tanzu CLI and a comparison of the TKG CLI commands in this documentation link.
  • Install the Azure CLI.
  • Register a Tanzu Kubernetes Grid App on Azure (a minimal service-principal sketch follows this list)
    • The full details in the VMware docs for deploying TKG to Azure can be found here.
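
Registering the app essentially means creating an Azure service principal for TKG to authenticate with. A minimal sketch with the Azure CLI (the display name is an assumption; follow the VMware docs above for the authoritative steps):

#Create a service principal for TKG (record the appId, password and tenant values from the output)
az ad sp create-for-rbac --name tkg-app --role Contributor --scopes /subscriptions/{subscription_id}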
Log in to the Azure CLI and accept the VM EULA

Before we get started, we need to log in to the Azure CLI and accept the EULA for the VM images used by TKG in Azure. These images are updated with each release of the Tanzu CLI (formerly the TKG CLI).

az login

az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}
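
To confirm the terms were accepted, you can read them back with the same parameters:

#Verify the EULA acceptance - the output should show "accepted": true
az vm image terms show --publisher vmware-inc --offer tkg-capi --plan k8s-1dot20dot4-ubuntu-2004 --subscription {subscription_id}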
Deploying a Management Cluster using the UI

From your terminal, run the following command:

tanzu management-cluster create --ui
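
This launches the installer wizard in your local browser. If you prefer a non-interactive deployment, the same command accepts a saved configuration file instead (a sketch; the file path is an assumption):

#Non-interactive alternative: deploy from a saved cluster configuration file
tanzu management-cluster create --file ~/.tanzu/tkg/clusterconfigs/azure-mgmt.yaml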

Continue reading Deploying Tanzu Kubernetes Grid Management Cluster to Microsoft Azure


Deploying the VMware Appliance for Folding@Home using Terraform

To simplify the deployment of Folding@Home appliances to vSphere environments, I have written a set of Terraform configuration files.

You will need two packages on your jump host: Terraform and Git.

You will also need to either download the VMware Folding@Home Appliance locally, or host it at a remote location.

Use Git to download my Terraform repository, which contains the folder called Deploy-FAH.

git clone https://github.com/saintdle/Terraform.git

Move into the “Deploy-FAH” folder and edit the terraform.tfvars file as needed:

cd Deploy-FAH
vi terraform.tfvars

Below is an example:

// Name of the vSphere server. E.g "vcsa.vmware.local"
vsphere_server = "vcenter.veducate.local"

// User on the vSphere server. E.g "administrator@vsphere.local"
vsphere_user = "administrator@vsphere.local"

// Password of the user on the vSphere server. E.g "password"
vsphere_password = "Password1234!"

// Name of the vSphere data center. E.g "datacenter"
vsphere_datacenter = "Datacenter"

// Name of the vSphere cluster. E.g "Cluster"
vsphere_cluster = "Cluster"

// Name or IP of the vSphere host in the cluster to deploy your VM to. E.g "esxi-01" or "192.168.1.20"
vsphere_host = "10.10.2.4"

// Name of the vSphere data store to use for the VMs. E.g "VSAN"
vsphere_datastore = "Datastore"

// Network to connect virtual machine
vm_network = "Freale_NW1"

// Number of instances to deploy
instance_count = 2

// VM Machine Name (an index will be appended i.e FAH-1, FAH-2,)
vm_name = "dean-test"

// Number of CPUs to set on deployed Virtual Machines
num_cpu = 2

// Memory to set on deployed Virtual Machines (in MB)
memory = 4096

// Name of vSphere Resource Pool to be created. E.g "FAH-VMs"
vsphere_resource_pool = "dean-test"

// Name of VM folder to be created. E.g "FAH-VMs"
vsphere_vm_folder = "dean-test"

// Location of OVA file if using a local location - if using a remote location, leave this empty
local_ovf_path = "/home/dean/Deploy-FAH-3/VMware-Appliance-FaH_1.0.4.ova"

// URL of the OVA file if using a remote location - if using a local location, leave this empty
remote_ovf_path = ""

// Enable SSH in FAH Appliance (True or False)
ssh_enable = "True"

// FAH appliance root password
root_password = "VMware1!"

// FAH Username you wish to be associated with in the statistics tables
fah_user = ""

// FAH Team you wish to be associated with in the statistics tables
fah_team = "52737"

// FAH Passkey to verify your user in the statistical tables (this is optional from FAH project)
fah_passkey = "unique_id"

That’s it, no further changes are needed. Deploying your appliances is as simple as running the following:

#This will download the Terraform providers as needed
terraform init

#This will show you the planned changes and make sure they are possible
terraform plan

#This will run the configuration to perform the deployment
terraform apply
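
If you need to remove the appliances later, the same configuration can tear down everything it created (standard Terraform behaviour, nothing specific to this repository):

#This will remove the deployed appliances, resource pool and folder
terraform destroy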

You can use the latest version of Terraform; version 0.13.5 is current as of the publishing of this post.

Quick notes

This Terraform configuration uses some advanced configuration in the “FAH-Appliance” folder, within the main.tf file. Here it reads the “remote_ovf_path” variable and acts based on whether or not it is set. If it is, Terraform deploys the appliance from the remote location; if it is empty, Terraform falls back to “local_ovf_path” and deploys the OVF/OVA from the local location.

  dynamic "ovf_deploy" {
  for_each = "${var.local_ovf_path}" != "" || "${var.remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.local_ovf_path}" != "" ? "${var.local_ovf_path}" : null
  remote_ovf_url = "${var.remote_ovf_path}" != "" ? "${var.remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "VM Network" = data.vsphere_network.network.id
    }
   }
  }

Thanks to Grant Orchard from HashiCorp for helping me with this part of the config.

Interested in taking this further? Check out this post from Robert Jenson, using VMware CodeStream for an Infrastructure as Code deployment, with GitHub as the source repository and Terraform for the deployment.

Regards


How to deploy the VMware Appliance for Folding@Home

In this blog post we will go through the steps to deploy the VMware Appliance for Folding@Home to:

  • vCenter
  • Standalone ESXi host
  • VMware Fusion/Workstation

And also cover some basic troubleshooting.

Getting started with the VMware Folding@Home Appliance (vBrownBag Recording)

Deploy the VMware Appliance for Folding@Home to vCenter

Continue reading How to deploy the VMware Appliance for Folding@Home


How to specify your vSphere virtual machine resources when deploying Red Hat OpenShift

When deploying Red Hat OpenShift to the VMware vSphere platform, there are two methods:

  • User Provisioned Infrastructure (UPI)
  • Installer Provisioned Infrastructure (IPI)

There are several great blogs covering both options and deployment methods.

In this blog, we are going to use the IPI method but customize the settings of the Virtual Machines that are deployed, setting CPU and Memory values that differ from the defaults.
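
As a preview of where we are heading, the sizing lives in the install-config.yaml that openshift-install generates. A sketch of the workflow (the directory name is an assumption, and the sizing values you set will be your own):

#Generate the install config, then edit the vSphere sizing fields under
#controlPlane/compute > platform > vsphere (cpus, coresPerSocket, memoryMB, osDisk.diskSizeGB)
openshift-install create install-config --dir=ocp46
vi ocp46/install-config.yaml
openshift-install create cluster --dir=ocp46 --log-level=info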

Getting Started
Setting up your Jump host Machine

I’ll be using an Ubuntu machine as my jump host for the deployment.

Download the OpenShift-Install tool and the oc command-line tool (I’ve used version 4.6.4 in my install).

Extract the files and copy them to your /usr/local/bin directory:

tar -zxvf openshift-client-linux.tar.gz
tar -zxvf openshift-install-linux.tar.gz
sudo cp oc kubectl openshift-install /usr/local/bin/

Have an SSH key available on your jump box, so that you can connect to your CoreOS VMs once they are deployed, for troubleshooting purposes.
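
If you don’t have a key pair yet, generating one is quick (the key path and empty passphrase are assumptions):

#Generate an SSH key pair to provide to the CoreOS nodes
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519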

You need to download the vCenter trusted root certificates from your instance and import them to your Jump Host.

curl -O https://{vCenter_FQDN}/certs/download.zip

Then run the following to import them (Ubuntu uses .crt files, hence importing the win folder):

unzip download.zip
cp certs/win/* /usr/local/share/ca-certificates
update-ca-certificates

You will need an account to connect to vCenter with the correct permissions for the OpenShift-Install to deploy the cluster. If you do not want to use an existing account and permissions, you can use this PowerCLI script to create the roles with the correct privileges based on the Red Hat documentation.

If you are installing into VMware Cloud on AWS, like myself, you will also need to allow connectivity from your segments as follows:

  • Compute gateway
    • OCP Cluster network to the internet
    • OCP Cluster network to your SDDC Management Network
  • Management gateway
    • OCP Cluster network to ESXi – HTTPS traffic

DNS Records – You will need the following two records to be resolvable on your OCP Cluster network, in the same IP address space that your nodes will be deployed to (a quick verification sketch follows this list).

  • {clusterID}.{domain_name}
    • example: ocp46.veducate.local
  • *.apps.{clusterID}.{domain_name}
    • example: *.apps.ocp46.veducate.local
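
You can check that both records resolve from the jump host before kicking off the install (hostnames below assume the examples above):

#Both lookups should return addresses in the OCP cluster network
nslookup ocp46.veducate.local
#Any hostname under the wildcard should also resolve
nslookup test.apps.ocp46.veducate.local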

If your DNS is a Windows server, you can use this script here. Continue reading How to specify your vSphere virtual machine resources when deploying Red Hat OpenShift


Deploy a Cisco UCS system – Part 4 – Upgrading the Firmware

My previous posts in the series covered getting the Cisco UCS up and running and into production. It seems fitting to end the series with how to upgrade the firmware on the UCS, as you will find yourself needing to do this once the system is in production as well.

Note: Many thanks again to Rene for his simple post, which helped me through the steps.

Note 2: It’s also worth reading this short article on the dos and don’ts of UCS firmware updates, from a session held at Cisco Live 2014.

Covered in this post:

  • Pre-reqs
  • Getting the Firmware
  • Upload firmware into UCS Manager
  • Upgrading UCS Manager
  • Upgrading the Fabric Interconnects
  • Upgrading the Blade Servers
Pre-Reqs

Continue reading Deploy a Cisco UCS system – Part 4 – Upgrading the Firmware