
A guide to vSphere with Tanzu Kubernetes – Day 2 Operations for the VI Admin

Intro

This blog post is an accompaniment to the session “vSphere with Tanzu Kubernetes – Day 2 Operations for the VI Admin” created by myself and Simon Conyard, with special thanks to the VMware LiveFire Team for allowing us access to their lab environments to create the technical demo recordings.

You can see the full video with technical demos below (1hr 4 minutes). This blog post acts as a supplement to the recording.


This session recording was first shown at the Canada VMUG Usercon.

  • You can watch the VMUG session on-demand here.
  • This session is 44 minutes long (and is a little shorter than the one above).

The basic premise of the presentation was set at around a level-100/150 introduction to the Kubernetes world, marrying that to your knowledge of VMware vSphere as a VI Admin, and giving you an insight into most of the common areas you will need to think about when, all of a sudden, you are asked to deploy Tanzu Kubernetes and support a team of developers.

Help I need somebody I am a Tanzu Admin

Scene Setting

So why are we talking about VMware and Kubernetes? Isn’t VMware the place where I run those legacy things called virtual machines?

Essentially, the definition of an application has changed. On the left of the image below we have the typical application; we usually talk about the three-tier model (Web, App, DB).

However, the landscape is moving towards the right-hand side, with applications running more like distributed systems, where the data needed to function is being served, serviced, recorded, and presented not only by virtual machines but by Kubernetes services as well. Kubernetes introduces its own architectures and frameworks, and then finally there is the newest buzzword: serverless and functions.

Although you may not be seeing this change happen immediately in your workplace and infrastructure today, it is the direction of the industry.

Did you know vRealize Automation 8 is built on a modern, container-based microservices architecture?

The definition of an application has changed

VMware’s Kubernetes offerings

VMware has two core offerings:

  • vSphere Native
  • Multi-Cloud Aligned

Within vSphere there are two types of Kubernetes clusters that run natively within ESXi.

  • Supervisor Kubernetes cluster control plane for vSphere
  • Tanzu Kubernetes Cluster, sometimes also referred to as a “Guest Cluster.”

Supervisor Kubernetes Cluster

This is a special Kubernetes cluster that uses ESXi as its worker nodes instead of Linux.

This is achieved by integrating the Kubernetes worker agents, Spherelets, directly into the ESXi hypervisor. This cluster uses vSphere Pod Service to run container workloads natively on the vSphere host, taking advantage of the security, availability, and performance of the ESXi hypervisor.

The supervisor cluster is not a conformant Kubernetes cluster; by design, it uses Kubernetes to enhance vSphere. This ultimately provides you the ability to run pods directly on the ESXi hosts alongside virtual machines, and acts as the management layer for Tanzu Kubernetes Clusters.
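To make that concrete, a vSphere Pod is described with a completely standard Kubernetes Pod manifest; it is only the scheduling target (a vSphere Namespace on the Supervisor cluster) that makes it run directly on ESXi. A minimal sketch, where the namespace name web-team is a hypothetical example:

```yaml
# Standard Kubernetes Pod manifest. Applied to a vSphere Namespace on the
# Supervisor cluster, it runs as a vSphere Pod directly on an ESXi host.
# "web-team" is a hypothetical namespace name.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  namespace: web-team
spec:
  containers:
  - name: nginx
    image: nginx:1.21   # any OCI image reachable from the Supervisor cluster
    ports:
    - containerPort: 80
```

Once applied with kubectl, the pod also appears as an object in the vSphere inventory, which is part of the appeal for a VI Admin.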

Tanzu Kubernetes Cluster

To deliver Kubernetes clusters to your developers that are standards-aligned and fully conformant with upstream Kubernetes, you can use Tanzu Kubernetes Clusters (also referred to as “Guest” clusters).

A Tanzu Kubernetes Cluster is a Kubernetes cluster that runs inside virtual machines on the Supervisor layer and not on vSphere Pods.

As a fully upstream-compliant Kubernetes distribution, it is guaranteed to work with all your Kubernetes applications and tools. Tanzu Kubernetes Clusters in vSphere use the open-source Cluster API project for lifecycle management, which in turn uses the VM Operator to manage the VMs that make up the cluster.
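As a rough illustration of how a cluster is requested, the Supervisor cluster exposes a TanzuKubernetesCluster custom resource that is applied to a vSphere Namespace. The sketch below uses hypothetical names, and the VM class, storage class and Kubernetes version values are environment-specific:

```yaml
# Hypothetical TanzuKubernetesCluster definition, applied to a vSphere Namespace
# with kubectl. Class, storageClass and version values depend on your environment.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-dev-01
  namespace: web-team
spec:
  distribution:
    version: v1.18              # resolves to a Tanzu Kubernetes release in your content library
  topology:
    controlPlane:
      count: 1                  # single control plane node for a dev cluster
      class: best-effort-small  # VM class defined in vSphere
      storageClass: vsan-default-storage-policy
    workers:
      count: 3                  # worker VMs created and managed via Cluster API / VM Operator
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Applying this from the Supervisor cluster context creates the control plane and worker VMs under the namespace's resource pool in the vSphere inventory.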

Supervisor Cluster or Tanzu Kubernetes Cluster, which one should I choose to run my application?

Supervisor Cluster:

  • Run vSphere Pods natively on the ESXi hosts, taking advantage of the security, availability, and performance of the hypervisor
  • Tightly integrated with vSphere, acting as the management layer for Tanzu Kubernetes Clusters
  • Not fully conformant with upstream Kubernetes, by design

Tanzu Kubernetes Cluster:

  • Kubernetes clusters that are fully conformant with upstream Kubernetes
  • Flexible cluster lifecycle management independent of vSphere, including upgrades
  • Ability to add or customize open source & ecosystem tools like Helm Charts
  • Broad support for open-source networking technologies such as Antrea

For further information check out the Whitepaper – VMware vSphere with Kubernetes 101

vSphere Native Deployment Options

The above information covers running Kubernetes on your vSphere platform natively. You can deploy as follows:

VMware Cloud Foundation is an integrated full-stack solution, delivering customers a validated architecture that brings together vSphere, NSX for software-defined networking, vSAN for software-defined storage, and the vRealize Suite for cloud management automation and operations capabilities.

Deploying the vSphere Tanzu Kubernetes solution is as simple as a few clicks in a deployment wizard, providing you a fully integrated Kubernetes deployment into the VMware solutions.

Don’t have VCF? Then you can still enable Kubernetes yourself in your vSphere environment using vSphere 7.0 U1 and beyond. There will be extra steps for you to do this, and some of the integrations to the VMware software stack will not be automatic.

The below graphic summarises the deployment steps between both options discussed.

Enabling vSphere with Kubernetes

Multi-cloud Deployment Options

Building on top of the explanation of Tanzu Kubernetes Clusters earlier, Tanzu Kubernetes Grid (TKG) is the same easy-to-upgrade, conformant Kubernetes, with pre-integrated and validated components. It is a multi-cloud Kubernetes offering that you can run both on-premises on vSphere and in the public cloud on Amazon AWS and Microsoft Azure, fully supported by VMware.

  • Tanzu Kubernetes Grid (TKG) is the name used for the deployment option which is multi-cloud focused.
  • Tanzu Kubernetes Cluster (TKC) is the name used for a Tanzu Kubernetes deployment that is deployed and managed through a vSphere Namespace.
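For the multi-cloud route, TKG clusters are driven from a CLI rather than from a vSphere Namespace. The commands below are a hedged sketch based on the standalone tkg CLI of that era; exact syntax varies between TKG versions (newer releases use the tanzu CLI), and the cluster name and plan are placeholders:

```bash
# Hedged example using the standalone TKG CLI (syntax differs between TKG versions;
# later releases move to the tanzu CLI). Names and plan are placeholders.
tkg init --infrastructure vsphere               # stand up a TKG management cluster
tkg create cluster tkg-workload-01 --plan dev   # "dev" plan: small control plane + worker footprint
tkg get credentials tkg-workload-01             # merge a kubeconfig context for the new cluster
```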

tkg platforms

Introducing vSphere Namespaces

When enabling Kubernetes within a vSphere environment, a supervisor cluster is created within the VMware Data Center. This supervisor cluster is responsible for managing all Kubernetes objects within the VMware Data Center, including vSphere Namespaces. The supervisor cluster, communicating with ESXi, forms the Kubernetes control plane for enabled clusters.

sddc running vsphere with kubernetes

A vSphere Namespace is a logical object that is created on the vSphere Kubernetes supervisor cluster. This object tracks, and provides a mechanism to edit, the assignment of resources (compute, memory, storage and network) and access control to Kubernetes resources, such as containers or virtual machines.

You can provide the URL of the Kubernetes control plane to developers as required, where they can then deploy containers to the vSphere Namespaces for which they have permissions.
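A hedged sketch of what that looks like from the developer's side, assuming the kubectl vSphere plugin is installed; the server address, username, namespace and cluster name are placeholders:

```bash
# Hypothetical example: authenticate to the Supervisor cluster with the kubectl
# vSphere plugin, switch to a permitted vSphere Namespace, and deploy a workload.
kubectl vsphere login --server=192.168.10.2 --vsphere-username dev1@vsphere.local
kubectl config use-context web-team
kubectl apply -f my-app.yaml

# To target a Tanzu Kubernetes Cluster running in that namespace instead:
kubectl vsphere login --server=192.168.10.2 --vsphere-username dev1@vsphere.local \
  --tanzu-kubernetes-cluster-namespace web-team \
  --tanzu-kubernetes-cluster-name tkc-dev-01
```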

Resources and permissions are defined on a vSphere Namespace both for Kubernetes containers consuming resources directly via vSphere, and for Virtual Machines configured and provisioned to operate Tanzu Kubernetes Grid (TKG).

Access control

For a Virtual Administrator, the way access can be assigned to the various Tanzu elements within the Virtual Infrastructure is very similar to any other logical object:

  • Create Roles
  • Assign Permissions to the Role
  • Allocate the Role to Groups or Individuals
  • Link the Group or Individual to inventory objects

With Tanzu, those inventory objects include Namespaces and Resources.

It is also worth highlighting that if a Virtual Administrator grants administrative permissions on a Kubernetes cluster, this is similar to granting ‘root’ or ‘administrator’ access to a virtual machine. An individual with these permissions could create objects and grant permissions themselves, outside of the virtual infrastructure.
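Inside a Tanzu Kubernetes Cluster itself, finer-grained access is handled with standard Kubernetes RBAC rather than vCenter roles. A minimal sketch, using the built-in edit ClusterRole and a hypothetical group and namespace name, which avoids handing out cluster-admin:

```yaml
# Standard Kubernetes RBAC inside a Tanzu Kubernetes Cluster: bind the built-in
# "edit" ClusterRole to a group, scoped to one namespace, instead of cluster-admin.
# The group and namespace names are hypothetical examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: web-app
subjects:
- kind: Group
  name: dev-team@vsphere.local
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit          # built-in role: manage workloads, but cannot change RBAC
  apiGroup: rbac.authorization.k8s.io
```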

Documentation

Access and RBAC

Storage

Rather than overdo the storage piece, I’m going to link to some fantastic write-ups by Cormac Hogan.

Essentially, VMware has produced a Container Storage Interface (CSI) driver that runs inside your Tanzu cluster and maps back to vCenter to enable the consumption of storage within the Kubernetes layer. You can take advantage of existing and new vSphere Storage Policies and present them to your developers for consumption.
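From the developer's point of view this surfaces as a Kubernetes StorageClass per published policy, which they consume with an ordinary PersistentVolumeClaim. A minimal sketch, where the storage class name is a hypothetical example of a published vSphere Storage Policy:

```yaml
# Ordinary PersistentVolumeClaim; the storage class name corresponds to a vSphere
# Storage Policy published to the namespace (hypothetical name shown). The vSphere
# CSI driver provisions the backing virtual disk, visible from vCenter.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: vsan-default-storage-policy
  resources:
    requests:
      storage: 10Gi
```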

Storage CNS CSI

Networking

So, this is where the options can differ greatly depending on which offering you are deploying and what feature set you want to support and offer to your developers. The obvious standout here is using the NSX-T Network Container Plugin (NCP). This is provided as the default option when using VCF, but if you already have NSX-T in your vSphere environment, you can also integrate it during deployment using the vSphere with Tanzu route.

Networking

Image Control

Just like content libraries are used by the Virtual Administrator to control what templates are deployed into the virtual environment, Harbor can be used as an image registry to control the images that are available to be deployed into a Kubernetes environment.

To step back for a second: a container image registry is a service that stores container images, and a container image is the pre-configured application, function or ‘thing’ that a developer wants to use.

Harbor is an open-source image registry that secures images with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. The open-source project is available to deploy as needed, or, if working with VMware Cloud Foundation, there is an edition deployable with a single click which is fully integrated into the VMware stack.
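Consumption is the same as any OCI registry: developers push images into a Harbor project and then reference them in their manifests. A hedged sketch, with the registry hostname and project name as placeholders:

```bash
# Hypothetical example: publish an image to a Harbor project and reference it.
# Registry hostname and project name are placeholders for your environment.
docker login harbor.corp.local
docker tag myapp:1.0 harbor.corp.local/web-team/myapp:1.0
docker push harbor.corp.local/web-team/myapp:1.0

# Developers then reference the Harbor-hosted image in their manifests, e.g.:
#   image: harbor.corp.local/web-team/myapp:1.0
```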

Documentation

Guides

Image Control

Scale and Resource Management

For the Virtual Administrator, the hierarchy and type of Kubernetes resource is important. Differentiating between the different resource types is key to understanding how to manage or scale those resources. When managing Kubernetes objects within a vSphere Namespace, adding resources might simply be achieved by re-configuring limits. However, when managing Tanzu Kubernetes Clusters, adding resources could require deploying new Virtual Machines.
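Two short, hedged examples of that difference, with all names as placeholders: limits on a vSphere Namespace surface as ResourceQuota objects you can inspect, while a Tanzu Kubernetes Cluster is scaled by changing the desired worker count on its TanzuKubernetesCluster resource and letting Cluster API reconcile the VMs.

```bash
# Hedged examples; namespace and cluster names are placeholders.

# 1. Kubernetes objects in a vSphere Namespace: limits set on the namespace in
#    vCenter surface as ResourceQuota objects, which you can inspect with kubectl.
kubectl get resourcequota -n web-team

# 2. Tanzu Kubernetes Cluster: scale by raising the desired worker count on the
#    TanzuKubernetesCluster resource; Cluster API then adds the extra worker VMs.
kubectl patch tanzukubernetescluster tkc-dev-01 -n web-team --type merge \
  -p '{"spec":{"topology":{"workers":{"count":5}}}}'
```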

Documentation

Resource Management

Monitoring

As a vSphere administrator, I’m going to assume that you have heard of vRealize Operations. For a few versions now, the product has had the ability to monitor Kubernetes environments via a management pack produced by VMware.

This management pack gives you the infrastructure-up view from your vSphere platform into your Kubernetes environment, right down to the services, pods, and containers layers.

vROps k8s management pack

Fleet Management

Tanzu Mission Control (TMC) is a cloud offering which gives you a single point of control, monitoring, and policy management, regardless of the Kubernetes distribution and its location (e.g. Tanzu Kubernetes Grid, OpenShift Container Platform, Azure Kubernetes Service, to name but a few).

Key Capabilities:

  • Manage the Kubernetes cluster lifecycle, from deployment through day-2 operations
  • Attach Clusters for centralized operations and management
  • Centralized policy management
  • Global visibility for diagnosing and troubleshooting issues with your Kubernetes clusters
  • Inspection run-books to validate the configuration of your clusters
  • Data protection using Project Velero

Fleet Management

You can follow the links below to blogs I’ve written on TMC.

  • Getting Started Tanzu Mission Control
      • TMC Resource Hierarchy
      • Creating a Cluster Group
      • Attaching a cluster to Tanzu Mission Control
      • Viewing your Cluster Objects
  • Cluster Inspections
      • What Inspections are available
      • Performing Inspections
      • Viewing Inspections
  • Workspaces and Policies
      • Creating a workspace
      • Creating a managed Namespace
      • Policy Driven Cluster Management
      • Creating Policies

If you want to try out TMC, then head over and check out the hands-on-lab.

Footnotes

How to Evaluate

I want to try this immediately, what do I do? Well, I always point my customers straight at the Hands-on Labs; they are free and simple to use.

Hands-On-Labs

Blog – In-Product Evaluation of vSphere with Tanzu

VMware Fling – Demo Appliance for Tanzu Kubernetes Grid

  • A virtual appliance that pre-bundles all required dependencies to help customers learn and deploy standalone Tanzu Kubernetes Grid (TKG) clusters, running on either VMware Cloud on AWS or a vSphere 6.7 Update 3 environment, for Proof of Concept, demo and dev/test purposes.

Home-Lab

There are other items we did not have time to cover in the presentation, but which you should check out going forward.

I’d just like to conclude with a thank you to Simon Conyard for being a co-host and co-author of this recording and blog post, to the VMware LiveFire Team, and to the number of people whom I asked to review and critique the blog post!

Regards

Dean Lewis
Simon Conyard
