You will need an account with the correct permissions to connect to vCenter so that openshift-install can deploy the cluster. If you do not want to use an existing account and permissions, you can use this PowerCLI script to create the roles with the correct privileges, based on the Red Hat documentation.
Working with Red Hat OpenShift on vSphere, I’m really starting to understand the main infrastructure components and how everything fits together.
Next up was understanding how to control the cluster size after the initial deployment. With Red Hat OpenShift, there are some basic concepts we need to understand first, before we jump into the technical how-tos below in this blog.
In this blog I will cover the following:
- Understanding the concepts behind controlling Machines in OpenShift
- Editing your MachineSet to control your Virtual Machine Resources
- Editing your MachineSet to scale your cluster manually
- Deleting a node
- Configuring ClusterAutoscaler to automatically scale your environment
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
The Machine API performs all node host provisioning management actions after the cluster is installed, providing dynamic provisioning on top of your VMware vSphere platform (and other public/private cloud platforms).
The two primary resources are:
Machine: An object that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
MachineSet: Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute need.
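To make that concrete, below is a rough sketch of what a vSphere worker MachineSet might look like; the cluster ID, object names, template, and vCenter workspace values are all placeholders for whatever openshift-install generated in your environment (you can view the real objects with `oc get machineset -n openshift-machine-api`).

```yaml
# Hypothetical vSphere MachineSet sketch - names and vCenter values are placeholders.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ocp4-abc12-worker                # <infrastructure_id>-<role>
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: ocp4-abc12
spec:
  replicas: 3                            # edit this field to scale the pool up or down
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ocp4-abc12
      machine.openshift.io/cluster-api-machineset: ocp4-abc12-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ocp4-abc12
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ocp4-abc12-worker
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          credentialsSecret:
            name: vsphere-cloud-credentials
          numCPUs: 4
          memoryMiB: 16384
          diskGiB: 120
          template: ocp4-abc12-rhcos     # RHCOS VM template cloned for each machine
          network:
            devices:
              - networkName: "VM Network"
          workspace:
            datacenter: Datacenter
            datastore: Datastore01
            folder: /Datacenter/vm/ocp4-abc12
            server: vcenter.example.com
```

Scaling the cluster manually is then just a matter of changing `replicas`, either by editing the object or with `oc scale machineset ocp4-abc12-worker --replicas=5 -n openshift-machine-api`.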
These custom resources add capabilities to your OpenShift cluster:
MachineAutoscaler: This resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator.
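For reference, a MachineAutoscaler definition is small; the sketch below is hedged, assuming a worker MachineSet named ocp4-abc12-worker (a placeholder) in the openshift-machine-api namespace.

```yaml
# Hypothetical MachineAutoscaler targeting one worker MachineSet.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1                  # lower boundary for this MachineSet
  maxReplicas: 6                  # upper boundary for this MachineSet
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: ocp4-abc12-worker
```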
ClusterAutoscaler: This resource is based on the upstream Cluster Autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the MachineSet API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPUs. You can set the priority so that the cluster prioritizes pods and new nodes are not brought online for less-important pods. You can also set the scaling policy so that, for example, you can scale up nodes but not scale them down.
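A hedged ClusterAutoscaler sketch is shown below; the resource is cluster-scoped and must be named "default", and all of the limits and timings here are illustrative values rather than recommendations.

```yaml
# Hypothetical ClusterAutoscaler with cluster-wide limits and a scale-down policy.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  podPriorityThreshold: -10       # ignore pods below this priority when deciding to scale up
  resourceLimits:
    maxNodesTotal: 12             # total node count ceiling, including control plane
    cores:
      min: 8
      max: 96
    memory:
      min: 32                     # GiB
      max: 512
  scaleDown:
    enabled: true                 # set to false to allow scale up but never scale down
    delayAfterAdd: 10m
    delayAfterDelete: 5m
    delayAfterFailure: 30s
    unneededTime: 10m
```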
MachineHealthCheck: This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine. You can read more here about this technology preview feature in OCP 4.6.
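As a sketch (remembering this is a technology preview feature in OCP 4.6), a MachineHealthCheck might look like the example below; the label selector and timeouts are placeholders for your environment.

```yaml
# Hypothetical MachineHealthCheck covering the worker machines.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: worker-healthcheck
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: Unknown
      timeout: 300s
  maxUnhealthy: 40%               # stop remediation if too many machines are unhealthy at once
```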
Editing your MachineSet to control your Virtual Machine Resources
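As a hedged sketch, the virtual hardware of the worker VMs lives in the vSphere provider section of the MachineSet, under spec.template.spec.providerSpec.value, so controlling the VM resources is a matter of editing these fields (for example with `oc edit machineset <name> -n openshift-machine-api`). As far as I understand, the change only affects Machines created after the edit; existing VMs are not resized in place, so you scale the MachineSet down and back up (or delete the machines) to roll out replacements.

```yaml
# Fragment of spec.template.spec.providerSpec.value in a vSphere MachineSet.
# These illustrative values size the virtual hardware for newly created workers.
providerSpec:
  value:
    numCPUs: 8          # vCPUs per worker VM
    memoryMiB: 32768    # memory per worker VM, in MiB
    diskGiB: 120        # provisioned disk size, in GiB
```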
This blog post is an accompaniment to the session “vSphere with Tanzu Kubernetes – Day 2 Operations for the VI Admin” created by myself and Simon Conyard, with special thanks to the VMware LiveFire Team for allowing us access to their lab environments to create the technical demo recordings.
You can see the full video with technical demos below (1hr 4 minutes). This blog post acts as a supplement to the recording.
This session is 44 minutes long (and is a little shorter than the one above).
The basic premise of the presentation was set at around a level-100/150 introduction to the Kubernetes world, marrying that to your knowledge of VMware vSphere as a VI Admin, and giving you an insight into the most common areas you will need to think about when you are suddenly asked to deploy Tanzu Kubernetes and support a team of developers.
So why are we talking about VMware and Kubernetes? Isn’t VMware the place where I run those legacy things called virtual machines?
Essentially, the definition of an application has changed. On the left of the image below, we have the typical application: the classic three-tier model (Web, App, DB).
However, the landscape is moving towards the right-hand side, with applications running more like distributed systems, where the data you need is being served, serviced, recorded, and presented not only by virtual machines but by Kubernetes services as well. Kubernetes introduces its own architectures and frameworks, and finally there is the new buzzword: serverless and functions.
Although you may not see this change happening in your workplace and infrastructure today, it is the direction of the industry.
Within vSphere there are two types of Kubernetes clusters that run natively within ESXi.
- Supervisor Kubernetes cluster, the Kubernetes control plane for vSphere
- Tanzu Kubernetes Cluster, sometimes also referred to as a “Guest Cluster.”
Supervisor Kubernetes Cluster
This is a special Kubernetes cluster that uses ESXi as its worker nodes instead of Linux.
This is achieved by integrating the Kubernetes worker agents, Spherelets, directly into the ESXi hypervisor. This cluster uses vSphere Pod Service to run container workloads natively on the vSphere host, taking advantage of the security, availability, and performance of the ESXi hypervisor.
To learn more about how vSphere delivers Kubernetes natively
The supervisor cluster is not a conformant Kubernetes cluster; by design, it uses Kubernetes to enhance vSphere. This ultimately provides you the ability to run pods directly on the ESXi host alongside virtual machines, and to act as the management layer for Tanzu Kubernetes Clusters.
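To make that tangible, the sketch below is a completely ordinary Kubernetes pod manifest; applied to a vSphere Namespace on the Supervisor cluster (the namespace name demo is a hypothetical example), it runs as a vSphere Pod directly on an ESXi host.

```yaml
# Plain Kubernetes pod manifest; on the Supervisor cluster it is instantiated
# as a vSphere Pod on an ESXi host. The namespace "demo" is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  namespace: demo
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      ports:
        - containerPort: 80
```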
Tanzu Kubernetes Cluster
To deliver Kubernetes clusters to your developers that are standards-aligned and fully conformant with upstream Kubernetes, you can use Tanzu Kubernetes Clusters (also referred to as “Guest” clusters).
A Tanzu Kubernetes Cluster is a Kubernetes cluster that runs inside virtual machines on the Supervisor layer and not on vSphere Pods.
As a fully upstream-compliant Kubernetes it is guaranteed to work with all your Kubernetes applications and tools. Tanzu Kubernetes Clusters in vSphere use the open source Cluster API project for lifecycle management, which in turn uses the VM Operator to manage the VMs that make up the cluster.
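To illustrate how a developer asks the Supervisor cluster for one of these, the sketch below is a TanzuKubernetesCluster manifest; the namespace, VM class, storage class, and version string are placeholders that would need to match what is actually published in your environment.

```yaml
# Hypothetical TanzuKubernetesCluster request submitted to a vSphere Namespace.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo
  namespace: demo                      # hypothetical vSphere Namespace
spec:
  distribution:
    version: v1.18                     # resolved to a full Tanzu Kubernetes release
  topology:
    controlPlane:
      count: 3
      class: best-effort-small         # VM class sizes the node VMs
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```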
Supervisor Cluster or Tanzu Kubernetes Cluster, which one should I choose to run my application?
Running your application on the Supervisor cluster using the vSphere Pod Service:
- Has additional capabilities that are inherent in the vSphere environment and are available to Kubernetes via the kubectl command
- Provides the ability to manage containers just as you would manage virtual machines
- Provides stronger security and resource isolation due to the use of vSphere Pods
VMware Cloud Foundation is an integrated full stack solution, delivering customers a validated architecture bringing together vSphere, NSX for software defined networking, vSAN for software defined storage, and the vRealize Suite for Cloud Management automation and operation capabilities.
Deploying the vSphere Tanzu Kubernetes solution is as simple as a few clicks in a deployment wizard, providing you a fully integrated Kubernetes deployment into the VMware solutions.
Don’t have VCF? Then you can still enable Kubernetes yourself in your vSphere environment using vSphere 7.0 U1 and beyond. There will be extra steps for you to do this, and some of the integrations to the VMware software stack will not be automatic.
The below graphic summarises the deployment steps between both options discussed.
Building on top of the explanation of Tanzu Kubernetes Clusters earlier, Tanzu Kubernetes Grid (TKG) is the same easy-to-upgrade, conformant Kubernetes, with pre-integrated and validated components. It is a multi-cloud Kubernetes offering that you can run both on-premises on vSphere and in the public cloud on Amazon Web Services and Microsoft Azure, fully supported by VMware.
Tanzu Kubernetes Grid (TKG) is the name used for the deployment option which is multi-cloud focused.
Tanzu Kubernetes Cluster (TKC) is the name used for a Tanzu Kubernetes cluster deployed and managed via a vSphere Namespace.
Introducing vSphere Namespaces
When enabling Kubernetes within a vSphere environment, a supervisor cluster is created within the VMware Data Center. This supervisor cluster is responsible for managing all Kubernetes objects within the VMware Data Center, including vSphere Namespaces. The supervisor cluster, communicating with ESXi, forms the Kubernetes control plane for enabled clusters.
A vSphere Namespace is a logical object that is created on the vSphere Kubernetes supervisor cluster. This object tracks and provides a mechanism to edit the assignment of resources (Compute, Memory, Storage & Network) and access control to Kubernetes resources, such as containers or virtual machines.
You can provide the URL of the Kubernetes control plane to developers as required, where they can then deploy containers to the vSphere Namespaces for which they have permissions.
Resources and permissions are defined on a vSphere Namespace for both Kubernetes containers consuming resources directly via vSphere and virtual machines configured and provisioned to operate Tanzu Kubernetes Grid (TKG) clusters.
For a Virtual Administrator, the way access is assigned to the various Tanzu elements within the virtual infrastructure is very similar to any other logical object:
- Assign Permissions to the Role
- Allocate the Role to Groups or Individuals
- Link the Group or Individual to inventory objects
With Tanzu, those inventory objects include Namespaces and Resources.
What I also wanted to highlight is that if a Virtual Administrator grants administrative permissions to a Kubernetes cluster, this is similar to granting ‘root’ or ‘administrator’ access to a virtual machine. An individual with these permissions could create resources and grant permissions themselves, outside of the virtual infrastructure.
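To illustrate the point, the sketch below is the kind of object someone with cluster-admin rights can create inside a Tanzu Kubernetes Cluster: a ClusterRoleBinding handing cluster-admin to another group, entirely outside vCenter's permission model (the group name here is hypothetical).

```yaml
# Hypothetical ClusterRoleBinding created inside a Tanzu Kubernetes Cluster,
# granting cluster-admin to a group without any involvement from vCenter.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-team-admins
subjects:
  - kind: Group
    name: dev-team                     # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```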
I had someone query the below metrics, and the answer, although easy to assume, is not clearly written down, and within vROps you don’t get a description either, so I thought I’d also publish it in case any inquisitive minds go googling.
Guest|Page In Rate
The rate at which the Guest OS brings memory back from disk to DIMM, per second. Basically, the rate of reads going through the paging/cache system.
It includes not just swapfile I/O, but cacheable reads as well (double pages/s). A page that was paged out earlier has to be brought back before it can be used. This creates a performance issue, as the application waits longer; disk is much slower than RAM.
The unit is the number of pages, not MB. It is not possible to convert reliably due to the mixed use of Large Pages (2 MB) and standard Pages (4 KB); for example, 1,000 pages could mean roughly 4 MB of 4 KB pages or roughly 2 GB of 2 MB large pages.
A process can have concurrent mixed usage of Large and non-Large page in Windows. The page size isn’t a system-wide setting that all processes use. The same is likely true for Linux Huge Pages.
Pages Input/sec counter
Pages Input/sec is the rate at which pages are read from disk to resolve hard page faults. Hard page faults occur when a process refers to a page in virtual memory that is not in its working set or elsewhere in physical memory, and must be retrieved from disk. When a page is faulted, the system tries to read multiple contiguous pages into memory to maximize the benefit of the read operation. Compare the value of Memory\Pages Input/sec to the value of Memory\Page Reads/sec to determine the average number of pages read into memory during each read operation.
Guest|Page Out Rate
The opposite of the above. This is not as important as the above; just because a block of memory is moved to disk, that does not mean the application experiences a memory problem. In many cases, the page that was moved out is an idle page. Windows does not page out any Large Pages.
Page Output/sec counter
Pages Output/sec is the rate at which pages are written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows writes more pages back to disk to free up space when physical memory is in short supply. This counter shows the number of pages, and can be compared to other counts of pages, without conversion.
Pages Swapped Out counter
Page in/out rate includes pages written/read to/from swap file as well as other system files.
It is important to remember these metrics are populated by pulling the data from the performance counters of the Guest OS, hence the need for VMware Tools. These metrics should not be confused with virtual machine metrics, which are based on the activity of the VM at the vSphere level and therefore do not take into account what is going on inside the guest itself.
Thanks to Iwan “E1” Rahabok’s blog post here for helping me figure this out as well.