Tag Archives: vSphere

Learn Kubevirt - migrating from VMware - header image

Learn KubeVirt: Deep Dive for VMware vSphere Admins

As a vSphere administrator, you’ve built your career on understanding infrastructure at a granular level: datastores, DRS clusters, vSwitches, and HA configurations. You’re used to managing VMs at scale. Now, you’re hearing about KubeVirt, and while it promises Kubernetes-native VM orchestration, it comes with a caveat: Kubernetes fluency is required. This post is designed to bridge that gap, not only explaining what KubeVirt is, but mapping its architecture, operations, and concepts directly to vSphere terminology and experience. By the end, you’ll have a mental model of KubeVirt that relates to your existing knowledge.

What is KubeVirt?

KubeVirt is a Kubernetes extension that allows you to run traditional virtual machines inside a Kubernetes cluster using the same orchestration primitives you use for containers. Under the hood, it leverages KVM (Kernel-based Virtual Machine) and QEMU to run the VMs (more on that further down).

Kubernetes doesn’t replace the hypervisor; it orchestrates it. Think of Kubernetes as the vCenter equivalent here: managing the control plane, networking, scheduling, and storage interfaces for the VMs, with KubeVirt as a plugin that adds VM resource types to this environment.
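To make that concrete, here is a minimal sketch of what a VirtualMachine definition could look like once KubeVirt is installed. The name, sizing, and container disk image below are illustrative placeholders rather than anything taken from this post, so adjust them for your environment.

```bash
# Minimal KubeVirt VirtualMachine applied with kubectl.
# All names and sizes are illustrative placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true              # desired power state, roughly "powered on" in vCenter terms
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example container disk image
EOF

# The controller then creates a VirtualMachineInstance (the running VM)
kubectl get vm,vmi demo-vm
```

Setting spec.running to true is the declarative equivalent of powering the VM on; flipping it to false powers it off.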

Tip: KubeVirt is under active development; always check the latest documentation for feature support.

Core Building Blocks of KubeVirt, Mapped to vSphere

| KubeVirt Concept | vSphere Equivalent | Description |
| --- | --- | --- |
| VirtualMachine (CRD) | VM Object in vCenter | The declarative spec for a VM in YAML. It defines the template, lifecycle behaviour, and metadata. |
| VirtualMachineInstance (VMI) | Running VM Instance | The live instance of a VM, created and managed by Kubernetes. Comparable to a powered-on VM object. |
| virt-launcher | ESXi Host Process | A pod wrapper for the VM process. Runs QEMU in a container on the node. |
| PersistentVolumeClaim (PVC) | VMFS Datastore + VMDK | Used to back VM disks. For live migration, either ReadWriteMany PVCs or raw block-mode volumes are required, depending on the storage backend. |
| Multus + CNI | vSwitch, Port Groups, NSX | Provides networking to VMs. Multus enables multiple network interfaces. CNIs map to port groups. |
| Kubernetes Scheduler | DRS | Schedules pods (including VMIs) across nodes. Lacks fine-tuned VM-aware resource balancing unless extended. |
| Live Migration API | vMotion | Live migration of VMIs between nodes with zero downtime. Requires shared storage and certain flags (see the example below this table). |
| Namespaces | vApp / Folder + Permissions | Isolation boundaries for VMs, including RBAC policies. |
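As a rough illustration of the Live Migration row above, here is how a migration can be triggered, either imperatively with virtctl or declaratively with a VirtualMachineInstanceMigration object. This is a sketch that assumes live migration is enabled in your cluster (it typically needs shared ReadWriteMany storage, as noted in the table); the VM name demo-vm is a placeholder.

```bash
# Imperative: ask KubeVirt to move the running VMI to another node
# (the Kubernetes scheduler picks the target node)
virtctl migrate demo-vm

# Declarative equivalent of the same action
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
spec:
  vmiName: demo-vm
EOF

# Watch the migration object for phase and progress
kubectl get virtualmachineinstancemigrations migrate-demo-vm -o yaml
```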

KVM + QEMU: The Hypervisor Stack

Continue reading Learn KubeVirt: Deep Dive for VMware vSphere Admins

VMware Change Block Tracking Issue - Header

vSphere data loss bug returns – CBT issues in vSphere ESXi 8.0 Update 2

The Issue

I keep saying it: there are no new ideas in technology, just re-hashes of old ones. That is also true for VMware and their data loss issues.

The vSphere-based change block tracking (CBT) bug is back! I think I wrote 5 articles on this back in 2014/2015 with explanations and fixes!

Veeam reported this at the start of week commencing 11th December 2023, with VMware confirming the issue by the end of the same week.

The Cause

Change block tracking is the feature used to see which blocks of data have changed since a known point in time, to enable backup software to capture only the incremental changes.

If this feature fails, you could lose data in your backups, as the backup software doesn’t know which blocks to protect.

As per VMware:

CBT's QueryChangedDiskAreas may lose some data changed on the disk after disk is hot-extended.
It only happens on ESXi 8.0u2.

The Fix/Workaround

Directly from VMware’s newly published KB article; it took them only a few days to confirm this behaviour after Veeam noticed it at the start of the week!

  • Resolution
    • Unfortunately, there is no fix available for this bug at this time. However, you can use the following workaround until a fix is released.
  • Workaround
    1. Reset CBT after the disk is hot-extended, then take a full backup immediately.
      This does not fix existing backups, but it ensures the new ones are good.
    2. Alternatively, extend the disk while the VM is offline.

You cannot fix your existing incremental backups if they have been affected; if they missed the correct data to back up, it’s been missed! But you can run an Active Full backup to capture everything. This is certainly the case for Veeam; for other backup vendors you’ll need to double-check!

How do I reset Change Block Tracking?

If you are using Veeam, you can just perform an Active Full backup, and ensure the reset CBT option is configured. This is enabled by default.

If you aren’t using Veeam, then the following will be your next steps.

To reset Change Block Tracking, follow the steps below, taken from this older VMware KB article from the last time this was an issue. VMware may update this article or produce another one now that this recent bug has been found.

  • Find your VM in the vCenter Client
    • Power the VM off
    • Click the Options tab, select the Advanced section and then click Configuration Parameters.
  • Disable CBT for the virtual machine by setting the ctkEnabled value to false.
  • If you need to do this for specific virtual disks attached to your virtual machine
    • Disable CBT by configuring the scsix:x.ctkEnabled value for each attached virtual disk to false. (scsix:x is the SCSI controller and SCSI device ID of your virtual disk.)
  • Ensure there are no snapshot files (.delta.vmdk) present in the virtual machine’s working directory. For more information, see Determining if there are leftover delta files or snapshots that VMware vSphere or Infrastructure Client cannot detect (1005049).
  • Delete any -ctk.vmdk files within the virtual machine’s working directory.

Now power on your virtual machine.

Depending on your backup software vendor, you may need to manually re-enable Change Block Tracking; you can find a full list of steps and considerations in this VMware KB article. It’s essentially: power down the VM, set the value back to true in Configuration Parameters, and power it back on.
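If you would rather script this than click through the vCenter UI, the same sequence can be expressed with the open-source govc CLI. Treat this as a hedged sketch: the VM name, SCSI IDs, and datastore paths are placeholders, and you should verify the flags against your govc version and your backup vendor’s guidance before using it.

```bash
# Power the VM off before changing CBT settings
govc vm.power -off "my-vm"

# Disable CBT for the VM, and per attached virtual disk where needed
govc vm.change -vm "my-vm" -e "ctkEnabled=false"
govc vm.change -vm "my-vm" -e "scsi0:0.ctkEnabled=false"

# Check the VM's working directory for leftover snapshot/CBT files,
# then remove any -ctk.vmdk files that remain
govc datastore.ls -ds "datastore1" "my-vm/"
govc datastore.rm -ds "datastore1" "my-vm/my-vm-ctk.vmdk"

# Power the VM back on; if your backup vendor needs CBT re-enabled manually,
# repeat the power-off/change/power-on cycle with the values set back to true
govc vm.power -on "my-vm"
```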

Summary

Let’s hope VMware produces a fix for this quickly. I remember they had this issue in vSphere 5.5 and 6.0, and some fixes didn’t resolve the issue; it was a pain being a consultant having to install fixes at customers’ sites.

It’s good that VMware have only taken a short amount of time to validate this bug and publish something officially about it!

 

Regards

Dean Lewis

Tanzu Blog Logo Header

Tanzu Kubernetes Grid – Upgrading a Management and Workload Cluster deployed to vSphere

In this blog post, I am going to walk through how to upgrade both your Tanzu Kubernetes Grid Management and Workload clusters. I’ll cover the Tanzu CLI options, as well as how you can leverage the features of Tanzu Mission Control for upgrades.

For my example use cases, I’ll be upgrading from TKG 1.4.2 to 1.5.4. Although the process should be similar for other upgrade paths, I do recommend you consult the official documentation before attempting any upgrade in case there are any changes.

Caution: VMware recommends not installing or upgrading to Tanzu Kubernetes Grid v1.5.0-v1.5.3, due to a bug in the versions of etcd in the versions of Kubernetes used by Tanzu Kubernetes Grid v1.5.0-v1.5.3. Tanzu Kubernetes Grid v1.5.4 resolves this problem by incorporating a fixed version of etcd. For more information, see Resolved Issues in the TKG v1.5 Release Notes.

Pre-requisites

To upgrade Tanzu Kubernetes Grid (TKG), you download and install the new version of the Tanzu CLI on the machine that you use as the bootstrap machine. You must also download and import the new base image templates or VM images, depending on whether you are upgrading clusters that you previously deployed to vSphere, Amazon EC2, or Azure.
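Once those prerequisites are in place, the CLI portion of the upgrade boils down to a handful of commands. Treat the following as a rough sketch rather than the full procedure; the workload cluster name is a placeholder, and the exact flags can differ between TKG versions, so cross-check against the official upgrade documentation.

```bash
# Confirm the bootstrap machine is now running the new Tanzu CLI
tanzu version

# Upgrade the management cluster first (it must run the latest Kubernetes version)
tanzu management-cluster upgrade

# Then list the workload clusters and upgrade each one in turn
tanzu cluster list
tanzu cluster upgrade my-workload-cluster
```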

Download the Tanzu CLI and Kubernetes OVAs

On the VMware Customer Portal, download both the Tanzu CLI and the OVA files as necessary.

As I’ve highlighted in the screenshot below, your Management Cluster will always need to run the latest Kubernetes version.

Tanzu Kubernetes Grid - Upgrade - Download Product files - Tanzu CLI - Kubernetes OVAS

Upload Kubernetes OVAs to vCenter

Continue reading Tanzu Kubernetes Grid – Upgrading a Management and Workload Cluster deployed to vSphere

Red Hat OpenShift + VMware Header

OpenShift 4.10 on VMware – Introducing the out-of-the-box vSphere CSI Driver installation

OpenShift Container Platform defaults to using an in-tree (non-CSI) plug-in to provision vSphere storage.

What’s New?

In OpenShift 4.9, the out-of-the-box installation of the vSphere CSI driver was tech preview. This has now moved to GA!

This means that during an installer-provisioned infrastructure (IPI) cluster bring-up, the vSphere CSI driver will be enabled.

This is part of the future “journey” of OpenShift to CSI drivers. As you may be aware, the original “in-tree” storage driver implementations will be removed from future versions of Kubernetes, making way for CSI drivers, a better storage integration implementation.

OpenShift Storage - Journey to CSI

Therefore, the Red Hat team has been working with the upstream native vSphere CSI driver, which is open source, and with the VMware storage team, to integrate it into the OpenShift installation.

The aim here is two-fold: to take further advantage of the VMware platform, and to enable CSI Migration, so that it is easier for customers to migrate their existing persistent data from in-tree provisioned storage constructs to CSI-provisioned constructs.
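To illustrate the difference, the in-tree plug-in provisions volumes via the kubernetes.io/vsphere-volume provisioner, whereas the CSI driver uses csi.vsphere.vmware.com. The StorageClass below is an illustrative sketch with placeholder names and parameters, not the exact object the installer creates for you.

```bash
# Example StorageClass backed by the vSphere CSI driver; the name and the
# storage policy are placeholders, adjust them for your environment.
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-example
provisioner: csi.vsphere.vmware.com        # in-tree equivalent: kubernetes.io/vsphere-volume
parameters:
  storagepolicyname: "example-vm-storage-policy"   # optional SPBM policy
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF

# PVCs that reference this StorageClass will be provisioned by the CSI driver
oc get storageclass
```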

How do I enable this?

Continue reading OpenShift 4.10 on VMware – Introducing the out-of-the-box vSphere CSI Driver installation

Red Hat OpenShift + VMware Header

OpenShift on VMware – Integrating with vSphere Storage, Networking and Monitoring.

I was honoured to be a guest on the “Ask an OpenShift Admin” webinar recently, where I had the chance to talk about OpenShift on VMware, always a hot topic, and how we co-innovate and work together on solutions.

You can watch the full session below. Keep reading to see the content I didn’t get to cover, which I’ve captured in a separate recording.

Ask an OpenShift Admin (Ep 54): OpenShift on VMware and the vSphere Kubernetes Drivers Operator

However, I had a number of topics and demos planned that we never got time to visit, so here is the full content I had prepared.

Some of the areas we covered in this webinar and my additional session were:

  • Answering questions live from the viewers (anything on the table)
  • OpenShift together with VMware
  • Common issues and best practices for deploying OpenShift on VMware vSphere
  • Consuming your vSphere Storage in OpenShift
  • Integrating with the VMware Network stack
  • Infrastructure Up Monitoring

OpenShift on VMware – Integrating with vSphere Storage, Networking and Monitoring

Resources

Regards

Dean Lewis