Category Archives: Kubernetes


Tanzu Mission Control – Upgrading attached Tanzu Kubernetes Grid Clusters fails with error “updates to immutable fields are not allowed”

The Issue

When trying to upgrade an attached Tanzu Kubernetes Grid cluster via Tanzu Mission Control (TMC), whether it was created by a Tanzu Management Cluster or via the Tanzu Kubernetes Grid Service (vSphere with Tanzu), the console gives you an error message similar to:

API Error: Failed to upgrade cluster: (target=mc:01G4BGAVKHHB6C3JJ5R0WA44NM, intentId=01G4CMP025ZHEBQ000E4SM996H): admission webhook "default.validating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: updates to immutable fields are not allowed (invalid argument)
I’ve captured some screenshots below of the process.
Tanzu Mission Control - Upgrade vSphere with Tanzu Cluster - Tanzu Kubernetes Grid Service
Tanzu Mission Control - Upgrade vSphere with Tanzu Cluster - Tanzu Kubernetes Grid Service - Upgrade Cluster
Tanzu Mission Control - Upgrade Cluster - Error Message - admission webhook default.validating.tanzukubernetescluster.run.tanzu.vmware.com denied the request

The Cause

Tanzu Mission Control doesn’t keep information about the Tanzu cluster’s CNI configuration. Today, TMC doesn’t support upgrading clusters that are provisioned using Calico, and this is not documented in the TMC documentation.

If you provision a cluster using TMC, it will use the Antrea CNI, and you cannot change this.

Below you can see that my cluster was provisioned using the Calico CNI.

Tanzu Mission Control - Upgrade Cluster Fails - kubectl get tanzukubernetescluster
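
If you want to check this yourself, you can query the cluster object from the Supervisor or Management cluster context. A minimal sketch, assuming the v1alpha1 TanzuKubernetesCluster API and hypothetical cluster and namespace names:

kubectl get tanzukubernetescluster -A

# inspect the CNI configured for a specific cluster
kubectl get tanzukubernetescluster my-cluster -n my-namespace -o jsonpath='{.spec.settings.network.cni.name}'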

The Fix

Upgrade the Tanzu Cluster outside of Tanzu Mission Control.
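
As a rough sketch of what that looks like for a TKGS-provisioned cluster (assuming the v1alpha1 TanzuKubernetesCluster API; the cluster name, namespace, and version below are placeholders, so follow the official upgrade documentation for your release):

# from the Supervisor cluster context, edit the cluster spec
kubectl edit tanzukubernetescluster my-cluster -n my-namespace

# then bump the Kubernetes release under spec.distribution, e.g.
#   distribution:
#     version: v1.21

For clusters deployed by a standalone TKG Management Cluster, the equivalent workflow is typically the tanzu CLI, e.g. tanzu cluster upgrade my-cluster.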

Regards

Dean Lewis


Tanzu Kubernetes Grid – Manual Certificate Renewal

The Issue
Note: VMware has released a full in-depth KB Article that I'd advise you review along with this blog post. If you have any queries or concerns with the processes detailed, always open a support ticket!
- How to rotate certificates in a Tanzu Kubernetes Grid cluster (86251)

One day my Kubernetes cluster just stopped responding. I could no longer connect to the Kubernetes API.

I rebooted all the nodes (as it was a demo environment), but had no luck, still nothing. So I had to go off digging.

The Cause

I SSH’d into one of my control-plane nodes and started to tail the kubelet logs. Continue reading Tanzu Kubernetes Grid – Manual Certificate Renewal
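
For reference, the sort of commands involved look like this (a sketch, assuming a systemd-managed kubelet and kubeadm-provisioned certificates, which is how TKG nodes are bootstrapped):

# tail the kubelet logs on the control-plane node
sudo journalctl -u kubelet -f

# check certificate expiry (older kubeadm releases use 'kubeadm alpha certs check-expiration')
sudo kubeadm certs check-expiration

# or inspect the API server's serving certificate directly
echo | openssl s_client -connect localhost:6443 2>/dev/null | openssl x509 -noout -dates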


VMware Cloud on AWS Deep Dive – Activating, Deploying and Using the managed Tanzu Kubernetes Grid Service

In this blog post I’m going to deep dive into the end-to-end activation, deployment, and consumption of the managed Tanzu services (Tanzu Kubernetes Grid Service, or TKGS) within a VMware Cloud on AWS SDDC. I’ll deploy a Tanzu Cluster inside a vSphere Namespace, and then deploy my trusty Pac-Man application and make it publicly accessible.

Prior to this capability, you would need to deploy Tanzu Kubernetes Grid to VMC as a Management Cluster and then additional Tanzu Clusters for your workloads (see terminology explanations here). This was a fully supported option; however, it did not provide all the integrated features you get by using TKGS as part of your on-premises vSphere environment.

What is Tanzu Services on VMC?

Tanzu Kubernetes Grid Service is a managed service built into the VMware Cloud on AWS vSphere environment.

This feature brings the integrated Tanzu Kubernetes Grid Service inside vSphere itself. By coupling the platforms together, you can easily deploy new Tanzu clusters, use the administration and authentication of vCenter, and apply governance and policies from vCenter as well.

Note: VMware Cloud on AWS does not enable activation of Tanzu Kubernetes Grid by default. Contact your account team for more information. 

Note 2: In VMware Cloud on AWS, the Tanzu workload control plane can be activated only through the VMC Console.

But wait, couldn’t I already install a Tanzu Kubernetes Grid Cluster onto VMC anyway?

Tanzu Kubernetes Grid is a multi-cloud solution that deploys and manages Kubernetes clusters on your selected cloud provider. Prior to the vSphere-integrated Tanzu offering for VMC that we are discussing today, you would deploy the general TKG option to your SDDC vCenter.

What differences should I know about this Tanzu Services offering in VMC versus the other Tanzu Kubernetes offering?
  • When activated, Tanzu Kubernetes Grid for VMware Cloud on AWS is pre-provisioned with a VMC-specific content library that you cannot modify.
  • Tanzu Kubernetes Grid for VMware Cloud on AWS does not support vSphere Pods.
  • Creation of Tanzu Supervisor Namespace templates is not supported by VMware Cloud on AWS.
  • vSphere namespaces for Kubernetes releases are configured automatically during Tanzu Kubernetes Grid activation.
Activating Tanzu Kubernetes Grid Service in a VMC SDDC
Reminder: Tanzu services activation capabilities are not enabled by default. Contact your account team for more information.

Within your VMC Console, you can either go via the Launchpad method or via the SDDC inventory item. I’ll cover both:

  • Click on Launchpad
  • Open the Kubernetes Tab
  • Click Learn More

VMC - Launchpad - Kubernetes

  • Select the Journey Tab
  • Under Stage 2 – Activate > Click Get Started

VMC - Launchpad - Kubernetes - Journey - Get started

Alternatively, from the SDDC object in the Inventory view:

  • Click Actions
  • Click “Activate Tanzu Kubernetes Grid”

VMC - Inventory - SDDC - Activate Tanzu Kubernetes Grid

You will now be shown a status dialog, as VMC checks to ensure that Tanzu Kubernetes Grid Service can be activated in your cluster.

This will check that you have the correct configuration and compute resources available.

VMC - Inventory - SDDC - Activate Tanzu Kubernetes Grid - Checking cluster resources

If the check is successful, you will be presented with the configuration wizard. Essentially, all you need to provide is the configuration for four networks. Continue reading VMware Cloud on AWS Deep Dive – Activating, Deploying and Using the managed Tanzu Kubernetes Grid Service
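
For a sense of what the wizard asks for, the four network inputs are typically along these lines (illustrative labels and example values only; choose CIDR ranges that do not overlap your SDDC or any connected networks):

  • Service CIDR – e.g. 10.96.0.0/23
  • Namespace Network CIDR – e.g. 10.244.0.0/20
  • Ingress CIDR – e.g. 192.168.100.0/24
  • Egress CIDR – e.g. 192.168.101.0/24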


Openshift-install CLI Tool – Crash – Unable to decode instructions – Apple MacBook M1

The Issue

When running the OpenShift-Install CLI tool on my Apple MacBook M1 to create an OpenShift Cluster, I kept hitting the same error:

assertion failed [inst.has.value()]: failed to decode instruction: 0x0

Openshift-install CLI Tool - Crash - Unable to decode instructions - Apple MacBook M1

The Cause

This is believed to be an issue caused by the combination of Rosetta 2 and Golang, and is somewhat documented in this GitHub issue by Apple Engineering.

The OpenShift-Install CLI tool uses Terraform, which relies on Golang.

The Fix

In the above GitHub issue, it was found that running the below command locally, or keeping it in your ~/.zshrc file, will work around the issue.

export GODEBUG=asyncpreemptoff=1
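
To keep the workaround in place for future shells (assuming zsh, the default shell on modern macOS), you can append it to your profile:

echo 'export GODEBUG=asyncpreemptoff=1' >> ~/.zshrc
source ~/.zshrc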

Thank you to Andrew Sullivan from Red Hat, who pointed me to this blog post to help me find the answer!

Regards

Dean Lewis


EKS – Kubectl – Unable to connect to the server – Exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

The Issue

After moving my life over to a new MacBook and installing the latest AWS CLI tools, including the “aws-iam-authenticator” tool, I couldn’t run commands against my EKS clusters. I kept hitting the following issue:

> kubectl get pods

Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
eks - aws-iam-authenticator - unable to connect to cluster

The Cause

AWS updated the aws-iam-authenticator component in version 0.5.4 to require v1beta1 in your kubeconfig file for the cluster context. More than likely you will be using v1alpha1, which generates this error.
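
You can quickly confirm which API version your kubeconfig references before changing anything (assuming the default kubeconfig path):

grep -n "client.authentication.k8s.io" ~/.kube/config

Any entries showing v1alpha1 are the ones that need updating.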

The Fix

Update your kubeconfig file as necessary, replacing “v1alpha1” with “v1beta1” in any contexts for EKS clusters.

vi ~/.kube/config

# Alternatively you could run something like the below to automate the change. This will also create a "config.bak" backup of the original file before the changes

sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
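
Afterwards, a quick way to confirm the change took effect, again assuming the default kubeconfig path:

grep "client.authentication.k8s.io" ~/.kube/config
kubectl get pods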

eks - aws-iam-authenticator - v1alpha1 - v1beta1 - kubeconfig file

Below you can see I used the “sed” command, checked my file using “vi”, then ran the kubectl command successfully.

eks - aws-iam-authenticator - sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config

Official GitHub Page

 

Regards

Dean Lewis