Category Archives: VMware

Tanzu Blog Logo Header

Tanzu Kubernetes Grid 1.6 – Management Cluster deployment failure – unable to patch the cluster object

The Issue

When deploying a brand new Tanzu Kubernetes Grid Management Cluster to a vSphere environment, we kept hitting failures like the one below. The deployment was very vanilla, using the default settings with no extra metadata entered into the build.

!! [1223 15:26:17.84239]: init.go:732] Failure while deploying management cluster, Here are some steps to investigate the cause:
!! [1223 15:26:17.84256]: init.go:733] Debug:
!! [1223 15:26:17.84262]: init.go:734] kubectl get po,deploy,cluster,kubeadmcontrolplane,machine,machinedeployment -A --kubeconfig /home/michael/.kube-tkg/tmp/config_Qd01VhPd
!! [1223 15:26:17.84272]: init.go:735] kubectl logs deployment.apps/ -n  manager --kubeconfig /home/michael/.kube-tkg/tmp/config_Qd01VhPd
!! [1223 15:26:17.84278]: init.go:738] To clean up the resources created by the management cluster:
!! [1223 15:26:17.84283]: init.go:739] tanzu management-cluster delete
✘ [1223 15:26:17.84291]: init.go:91] unable to set up management cluster, : unable to patch cluster object: unable to patch optional metadata under labels: unable to patch the management cluster object with optional metadata: unable to patch the cluster object: error while applying patch for "&TypeMeta{Kind:,APIVersion:,}" tkg-system/tkg-mgmt-vsphere-20221223151757: Cluster.cluster.x-k8s.io "tkg-mgmt-vsphere-20221223151757" is invalid: [metadata.labels: Invalid value: "": name part must be non-empty, metadata.labels: Invalid value: "": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')]

The Cause

The tooling creates an erroneous value in the cluster config file, which causes the build error.

The Fix

Search for the latest yaml file created in:

~/.config/tanzu/tkg/clusterconfigs/

and comment out the following line:

CLUSTER_LABELS: :,

# The line will now look like this:

#CLUSTER_LABELS: :,

Now re-run the creation of your cluster using the CLI:

tanzu mc create --file {file_name.yaml}
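
If you prefer to do the whole fix from the shell, below is a minimal sketch, assuming the default clusterconfigs path shown above and that the broken value is the CLUSTER_LABELS line; the variable name is just an example:

# Pick the most recently created cluster config file
CONFIG_FILE=$(ls -t ~/.config/tanzu/tkg/clusterconfigs/*.yaml | head -n 1)

# Comment out the erroneous CLUSTER_LABELS line in place
sed -i 's/^CLUSTER_LABELS:/#CLUSTER_LABELS:/' "$CONFIG_FILE"

# Re-run the management cluster creation against the corrected file
tanzu mc create --file "$CONFIG_FILE"

Once the deployment completes, tanzu mc get should show the management cluster and its control plane as healthy.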

Regards

Dean Lewis

Tanzu Kubernetes Grid Cilium Header

How to Deploy a Tanzu Kubernetes Grid cluster using the Cilium CNI

In this blog post I’m going to dive into how you can create a Tanzu Kubernetes Grid cluster and specify your own container network interface, for example, Cilium. Expanding on the installation, I’ll also cover installing a load balancer service, deploying a demo app, and showing some of the observability features as well.

What is Cilium?
Cilium is an open source software for providing, securing and observing network connectivity between container workloads - cloud native, and fueled by the revolutionary Kernel technology eBPF

Let’s unpack that official website marketing tagline.

Cilium is a container network interface for Kubernetes and other container platforms (apparently there are still others out there!), which provides the cluster networking functionality. It goes one step further than other commonly used CNIs by building on a Linux kernel technology called eBPF, which allows security, visibility, and networking control logic to be inserted into the Linux kernel of your container nodes.

Below is a high-level overview of the features.

TKG Cilium - Features

And a high-level architecture overview.

Cilium Architecture

Is it supported to run Cilium in a Tanzu Kubernetes cluster?

Tanzu Kubernetes Grid allows you to bring your own Kubernetes CNI to the cluster as part of the Cluster bring-up. You will be required to take extra steps to build a cluster during this type of deployment, as described below in this blog post.

As for support for a CNI other than Calico and Antrea, you as the customer/consumer must provide that. If you are using Cilium, for example, you can get enterprise-level support for the CNI from the likes of Isovalent.
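
As a rough illustration of those extra steps, below is a sketch based on the TKG documentation rather than the exact commands from this post; the cluster name and file name are examples. The key part is setting CNI: none in the cluster configuration file so that TKG does not install Antrea or Calico, then installing Cilium yourself once the cluster exists.

# In the workload cluster configuration file (example name my-cilium-cluster.yaml), skip the built-in CNI:
#   CNI: none

# Create the workload cluster without a CNI (nodes will not be fully Ready until a CNI is installed)
tanzu cluster create my-cilium-cluster --file my-cilium-cluster.yaml

# Fetch the admin kubeconfig for the new cluster and switch to its context
tanzu cluster kubeconfig get my-cilium-cluster --admin
kubectl config use-context my-cilium-cluster-admin@my-cilium-cluster

# Install Cilium into the cluster and wait for it to report healthy
cilium install
cilium status --wait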

Recording

How to deploy a Tanzu Kubernetes Cluster with Cilium

Before we get started, we need to download the Cilium CLI tool, which is used to install Cilium into our cluster.

The below command downloads and installs the latest stable version to your /usr/local/bin location. You can find more options here. Continue reading How to Deploy a Tanzu Kubernetes Grid cluster using the Cilium CNI
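
For reference, the install commands in the Cilium CLI documentation look roughly like the following; treat this as a sketch for a Linux amd64 host and check the docs for the current version and checksum steps:

# Work out the latest stable Cilium CLI release and set the target architecture
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64

# Download the release tarball plus its checksum, verify it, then extract the binary to /usr/local/bin
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}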

vRA SaltStack Config Header

vRSLCM – SaltStack Config upgrade fails – LCMUPGRADEVSSC10103

The Issue

When upgrading to vRA SaltStack Config 8.9 using vRealize Suite Lifecycle Manager, I hit an issue stating that the upgrade had failed because the VAMI version of the appliance was already at the expected build number.

Below is a copy of the error message:

LCMUPGRADEVSSC10103

Error Code: LCMUPGRADEVSSC10103
VAMI upgrade for vRealize Automation SaltStack Config failed. Check vRealize Suite Lifecycle Manager logs for more information.
VAMI is already at the version provided for upgrade. Retry the request by passing skipTask as 'true' to skip the VAMI upgrade and proceed further to RAAS upgrade. Check upgrade logs at /var/log/lcm-vami-upgrade.log on the vRealize Automation SaltStack Config host for more details.

com.vmware.vrealize.lcm.vsse.common.exception.VsscUpgardeException: VAMI is already at the version provided for upgrade. Retry the request by passing skipTask as 'true' to skip the VAMI upgrade and proceed further to RAAS upgrade. Check upgrade logs at /var/log/lcm-vami-upgrade.log on the vRealize Automation SaltStack Config host for more details.	at com.vmware.vrealize.lcm.vsse.core.task.VsscVamiUpgradeTask.execute(VsscVamiUpgradeTask.java:96)	at com.vmware.vrealize.lcm
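
Before retrying anything, it is worth checking the upgrade log that the error points to. A minimal sketch, assuming SSH access as root to the SaltStack Config appliance (the hostname is a placeholder):

# Review the tail of the VAMI upgrade log referenced in the error message
ssh root@saltstack-config.example.com "tail -n 100 /var/log/lcm-vami-upgrade.log"
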
The Fix

Rather than follow the error message and retry the task by skipping the failure, I instead performed an inventory sync on the environment this appliance is part of, then retried the task without skipping the failure.

This proved successful, leading me to think that maybe vRSLCM missed collecting a piece of information during the upgrade.

  • Go to your environment with SaltStack Config installed
  • Click the options to trigger the inventory sync

vRSLCM - Trigger Inventory Sync

Keep an eye on the requests, and once the inventory sync has completed, click on your failed upgrade request.

vRSLCM - Requests

Within the request, click Retry.

vRSLCM - Request Details - Retry

And after that you should hopefully see a successfully completed request.

vRSLCM - Request Details - Completed

Regards

Dean Lewis

Tanzu Observability vRealize Operations Cloud Header

Tanzu Observability – Configuring vRealize Operations Cloud Integration

In this blog post, I am going to cover the configuration and consumption of the Tanzu Observability integration with vRealize Operations Cloud.

  • As this blog post is released during VMware Explore, alongside the announcement of the VMware Aria brand for cloud management tooling, these products will become:
    • vRealize Operations Cloud > VMware Aria Operations
    • Tanzu Observability > VMware Aria Operations for Applications

Recording

Below is a recording I put together, covering the same content as this blog post in 10 minutes or less.

Create a Cloud Services Portal API Token

The official documentation for this integration can be found here.

First, we need to create an API token that provides the following access:

  • Organisation Member
  • vRealize Operations Cloud > vROPs ReadOnly

Go to My Account in the CSP by clicking on your name in the top right-hand corner, then My Account. Select the API Tokens tab and generate an API token.
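
If you want to sanity-check the token before using it, you can exchange it for an access token against the Cloud Services Portal auth API. A minimal sketch using curl, with the token value as a placeholder:

# Exchange the CSP API token (a refresh token) for a short-lived access token
curl -s -X POST "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "refresh_token=YOUR_CSP_API_TOKEN"

A successful response returns a JSON body containing an access_token, which confirms the token is valid.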

Save the API Token to a safe space for use in the next step. Continue reading Tanzu Observability – Configuring vRealize Operations Cloud Integration

vRealize Operations ElasticStack Header

Sending vRealize Operations Alerts to ElasticStack (ELK)

This blog post is thanks to an internal query that I thought should be easy enough to complete; however, my experience with ELK environments is limited, so it was a good chance to dig in and learn something new.

In this blog post, I’m going to detail the configurations for pushing vRealize Operations Alert notifications to ElasticStack (aka ElasticSearch, ELK) using the Notification Webhook feature.

Again, I am not an ELK expert here, so there may be (read: probably are) better ways to configure this when it comes to the date handling.

Configure an ingestion timestamp in ELK

One of the first issues I hit when testing all of this is the fact that ELK doesn’t seem to like the date format that vROPs alerts use. Once an index (a store of data records) is created, the fields are parsed, and the type attributed to a field cannot be changed. I went through various options to remedy this so that my logs could be searched based on timestamps, but it didn’t seem easily feasible. If anyone knows of the best way to achieve this, let me know; see the end of this blog post for more details.

For those of you who do know Elasticsearch: vROPs sends the time/date in the notification payload in the following format: "EEE LLL dd HH:mm:ss z uuuu".

The best way I found around this is to create an ingestion timestamp on the data received by Elasticsearch, and add it to the settings of the created index.

To create this ingestion rule, in your Elastic UI, click the three-lines menu icon to open the navigation options, then click Dev Tools, under Management.

vROPS ELK - Elastic - Management - Dev Tools

This will give you in-browser console access to send configurations to the Elastic environment. When reading the documentation, you’ll notice that the configuration for Elastic is provided a lot of the time as API commands and payloads. It seems this is the preferred way to configure the system, with the UI lacking the ability to make these changes for most options.

Paste in the content shown below the screenshot, which creates a pipeline rule that processes the data coming into the system.

Once the syntax is validated, you will see a small green arrow appear to apply the configuration. The right-hand console window shows the output from running the API call and payload.

vROPS ELK - Elastic - Create pipeline - ingestion timestamp

PUT _ingest/pipeline/set-timestamp
{
  "description": "sets the timestamp",
  "processors": [
    {
      "set": {
        "field": "timestamp",
        "value": "{{{_ingest.timestamp}}}"
      }
    }
  ]
}
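
To make the pipeline run automatically on incoming alert documents, it can be attached to the index as its default pipeline. A sketch is below, assuming an index named vrops-alerts; the index name in your environment will likely differ:

PUT vrops-alerts/_settings
{
  "index.default_pipeline": "set-timestamp"
}
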
Create the outbound webhook in vRealize Operations

Continue reading Sending vRealize Operations Alerts to ElasticStack (ELK)