Category Archives: VMware

Tanzu Kubernetes Grid – How to edit Node resources and Scale a Cluster Vertically With kubectl

In this blog post, I am going to walk you through how to edit the machine resource configurations for nodes deployed by Tanzu Kubernetes Grid.

Example Issue – Disk Pressure

In my environment, I found I needed to alter my node resources, as several pods in my cluster were being evicted.

By running a describe on the pod, I could see the failure message was due to the node condition DiskPressure.

  • If you need to clean up a high number of pods across namespaces in your environment, see this blog post.
kubectl describe pod {name}

TKG - kubectl describe pod - failed - evicted - pod the node had condition disk pressure

I then looked at the node that the pod was scheduled to. (You can see this in the above screenshot, 4th line, “node”.)

Below we can see that the kubelet has tainted the node to stop further pods from being scheduled to it.

In the events, we can see the message “Attempting to reclaim ephemeral-storage”.

TKG - kubectl describe node - disk pressure
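
You can confirm this yourself by inspecting the node's conditions, events and taints. A minimal sketch, replacing {name} with your node name:

kubectl get nodes
# shows the DiskPressure condition and the recent kubelet events
kubectl describe node {name}
# shows just the taints applied to the node
kubectl get node {name} -o jsonpath='{.spec.taints}'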

Configuring resources for Tanzu Kubernetes Grid nodes

First, you will need to log into the Tanzu Kubernetes Grid management cluster that was used to deploy the workload (guest) cluster, as this controls cluster deployments and holds the necessary bootstrap and machine creation configuration.

Once logged in, locate the existing VsphereMachineTemplate for your chosen cluster. Each cluster will have two of these configurations (one for the control plane nodes, one for the worker nodes).

If you have deployed TKG to a public cloud, you can use the following types instead and continue to follow this article, as the theory is the same regardless of where you have deployed:

  • AWSMachineTemplate on Amazon EC2
  • AzureMachineTemplate on Azure
kubectl get VsphereMachineTemplate

TKG - kubectl get VsphereMachineTemplate

You can attempt to alter this object directly; however, when trying to save the edited file, you will be presented with the following error message:

kubectl edit VsphereMachineTemplate tkg-wld-01-worker

error: vspheremachinetemplates.infrastructure.cluster.x-k8s.io "tkg-wld-01-worker" could not be patched: admission webhook "validation.vspheremachinetemplate.infrastructure.x-k8s.io" denied the request: spec: Forbidden: VSphereMachineTemplateSpec is immutable

TKG - kubectl edit VsphereMachineTemplate - Forbidden- VSphereMachineTemplateSpec is immutable

Instead, you must output the configuration to a local file and edit it. You will also need to remove the following fields if you are using the method below.
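
As a rough sketch of that flow, using the worker template from this example (the MachineDeployment name is a placeholder, and the exact fields to remove can vary by TKG version):

# export the immutable template to a local file
kubectl get vspheremachinetemplate tkg-wld-01-worker -o yaml > worker-template.yaml
# edit worker-template.yaml: give metadata.name a new value (e.g. tkg-wld-01-worker-v2),
# change spec.template.spec.numCPUs / memoryMiB / diskGiB, and remove server-generated
# fields such as metadata.creationTimestamp, metadata.resourceVersion and metadata.uid
kubectl apply -f worker-template.yaml
# then point the cluster's MachineDeployment (or KubeadmControlPlane for control plane
# nodes) at the new template name to trigger a rolling replacement of the machines
kubectl edit machinedeployment tkg-wld-01-md-0

Continue reading Tanzu Kubernetes Grid – How to edit Node resources and Scale a Cluster Vertically With kubectl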

Using vRealize Log Insight Cloud to archive on-premise Log Insight Data

vRealize Log Insight 8.6 brings the ability to build a hybrid log management platform, combining the functionality of an on-premises deployment of vRLI with vRLI Cloud.

In this blog post, we’ll look at how to configure the following capability from the release notes:

  • Simplify Log Archival with Non-Indexed Partitions: Use vRealize Log Insight Cloud to archive logs to meet your long-term retention requirements. vRealize Log Insight Cloud provides a no-limit logging solution at a low cost and eliminates any storage management overheads of the past. This enables easy accessibility to archived logs through on-demand queries.

For this, you will need access to a vRealize Log Insight Cloud Instance, with a cloud proxy deployed to your environment that can be accessed by the on-premises vRealize Log Insight platform.

The expectation is that you would forward your on-premises vRealize Log Insight logs to the vRealize Log Insight Cloud instance, storing them only in a Non-Indexed Partition (discussed below), while your on-premises deployment acts as your easy-to-analyse near-time (within 30 days) copy of your logs.

In this blog post, I also explore the configuration and use of Indexed Partitions, which offer that same near-time usability and log analysis in the cloud.

The high-level steps for the configuration discussed in this blog post are:

  • Send infrastructure or application logs to your on-premises vRealize Log Insight deployment
  • Setup the cloud proxy (if not already done)
  • Setup log forwarding from the on-premises Log Insight instance
  • In vRealize Log Insight Cloud, configure a Non-Indexed Partition to receive the forwarded logs

What are Log Partitions?

Log Partitions are a feature that allows you to ingest logs based on user-defined filters. This feature is available as a paid subscription (or Trial).

There are two types of Log Partitions:

  • Indexed Partitions
    • Stores logs for up to 30 days
    • Billed only for volume of logs ingested into the partition
    • Search and analyse logs in this partition without additional costs
  • Non-Indexed Partitions
    • Stores logs for up to 7 years
    • Billed for the volume of logs ingested into the partition, and for searching the logs.
    • If you need to query logs frequently, you can move logs to a recall partition for 30 days.
      • No additional cost for searching and analysing logs in the recall partition

Logs that do not match the filter criteria of any of the configured partitions will be stored in the Default Indexed Partition, which is read-only and stores logs for 30 days.

Note:  

- Alerts and dashboard widgets are not operational in non-indexed partitions.
- Log partitions store logs ingested in the last 24 hours only.
- You can create a maximum of 10 log partitions in an organization.

Video Walk-through

Example Logs

In my Log Insight environment, I have set up the Fluentd configuration to forward the Tanzu Kubernetes Grid logs from two clusters to vRealize Log Insight (the on-premises deployment).

You can find the configuration settings for this within vRealize Log Insight, under the Sources Tab > Containers > Tanzu Kubernetes Grid.
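
For reference, the forwarding side of that configuration looks something like the snippet below. This is a minimal sketch assuming the fluent-plugin-vmware-loginsight output plugin; the hostname and match tag are placeholders for your own environment:

<match kube.**>
  @type vmware_loginsight
  # placeholder vRLI FQDN and HTTPS ingestion port
  host vrli.example.com
  port 9543
  scheme https
  ssl_verify false
  # keep the Fluentd tag on each event so logs are easy to filter in vRLI
  include_tag_key true
  tag_key tag
</match>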

vRLI Log Archive - Configure Fluentd for Tanzu Kubernetes Grid

vRLI Log Archive - Tanzu Kubernetes Grid Logs

Setup the Cloud Proxy

Continue reading Using vRealize Log Insight Cloud to archive on-premise Log Insight Data

How to create vRO Dynamic Types for vRA Custom Resources

This follow-on blog post, diving into how we created the vRA integration with DMS, comes from Katherine Skilling, who kindly offered to guest-write the additional content covering the work we have done internally. You can find her details at the end of this blog post.

In an earlier blog post, Dean covered the use of vRA (vRealize Automation) Custom Resources in the context of using vRA to create databases in DMS (Data Management for VMware Tanzu) and how to create custom day 2 actions. In this post, we will look at how we created the Dynamic Types in vRO (vRealize Orchestrator) to facilitate the creation of the custom resources in vRA.

Introduction – What are Dynamic Types?

Dynamic Types are custom objects in vRO created to extend the schema so that you can create and manage 3rd party objects within vRO. Each type has a definition that contains the object’s properties as well as its relationships within the overall namespace, which is the top level in the Dynamic Types hierarchy.

As we started working on our use case, we looked at a tool (published on VMware Code) that would generate Dynamic Types based on an API Swagger specification. The problem we encountered was the tool was quite complex and our API Swagger for Data Management for VMware Tanzu (DMS) didn’t seem to quite fit with the expected format.

This meant we ended up with lots of orphaned entries after running the tool and hoping it would do all the heavy lifting for us. After spending some time investigating and troubleshooting, it became clear we didn’t understand Dynamic Types, and how they are created, well enough to resolve all our issues. Instead, we decided to scale back our plans and focus on just the database object we really needed initially. We could use it as a learning exercise, and then revisit the generator tool later once we had a more solid foundation.

To get a better understanding of how Dynamic Types work, I recommend this blog from Mike Bombard. He walks through a theoretical example using a zoo and animals to show you how objects are related, as well as how to create the required workflows. I like this particular blog as you don’t need to consider how you are going to get values from a 3rd party system, so it’s easy to follow along and see the places where you would be making an external connection to retrieve data. It also helped me to understand the relationships between objects without getting mixed up in the properties provided by technical objects.

After reading Mike’s post, I realised that we only had a single object for our use case: a database within DMS. We didn’t have any other objects related to it; it didn’t have a parent object, and it didn’t have any children. So, when we created a Dynamic Type, we would need to generate a placeholder object to act as the parent for the database. I chose to name this databaseFolder for simplicity, and because I’m a visual person and like to organise things inside folders. These databaseFolders do not exist in DMS; they are just an object I created within vRO. They have no real purpose or properties other than that the DMS databases are their children in the Dynamic Types inventory.

Stub Workflows

When you define a new Dynamic Type, you must create or associate four workflows with it, which are known as stubs:

  1. Find By Id
  2. Find All
  3. Has Children in Relation
  4. Find Relation

These workflows tell vRO how it can find the Dynamic Type and what its position is in the hierarchy in relation to other types. You can create one set of workflows to share across all Types or you can create a set of workflows per Type. For our use case we only needed one set of the workflows, so we created our code such that the workflows would be dedicated to just the database and databaseFolder objects.

It’s important to know that vRO will run these workflows automatically when administrators browse the vRO inventory, or when using Custom Resources within vRA. They are not started manually by administrators; if you do test them by running them manually, you may struggle to populate the input values correctly.

I’ll give you a bit of background to the different workflows next.

Find By Id Workflow

This workflow is automatically run whenever vRO needs to locate a particular instance of a Dynamic Type, such as when used with Custom Resources in vRA for self-service provisioning. The workflow follows these high-level steps, with a short sketch after the list:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type object for the databaseFolder.
  3. If it is a database, perform the activity required to locate the object using its id value; in our case, this is a REST API call to DMS to retrieve a single database.
  4. Perform any activities required to create the object and set its properties; in our case, this is extracting the database details from the REST API call results, as DMS returns values such as the id and the name in a JSON object formatted as a string.
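
To make this concrete, below is a minimal sketch of the scriptable task inside our Find By Id stub (vRO scripting is JavaScript). The REST host ID and the /databases endpoint path are assumptions for illustration, not the exact DMS API:

// Inputs: type (string), id (string); Output: resultObj (DynamicTypes:DynamicObject)
var typeName = type.split(".")[1]; // type arrives as "DMS.databaseFolder" or "DMS.database"
if (typeName == "databaseFolder") {
    // the folder is only a placeholder, so fabricate it on the fly
    resultObj = DynamicTypesManager.makeObject("DMS", "databaseFolder", id, "Databases");
} else if (typeName == "database") {
    // hypothetical REST call to DMS to retrieve a single database by its id
    var dmsHost = RESTHostManager.getHost("dms-rest-host-id"); // assumes DMS is registered as a REST host
    var response = dmsHost.createRequest("GET", "/databases/" + id, null).execute();
    var db = JSON.parse(response.contentAsString);
    resultObj = DynamicTypesManager.makeObject("DMS", "database", db.id, db.name);
}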

Find All Workflow

This workflow is automatically run whenever vRO needs to locate all instances of a Dynamic Type, such as when the Dynamic Types namespace is browsed in the vRO client, where it is called as a sub-workflow of the Find Relation workflow. The workflow follows these high-level steps, with a short sketch after the list:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type object for the databaseFolder.
  3. If it is a database, perform the activity required to locate all instances of the objects; in our case, this is a REST API call to DMS to retrieve all databases.
  4. Perform any activities required to loop through each of the instances found, creating an object and setting its properties for each one. In our case, this is extracting all of the database details from the REST API call results, looping through each one, and extracting values such as the id and the name.
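
A minimal sketch of the matching Find All scriptable task, under the same assumptions as the Find By Id example:

// Inputs: type (string); Output: resultObjs (array of DynamicTypes:DynamicObject)
var typeName = type.split(".")[1];
resultObjs = [];
if (typeName == "databaseFolder") {
    // only one placeholder folder ever exists
    resultObjs.push(DynamicTypesManager.makeObject("DMS", "databaseFolder", "databaseFolder", "Databases"));
} else if (typeName == "database") {
    // hypothetical REST call returning all databases as a JSON array
    var dmsHost = RESTHostManager.getHost("dms-rest-host-id");
    var response = dmsHost.createRequest("GET", "/databases", null).execute();
    var databases = JSON.parse(response.contentAsString);
    for (var i = 0; i < databases.length; i++) {
        resultObjs.push(DynamicTypesManager.makeObject("DMS", "database", databases[i].id, databases[i].name));
    }
}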

Has Children in Relation Workflow

This workflow is used by vRO to determine whether it should expand the hierarchy when an object is selected in the Dynamic Types namespace within the vRO client. If an object has child objects, these are displayed underneath it in the namespace, in the same way as the databases are displayed under the databaseFolders. The workflow follows these high-level steps, with a short sketch after the list:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder, call the Find Relation workflow to retrieve all related objects.
  3. If it is any other object type, set the result to false to indicate that there are no child objects related to the selected object to display in the hierarchy.
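
A simplified sketch of that logic; here we simply report that folders have children, rather than calling the Find Relation workflow:

// Inputs: parentType (string), relationName (string); Output: result (boolean)
if (parentType == "DMS.databaseFolder" || relationName == "namespace-children") {
    // folders (and the namespace root) contain databases, so vRO should expand the node
    result = true;
} else {
    // databases are leaf objects with no children
    result = false;
}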

Find Relation Workflow

This workflow is used by vRO when an object is selected in the Dynamic Types namespace within the vRO client, and is run automatically each time the namespace is browsed by an administrator, to find any related objects that need to be displayed underneath the selected object (in the same way as the databases are displayed under the databaseFolders). The workflow follows these high-level steps, with a short sketch after the list:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder and the relationName value is “namespace-children” (a special value assigned to the very top level in the selected namespace), create a new Dynamic Type for a databaseFolder.
  3. If it is a database, set the type to DMS.database and then call the Find All workflow to retrieve the Dynamic Type objects for all database instances.
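
And a minimal sketch of the Find Relation logic, under the same assumptions as the earlier stubs; the Find All lookup is represented here by a hypothetical helper:

// Inputs: parentType (string), parentId (string), relationName (string)
// Output: resultObjs (array of DynamicTypes:DynamicObject)
resultObjs = [];
if (relationName == "namespace-children") {
    // top level of the DMS namespace: expose the single placeholder folder
    resultObjs.push(DynamicTypesManager.makeObject("DMS", "databaseFolder", "databaseFolder", "Databases"));
} else if (parentType == "DMS.databaseFolder") {
    // the folder's children are the databases themselves; reuse the Find All logic
    resultObjs = findAllDatabases(); // hypothetical helper wrapping the Find All code above
}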

Creating a Dynamic Type

Defining a Namespace

The first stage in creating our Dynamic Type is to define a new Namespace. Continue reading How to create vRO Dynamic Types for vRA Custom Resources

vSphere with Tanzu – Creating cluster fails with “storage class is not valid”

The Issue

When you have attached a vSphere Storage Policy to your vSphere Namespace and try to create a cluster using the Storage Policy name, you will find it fails with an error such as:

Error from server (storage class is not valid for control plane VM: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl', 

storage class is not valid for worker VMs: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl', 

storage class Tanzu Storage Policy under spec.settings.storage.defaultClass is not valid: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl'): error when creating "cluster.yaml": 

admission webhook "default.validating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: storage class is not valid for control plane VM: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl', 

storage class is not valid for worker VMs: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl', 

storage class Tanzu Storage Policy under spec.settings.storage.defaultClass is not valid: StorageClass 'Tanzu Storage Policy' is not assigned for namespace 'deanl'

When you look at the vSphere namespace, the Storage Policy is attached.

vSphere Namespace Storage Policy

An example of the erroneous Tanzu cluster definition YAML:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: deanl-cluster
  namespace: deanl
spec:
  distribution:
    version: v1.18.5
  topology:
    controlPlane:
      class: best-effort-small
      count: 1
      storageClass: "Tanzu Storage Policy"
    workers:
      class: best-effort-small
      count: 1
      storageClass: "Tanzu Storage Policy"
  settings:
    network:
      cni:
        name: calico
    storage:
      defaultClass: "Tanzu Storage Policy"
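
Before looking at the cause, one way to check which storage classes the Supervisor cluster has actually made available to the namespace is via kubectl. A minimal sketch, using the 'deanl' namespace from this example:

kubectl get storageclass
# the storage classes assigned to a Supervisor namespace also appear in its resource quota
kubectl describe resourcequotas -n deanl
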
The Cause

Continue reading vSphere with Tanzu – Creating cluster fails with “storage class is not valid”

Data Management for VMware Tanzu with vRealize Automation as Custom Resources

In this blog post, we will cover the technical configuration to import the packages that Katherine Skilling (Twitter, LinkedIn, Blog) and I have created.

This work shows the possibility of creating custom workflows, by exploiting Dynamic Types, to integrate products that are not natively supported within vRA. A further write-up will detail the technical configuration of how this integration was created.

You can read this blog post on how to create Dynamic Types in vRealize Orchestrator to be used as custom resources in vRealize Automation:

Updated Feb 2022 - Includes edits needed to enable compatibility with Data Management for VMware Tanzu v1.1

High-Level Overview

This blog post focuses on integrating “Data Management for VMware Tanzu”; you can read more about this product here:

These packages offer the following capabilities:

  • vRA Cloud Assembly Custom Resource for Data Management for VMware Tanzu
    • Create a database instance
    • Delete a database instance (clean up when a deployment is deleted)
    • Day 2 actions for database instance
      • Scale database instance resources
      • Point in Time Backup of database instance
      • Power-On database instance
      • Power-Off database instance

Pre-Requisites

  • Data Management for VMware Tanzu platform deployed and configured
    • Agent appliance deployed and environment configured.
    • Organisation configured with Org Admin user account.
  • vRealize Automation deployed and configured
    • Using embedded vRO will be fine
    • vRA needs to be able to connect to the DMS system over HTTPS, so appropriate routes and firewall rules must be configured.
  • Grab the files from this location

Recording

Below is a 25-minute recording showing how to implement the documented steps that follow in this blog post.

Importing & Configuring the vRealize Orchestrator packages

From the downloaded files, under the folder “vRealize Orchestrator”, there are two files:

  • com.vmware.dms.backup.package
  • dms-dynamictypes-config.package

Open the vRealize Orchestrator UI (https://{vro-url}/orchestration-ui)

  • Left-hand navigation pane > Assets > Packages > Import

DMS - vRO import package

  • Select the file name “com.vmware.dms.backup.package”
  • Select to trust the package and click Import

Continue reading Data Management for VMware Tanzu with vRealize Automation as Custom Resources