Category Archives: VMware

Red Hat OpenShift + VMware Header

OpenShift 4.10 on VMware – Introducing the out-of-the-box vSphere CSI Driver installation

OpenShift Container Platform defaults to using an in-tree (non-CSI) plug-in to provision vSphere storage.
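
To make this concrete, the difference shows up in the StorageClass provisioner. Below is a rough, illustrative comparison; the class names follow what OpenShift typically ships (“thin” for in-tree, “thin-csi” for the CSI driver) and the storage policy value is a placeholder, so check your own cluster rather than copying these verbatim.

# In-tree (non-CSI) vSphere StorageClass of the kind provisioned by default
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
# Equivalent StorageClass backed by the vSphere CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-csi
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # placeholder storage policy name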

What’s New?

In OpenShift 4.9, the out-of-the-box installation of the vSphere CSI driver was tech preview. This has now moved to GA!

This means that during an Installer-Provisioned Infrastructure (IPI) cluster bring-up, the vSphere CSI driver will be enabled.

This is part of OpenShift’s ongoing “journey” to CSI drivers. As you may be aware, the original “in-tree” storage drivers will be removed from future versions of Kubernetes, making way for CSI drivers, a better storage integration implementation.

OpenShift Storage - Journey to CSI

Therefore, the Red Hat team has been working with the open-source upstream vSphere CSI driver and the VMware storage team to integrate it into the OpenShift installation.

The aim here is two-fold: to take further advantage of the VMware platform, and to enable CSI Migration, so that it is easier for customers to migrate their existing persistent data from in-tree provisioned storage constructs to CSI-provisioned constructs.

How do I enable this?

Continue reading OpenShift 4.10 on VMware – Introducing the out-of-the-box vSphere CSI Driver installation

vRealize Operations Management Pack Builder Header

vRealize Operations Management Pack Builder – Building your first management pack

What is the Management Pack Builder?

Well, it’s exactly as the name suggests: a tool for building your own vRealize Operations Management Pack, to bring data into vROps where no existing Management Pack covers your data source today.

How do I get access to it?

You can sign up for the BETA here. Currently VMware is taking feedback from customers to help shape the future of this product.

You can find documentation and videos on the product on this page.

Note: VMware does not commit to delivering features discussed in this program in any generally available product.

Installing the Appliance

I’m not going to go into detail here; it’s a simple appliance that you deploy as an OVA file, providing the networking configuration as either DHCP or a static IP.

Setup the Management Pack Builder

Log into the appliance using “admin/admin” and you will be prompted to change your password.

You will need to license the product (using the beta key).

  • Click the little person icon in the top right
  • Select “Licence”
  • Apply the licence

Next, we need to create a connection to our vRealize Operations environment. While this step is not strictly necessary to get started, if you want to create relationships between your data objects and existing vROps objects, you need to configure this connection.

  • Select the “vRealize Operations Connections Tab”
  • Click “Add vRealize Operations Connection” button
  • Input your details
    • Click Test
    • Click Save

vRealize Operations Management Pack Builder - vRealize Operations Connector

Creating your first management pack

Continue reading vRealize Operations Management Pack Builder – Building your first management pack

Tanzu Observability Header

Tanzu Observability – First look at monitoring OpenShift & VMware Cloud on AWS

Recently, I was involved in some work assisting the VMware Tanzu Observability team in updating their deliverables for OpenShift. Now that it’s generally available, I found some time to test it out in my lab.

For this blog post, I am going to pull in metrics from my VMware Cloud on AWS environment and the Red Hat OpenShift Cluster which is deployed upon it.

What is Tanzu Observability?

We should probably start with what observability is. I could reinvent the wheel, but instead VMware has you covered with this helpful page.

Below is a shortened comparison table.

Monitoring vs. Observability

As a developer, you want to focus on developing the application, but you also need to understand the rest of the stack to a point. In the middle, you have the Site Reliability Engineer (SRE), who covers the platform itself and its availability, ensuring the app runs as well as it can. And finally, we have the platform owner, who looks after the environment where the applications and other services are located.

Somewhere in the middle, when it comes to tooling, you need to cover areas such as those listed below:

  • Application Observability & Root Cause Analysis
    • App-aware Troubleshooting & Root Cause Analysis
  • Distributed Tracing
  • CI/CD Monitoring
  • Analytics with Query Language and high reliability, granularity, cardinality, and retention
  • Full-Stack Apps & Infra Telemetry as a Service
  • Infra Monitoring
    • Performance Optimization
    • Capacity and Cost Optimization
    • Configuration and Compliance

So now you are thinking, OK, but VMware has vRealize Operations that gives me a lot of data, so why is there a new product for this?

vRealize Operations and Tanzu Observability come together to deliver full-stack monitoring and observability from both the infra-up and app-down perspectives, equipping both teams in the org to meet common goals.

Monitoring & Observability

It is about the right tool for the right team, and bringing harmony between them, which is why at VMware the focus has been on covering the needs of both teams across the two products.

vRealize Operations is going to give you SLA metrics for your infrastructure and even application awareness. However, Tanzu Observability brings more application-focused features, allowing you as a business to report on the application experience of your end users/customers using an SLA/SLO/KPI approach, with the extensibility to provide an Experience Level Agreement (XLA) type capability.

VMware Tanzu Observability by Wavefront delivers enterprise-grade observability and analytics at scale. Monitor everything from full-stack applications to cloud infrastructures with metrics, traces, event logs, and analytics.

High level features include:

To follow along with this blog, you can also easily get yourself access to Tanzu Observability.

Configuring data ingestion into Tanzu Observability using the native integrations

Configuring the OpenShift (Kubernetes) Integration using Helm

First, we need to create an API key that we can use to connect our locally deployed Wavefront services to the SaaS service to send data.

Continue reading Tanzu Observability – First look at monitoring OpenShift & VMware Cloud on AWS

vRealize Automation - VMware Tanzu Header

Deploying vSphere with Tanzu Clusters using vRA and Cluster Plans

In this blog post, I cover the native vRealize Automation feature that allows you to deploy Tanzu clusters via the Tanzu Kubernetes Grid Service in vCenter.

If you have been following my posts from 2021, you’ll know I wrote a blog post and presented at VMworld on how to deploy Tanzu clusters using vRA Code Stream, due to the lack of native integration at the time.

Now you have either option!

Pre-requisites
  • A working vSphere with Tanzu setup
  • Create a Supervisor Namespace that we can deploy clusters into
    • vRA requires an existing Supervisor namespace to deploy clusters into, despite vRA separately being able to create Supervisor namespaces via a Cloud Template
    • This namespace needs a VM Class and Storage Policy to be attached.
Configuring the vRealize Automation Infrastructure settings
  • Create a Cloud Account for your vCenter
    • Ensure that once the data collection has run, the account shows “Available for Kubernetes deployment”

vRA - Cloud Account - vCenter - Available for Kubernetes deployment

  • Create a new Kubernetes Zone
    • Select your Cloud Account linked vCenter
    • Provide a name
  • Select the Provisioning tab

vRA - New Kubernetes Zone

  • Click to add compute to the zone.
    • For the Tanzu Cluster deployment, this needs to be into existing Supervisor namespaces (as in the pre-reqs).
    • Add your existing Supervisor namespaces you are interested in using

You can add the Supervisor cluster itself, but it won’t be used in this feature walk-through. If you have multiple Supervisor namespaces, I recommend tagging them in this view, so that you can use the tag as a constraint in the Cloud Template (sketched below).

vRA - New Kubernetes Zone - Provisioning
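
As a rough illustration of that constraint, suppose you tagged a Supervisor namespace with a hypothetical tag of env:tanzu in this view. Assuming the Cloud.Tanzu.Cluster resource honours constraint tags in the same way as other vRA resource types (worth validating in your own environment), the Cloud Template resource could then steer placement like this:

resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: '${input.cluster_name}'
      plan: small-v120
      workers: '${input.workers}'
      # Hypothetical constraint tag matching the tag applied to the Supervisor namespace in the Kubernetes Zone
      constraints:
        - tag: 'env:tanzu'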

  • Click Projects, select your chosen project
  • Select the Kubernetes Provisioning tab
  • Add your Kubernetes Zone

vRA - Projects - Kubernetes Provisioning

  • Click Cluster Plans under Configure heading
  • Create a new Cluster Plan with your specification
    • Select the vCenter Account it will apply to
    • Provide a name (a-z, A-Z, 0-9, -)
      • The UI will allow you to input characters that are not supported by the Cloud Template name property
    • Select your Kubernetes version to deploy
    • Set the number of nodes for the control plane and worker nodes
    • The Machine Class (VM Class on the Supervisor Namespace) for each Node Type
      • You will be able to select from the VM classes added at the Supervisor namespace in vCenter
    • Select the Storage Class for each Node Type
    • Select the default PVC storage class in the cluster
    • Enable/disable including all Supervisor Namespace storage classes
    • Choose either default networking deployment for clusters or provide your own specification.

vRA - Cluster Plans

Regarding the network settings, in the image below I have highlighted how the Tanzu Kubernetes Grid Service v1alpha1 API YAML format for a cluster creation request maps across to the settings expected by vRA.

You can find further examples here.

vRA - Cluster Plans - Network Settings
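
For reference, a minimal v1alpha1 cluster request of the kind these settings map from looks roughly like the below. The namespace, storage class and CIDR values are illustrative placeholders rather than values from my environment:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: vra-test
  namespace: my-supervisor-namespace      # placeholder Supervisor Namespace
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: my-storage-policy     # placeholder storage class
    workers:
      count: 1
      class: best-effort-small
      storageClass: my-storage-policy
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["10.96.0.0/12"]      # placeholder service CIDR
      pods:
        cidrBlocks: ["192.168.0.0/16"]    # placeholder pod CIDR
      serviceDomain: cluster.local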

  • Create a Cloud Template
  • Place the “K8s Cluster” resource object on your canvas
  • Configure the properties as needed
    • The workers property will override the workers number in the Cluster Plan

Below is the example I used.

formatVersion: 1
inputs:
  cluster_name:
    type: string
    title: Cluster_name
    default: vra-test
  workers:
    type: integer
    title: No. of Workers
    default: 1
resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: '${input.cluster_name}'
      plan: small-v120
      workers: '${input.workers}'

Once you are happy, deploy the Cloud Template.

vRA - Cloud Template - type cloud.tanzu.cluster

Successful Deployment of a Tanzu Cluster

In the below screenshots, you can see the completed deployment.

  • Clicking on the Resource Object, you have the ability to download a Kubeconfig file to access the cluster.

vRA - Deployment completed - Resource Object details

  • Viewing the History Tab will show you details about the creation.

vRA - Deployment completed

  • Clicking on the Request Details tab will show you the user inputs taken at the time of deployment.

vRA - Deployment completed - Request Details

If you look at the “Infrastructure” tab and the configuration under Kubernetes, you will see this cluster is onboarded into vRA. You can then use other Cloud Templates against this cluster, for example to create Kubernetes namespaces within it.

vRA - Infrastructure - Kubernetes - Cluster
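
As a sketch of that follow-on use case, a simple Cloud Template might look something like the below, assuming the Cloud.K8S.Namespace resource type is available to your project and is placed against the onboarded cluster; treat it as an illustration to validate rather than a tested template.

formatVersion: 1
inputs:
  namespace_name:
    type: string
    title: Namespace name
    default: veducate-app            # placeholder namespace name
resources:
  Cloud_K8S_Namespace_1:
    type: Cloud.K8S.Namespace
    properties:
      name: '${input.namespace_name}'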

Finally, within my vCenter you can see my deployed cluster, in the Supervisor Namespace I specified in the Kubernetes Zone.

vRA - Deployed Tanzu cluster in vCenter Supervisor Namespace

Regards

Dean Lewis

Tanzu Blog Logo Header

Tanzu Service Mesh – First look at deploy, configure and managing an app between clusters

I’ve finally found the time to dive into Tanzu Service Mesh (TSM), a product which has intrigued me greatly. In this blog post I cover setting up TSM, and configuring application connectivity between two clusters running in different public clouds.

What is a Service Mesh?

In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between services or micro-services, using a proxy. This then provides additional capabilities such as a transit layer between platforms, observability, traffic management and more.

A service mesh provides the capability to secure and simplify your application communications between platforms/clusters/clouds without the need to introduce lots of other software such as API gateways.

A service mesh introduces a proxy component which sits side by side with each deployed service, and any inbound and outbound communication to/from the service flows through the proxy. This allows the proxy to provide all the functions needed to offer the capabilities discussed above and in the rest of this blog post.

What is Tanzu Service Mesh (TSM)?

TSM builds upon the principles mentioned above, offering consumers a full enterprise-class service mesh, designed to solve challenges across distributed micro-service applications by extending service mesh services outside Kubernetes clusters and providing a unified operational layer across platforms and technologies.

  • Extends the service mesh capability (discovery, connectivity, control, security, and observability) to users and data, and across clusters and clouds.

  • Facilitates the development and management of distributed applications across multiple clusters, multiple clouds, and in hybrid-cloud environments with Global Namespaces (discussed below), supporting federation across organizational boundaries, technologies, and service meshes.

  • Implements consistent application-layer traffic management and security policies across all your clusters and clouds.

Global Namespace is a key concept introduced by TSM that extends service mesh capabilities for resources in a distributed application by arranging these objects in a logical group. A global namespace is therefore not tied to a single cluster or cloud, connecting resources between two or more clusters/clouds. Each global namespace manages service discovery, observability, encryption, policies, and service-level objectives (SLOs) for its objects regardless of where they reside: in multiple clusters, sites, or clouds.

To learn more about the concepts introduced by TSM, please read the below document:

Tanzu Service Mesh - Logical Overview

Encryption for Data in Flight

TSM provides the value of a transit layer between clusters and clouds. With this come many challenges, one of which is achieving end-to-end encryption for in-flight traffic; this can be mandatory to comply with regulatory and security governance measures.

Tanzu Service Mesh includes a top-level certificate authority (CA) to provide a trusted identity to each node on the network. In the case of microservices architecture, those nodes are the pods that run the services. Tanzu Service Mesh can set up end-to-end mutual transport layer security (mTLS) encryption using a CA function.

Activating Tanzu Service Mesh

TSM is a SaaS-only service from VMware, meaning there is no on-premises deployment for the control plane aspect. To activate it, you click the link provided when you request a trial or purchase the product, and link it to your VMware Cloud Services Organisation (or create a new org if necessary).

You must belong to an organization before you can access Tanzu Service Mesh. 

If you are an organization owner, you have access to all the resources of the organization. You can add cloud services to your organization and then invite users. 

After you add Tanzu Service Mesh to your organization, you can add users to the organization to give them access to Tanzu Service Mesh. 

For more information about organizations and adding users to an organization, see the VMware Cloud Services documentation.

Once you’ve activated the product, you will see it in your Cloud Services Portal as below.

VMware Cloud Services - Tanzu Service Mesh

Pre-requisites for Installation

This is something which caused me issues in the past, where I tried to deploy TSM to a cluster that did not have enough resources to begin with.

- Cluster with 6 nodes (though the documentation does say minimum of 3 lower down in the text)
> I've queried this

- Requires that every node in the cluster have at least 250 milliCPU and 650 MB of memory available

- Pod Limits - Requires a quota of at least 3 pods for each node on the cluster and additionally at least 30 pods for the whole cluster.

Basically, you need to make sure you meet the minimum node count and have enough resources available.

Regarding supported Kubernetes distributions, the team validates against a number of enterprise Kubernetes solutions as well as upstream Kubernetes.

I deployed two new environments for my testing, an EKS and AKS cluster, with the following commands.

eksctl create cluster --name veducate-eks-tsm --region us-west-1 --nodegroup-name standard --node-type m5.xlarge --managed

az aks create --resource-group aks-tsm --name aks-tsm-cluster --node-count 4 --ssh-key-value $HOME/.ssh/id_rsa.pub -s standard_d4s_v5

Onboarding your Clusters into Tanzu Service Mesh


Onboarding involves registering the cluster with Tanzu Service Mesh and installing the necessary software on the cluster. Continue reading Tanzu Service Mesh – First look at deploy, configure and managing an app between clusters