Category Archives: VMware

How to build a vROps dashboard for tracking Total VMs deployed and Growth Trend

In this blog post I am going to detail how I created a vROps dashboard based on a customer’s request:

“Can we track how many VMs have been created in the past week, and whether the number increases or decreases in each cluster?”

If you just want the dashboard, see directly below; if you want to learn how it was created, keep reading.

Installing Dashboard
  1. Download the files from the code.vmware.com samples page.
  2. Import the files appended with “View” under the Views section in vROps.
  3. Import the file appended with “Dashboard” under the Dashboards section in vROps.

Dashboard Breakdown
  • First item – This is a list view I’ve created to show each cluster and the Total VM metric with some expressions attached. The timescale here is fixed by the list view and is not affected by the dashboard timeframe. The change is an expression comparing the count of VMs at the start and end of the timeframe, and I’ve added some basic colouring to alert at thresholds.
    • Why does it say vCPUs? When using expressions, a unit must be affixed, and this doesn’t work if you’re counting something; this issue should be addressed in our next release. It’s purely a cosmetic thing.
  • Second item – This shows the VMs attached to the cluster you select on the left-hand side, including how old each VM is, its uptime, and its current power state.
  • Third item – A sparkline giving an easy view of the changes in total VMs per cluster over a 7-day period (as defined by the dashboard’s timescale).
  • Fourth item – A trend graph showing the changes in the Total VM metric based on the data we have, along with the trend/forecast. The forecast period is set within the item itself and can currently show up to 366 days into the future.

[Screenshot: vROps – Total VMs Deployed and Growth Trend dashboard]

vROps versions

The VM creation date metric is available in vROps 8.2 and later. This dashboard/view should still work with older versions of vROps, but will omit the data for the missing metric.

How was the dashboard created?

First, we need to create three views.

Continue reading How to build a vROps dashboard for tracking Total VMs deployed and Growth Trend

VMware Tanzu Mission Control – Using the Data Protection feature for backups and restores

In this blog post we will cover the following topics:

- Data Protection Overview
- Create an AWS Data Protection Credential
- Enable Data Protection on a Cluster
- Running a backup manually or via an automatic schedule
- Restoring your data

The follow-up blog posts are:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

TMC Data Protection Overview

Tanzu Mission Control implements data protection through the inclusion of Project Velero; this tool is not enabled by default. This blog post will take you through the setup.

Data is stored externally to an AWS location, with volume backups remaining as part of the cluster where you’ve connected TMC.

Currently there is no ability to back up and restore data between Kubernetes clusters managed by TMC.
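
Because TMC’s data protection is Project Velero under the hood, once the feature is enabled you can inspect the backup objects directly on the cluster. A quick sketch, assuming the default velero namespace that TMC creates (shown later in this post):

# List the Velero backup objects that TMC has created
kubectl get backups.velero.io -n velero

# Inspect the status of a specific backup (name is a placeholder)
kubectl describe backups.velero.io <backup-name> -n velero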

Create an AWS Data Protection Credential

First, we need to create an AWS data protection credential so that TMC can configure Velero within your cluster to save the data externally to AWS.

If you are looking for supported options for protecting data to other locations, I recommend you either look at deploying Project Velero manually outside of TMC (losing access to the data protection features in the UI) or look at another enterprise service such as Kasten.io.

  • On the Administration screen, click Accounts, then Create Account Credential.
  • Select AWS data protection credential.

[Screenshot: Create Account Credential – AWS data protection credential]

  • Set your credential name for easy identification, click Generate template, and save the file to your machine.

[Screenshot: Create AWS data protection credential – credential name and Generate template]

The next steps will require configuration in the AWS console to create resources using CloudFormation so that Project Velero can export data to AWS. Here is the official VMware documentation on this configuration.

[Screenshot: Log in to the AWS console]

  • In the AWS Console, go to the CloudFormation service

[Screenshot: AWS Console – CloudFormation service]

  • Click to create a new stack
  1. Select “Template is ready”, as we will provide our template file from earlier.
  2. Click to upload a template file.
  3. Select the file from your machine.
  4. Click Next.

[Screenshot: CloudFormation – Create a stack, specify template]

  • Provide a stack name and click Next.

[Screenshot: CloudFormation – Create a stack, specify stack details]

  • Leave all the options on this page at their defaults and click Next.
  • Review your configuration and click Finish.

[Screenshot: CloudFormation – Create a stack, configure stack options]

  • Once you’ve reviewed and clicked create/finish, you will be taken into the stack itself.
  • You can click the Events tab and the refresh button to see the progress, or use the AWS CLI as sketched below.
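
As an alternative to clicking through the console, the same stack can be created and monitored with the AWS CLI. A minimal sketch: the stack name is arbitrary, the template file is the one TMC generated earlier, and the --capabilities flag is only needed if the generated template creates IAM resources:

# Create the stack from the template file TMC generated
aws cloudformation create-stack \
  --stack-name tmc-data-protection \
  --template-body file://<template-generated-by-TMC> \
  --capabilities CAPABILITY_NAMED_IAM

# Watch the stack events from the CLI instead of refreshing the console
aws cloudformation describe-stack-events --stack-name tmc-data-protection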

Continue reading VMware Tanzu Mission Control – Using the Data Protection feature for backups and restores

Enabling Tanzu Mission Control Data Protection on Red Hat OpenShift

Just a quick blog on how to get the Data Protection feature of Tanzu Mission Control working on Red Hat OpenShift. By default, you will find that once the data protection feature is enabled, the pods for the Restic component of Velero error.

  • Enable the Data Protection feature on your OpenShift cluster.

[Screenshot: TMC cluster overview – enable data protection]

  • You will see the UI change to show it’s enabling the feature.

[Screenshot: TMC enabling data protection]

  • You will see the Velero namespace created in your cluster.
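
You can confirm this from the command line; the vmware-system-tmc project alongside it is the TMC cluster agent’s namespace:

oc get projects | grep -E 'velero|vmware-system-tmc'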

[Screenshot: oc get projects showing the velero and vmware-system-tmc namespaces]

However, the “Data Protection is being enabled” message in the TMC UI will continue to show indefinitely without user intervention. If you list the pods in the Velero namespace, you will see that they error.

This is because OpenShift enforces a stricter security context for containers out of the box than a vanilla Kubernetes environment.

[Screenshot: oc get pods showing the restic pods in CrashLoopBackOff]

The steps to resolve this are the same as for a native install of the open-source Project Velero to your cluster.

  • First, we need to add the velero service account to the privileged SCC:
oc adm policy add-scc-to-user privileged -z velero -n velero

[Screenshot: oc adm policy add-scc-to-user privileged velero output]
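
If you want to double-check that the change took effect, the privileged SCC’s user list should now include the velero service account (a quick sketch):

# The output should include system:serviceaccount:velero:velero
oc get scc privileged -o jsonpath='{.users}'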

  • Second, we need to patch the DaemonSet to allow the Restic containers to run in privileged mode:
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'

After this, if we run the command to get all pods under the Velero namespace again, we’ll see that they have been replaced with the new configuration and are running.
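
For reference, that command is:

oc get pods -n velero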

[Screenshot: oc get pods showing the restic pods running]

Going back to our TMC Console, we’ll see the Data Protection feature is now enabled.

[Screenshot: TMC data protection enabled]

Regards

How to configure Red Hat OpenShift to forward logs to VMware vRealize Log Insight Cloud

In this blog post we will cover how to configure Red Hat OpenShift to forward logs from the ClusterLogging instance to an external 3rd party system, in this case, VMware vRealize Log Insight Cloud.

Architecture

OpenShift Cluster Logging has to be configured to access the logs and forward them to 3rd party logging tools. You can deploy the full suite:

  • Visualization: Kibana
  • Collection: FluentD
  • Log Store: Elasticsearch
  • Curation: Curator

However, to ship the logs to an external system, you will only need to configure the FluentD service.

To forward the logs from the internal trusted services, we will use the new Log Forwarding API, which is GA in OpenShift 4.6 and later (it was a tech preview in earlier releases, and the configuration YAMLs are slightly different, so read the relevant documentation version).

This setup will provide us with the architecture below. We will deploy the trusted namespace “openshift-logging” and use the Operator to provide a Log Forwarding API configuration which sends the logs to a 3rd party service.

For vRealize Log Insight Cloud, we will run a standalone FluentD instance inside of the cluster to forward to the cloud service.

[Diagram: OpenShift cluster logging to VMware vRealize Log Insight Cloud architecture]

The log types are one of the following (see the example forwarder configuration after this list):

  • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
  • audit. Logs generated by the node audit system (auditd) and the audit logs from the Kubernetes API server and the OpenShift API server.
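
To illustrate how these log types map onto the Log Forwarding API, below is a minimal ClusterLogForwarder sketch for OpenShift 4.6. The output name and FluentD service URL are assumptions for illustration; point the url at wherever your standalone FluentD instance (deployed in the next section) is listening:

cat <<EOF | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                 # the API requires this exact name
  namespace: openshift-logging   # and this exact namespace
spec:
  outputs:
    - name: vrli-fluentd         # arbitrary output name (assumption)
      type: fluentdForward
      url: 'tcp://fluentd.vmware-fluentd.svc:24224'  # hypothetical in-cluster FluentD service
  pipelines:
    - name: all-logs
      inputRefs:                 # the three log types described above
        - application
        - infrastructure
        - audit
      outputRefs:
        - vrli-fluentd
EOF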

Prerequisites
  • VMware vRealize Log Insight Cloud instance setup with Administrator access.
  • Red Hat OpenShift Cluster deployed
    • with outbound connectivity for containers
  • Download this GitHub repository for the configuration files
git clone https://github.com/saintdle/openshift_vrealize_loginsight_cloud.git

Deploy the standalone FluentD instance to forward logs to vRealize Log Insight Cloud

As per the above diagram, we’ll create a namespace and deploy a FluentD service inside the cluster. This will handle the logs forwarded from the OpenShift Logging instance and send them on to the Log Insight Cloud instance.
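
As a rough sketch of that step (the project name and the manifest filename here are hypothetical; the actual manifests are in the repository cloned above):

# Create a project/namespace for the standalone FluentD instance (name is an assumption)
oc new-project vmware-fluentd

# Apply the FluentD deployment manifest from the cloned repository (filename is hypothetical)
oc apply -f <fluentd-manifest-from-repo>.yaml -n vmware-fluentd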

Creating a vRealize Log Insight Cloud API Key

First, we will create an API key for sending data to our cloud instance.

  1. Expand Configuration on the left-hand navigation pane
  2. Select “API Keys”
  3. Click the “New API Key” button

[Screenshot: vRealize Log Insight Cloud – API Keys]

Give your API key a suitable name and click Create.
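
Before configuring FluentD, you can sanity-check the new key by posting a test message to the vRealize Log Insight Cloud ingestion API. A hedged sketch: the endpoint below is the ingestion URL at the time of writing, so verify it against the vRealize Log Insight Cloud documentation for your region:

# Send a single test log message using the new API key (key value is a placeholder)
curl -X POST 'https://data.mgmt.cloud.vmware.com/le-mans/v1/streams/ingestion-pipeline-stream' \
  -H 'Authorization: Bearer <your-API-key>' \
  -H 'Content-Type: application/json' \
  -d '[{"text": "Test message from the OpenShift log forwarding setup"}]'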

[Screenshot: vRealize Log Insight Cloud – New API Key]

Continue reading How to configure Red Hat OpenShift to forward logs to VMware vRealize Log Insight Cloud

TAM Lab 079 – Using vRA Cloud to operate a Multi-Cloud Environment

Katherine Skilling (LinkedIn, Twitter) and I recorded a session for TAM Lab and VMUG Events.

In the below session, we cover how to use vRealize Automation Cloud (or vRA 8.x for on-prem) to operate your Multi-Cloud environment.

So what does this actually mean?

We cover how to use vRealize Automation to deploy and consume your public cloud provider of choice. This is a demo-heavy recording, and we cover the following:

  • vRealize Automation Core Components
  • Image Mapping
  • Flavour Mapping
  • Machine Flavours
  • Using the Cloud Template canvas in design and code view (Blueprints)
  • Deploying your first virtual machine
  • Deploying your virtual machine to different public cloud providers
  • Creating inputs for configuration
  • Advanced configuration with CloudConfig
  • Basic Troubleshooting

Regards