Tag Archives: vRA


vRealize Automation – Property groups deep dive

I had the pleasure of working with a customer who wanted to use property groups within vRealize Automation to provide various configuration data to drive their deployments. They had some questions about how to use property groups that went beyond the documentation, so I thought the answers would also make a good blog post.

What are property groups?

Property groups were introduced in vRealize Automation 7.x and were sorely missed when the 8.x version shipped. They were reintroduced in vRA 8.3.

When you have several properties that always appear together in your Cloud Templates, you can create a property group to store them together.

This allows you to re-use the same properties over and over again across Cloud Templates from a central construct, rather than replicating the same information directly in each cloud template.

The benefit of doing this is that if you update any information, it is pushed to all linked cloud templates. Potentially this could be a disadvantage as well, so once you use these in production, be mindful of any updates to in-use groups.

There are two types of property groups. When creating a property group, you select the type. You do not have the ability to change or convert the type once the group has been created.

  • Inputs

    Input property groups gather and apply a consistent set of properties at user request time. Input property groups can include entries for the user to add or select, or they might include read-only values that are needed by the design.

    Properties for the user to edit or select can be readable or encrypted. Read-only properties appear on the request form but can’t be edited. If you want read-only values to remain totally hidden, use a constant property group instead.

  • Constants

    Constant property groups silently apply known properties. In effect, constant property groups are invisible metadata. They provide values to your Cloud Assembly designs in a way that prevents a requesting user from reading those values or even knowing that they’re present. Examples might include license keys or domain account credentials.

Getting Started with an Input Property Group

Ultimately, an Input Property Group works in exactly the same way as Inputs you specify on the cloud template directly. The group option simply provides a way to centralise these inputs for use between cloud templates.
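To make this concrete, below is a minimal sketch of how a cloud template might consume property groups once they exist; the group names (demoInputs, demoConstants) and property names (hostname, licenseKey) are hypothetical. An input property group is pulled onto the request form via $ref, while a constant property group is bound silently in resource properties using the propgroup syntax.

formatVersion: 1
inputs:
  demoInputs:
    type: object
    # Pulls the whole input property group onto the request form (hypothetical group name)
    $ref: /ref/property-groups/demoInputs
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      # Value captured from the input property group at request time
      hostname: '${input.demoInputs.hostname}'
      # Value silently applied from a constant property group, hidden from the requester
      licenseKey: '${propgroup.demoConstants.licenseKey}'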

Create an Input Property Group
  • Click on Design Tab
  • Click Property Groups from the left-hand navigation pane
  • Select New Property Group

vRA - Cloud Assembly - Design - Property Groups - New Property Group

Continue reading vRealize Automation – Property groups deep dive


Deploying vSphere with Tanzu Clusters using vRA and Cluster Plans

In this blog post I am covering the native vRealize Automation feature that allows you to deploy Tanzu clusters via the Tanzu Kubernetes Grid Service in vCenter.

If you have been following my posts in 2021, I wrote a blog post and presented at VMworld on how to deploy Tanzu clusters using vRA Code Stream, due to the lack of native integration at the time.

Now you have either option!

Pre-requisites
  • A working vSphere with Tanzu setup
  • Create a Supervisor Namespace that we can deploy clusters into
    • vRA requires an existing Supervisor namespace to deploy clusters into, even though vRA can separately create Supervisor namespaces via a Cloud Template
    • This namespace needs a VM Class and Storage Policy to be attached.
Configuring the vRealize Automation Infrastructure settings
  • Create a Cloud Account for your vCenter
    • Ensure that once the data collection has run, the account shows “Available for Kubernetes deployment”

vRA - Cloud Account - vCenter - Available for Kubernetes deployment

  • Create a new Kubernetes Zone
    • Select your Cloud Account linked vCenter
    • Provide a name
  • Select the Provisioning tab

vRA - New Kubernetes Zone

  • Click to add compute to the zone.
    • For the Tanzu cluster deployment, this needs to target existing Supervisor namespaces (as in the pre-reqs).
    • Add the existing Supervisor namespaces you are interested in using

You can add the Supervisor cluster itself, but it won’t be used in this feature walk-through. If you have multiple Supervisor namespaces, I recommend tagging them in this view so that you can use the tags as constraints in the Cloud Template, as sketched below.
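As a quick illustration of the constraint approach, a Cloud Template resource could steer placement to tagged Supervisor namespaces along these lines; the tag env:dev and the property values are hypothetical, so verify against your own zone tags.

resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: tkc-01
      plan: small-v120
      # Matches the tag applied to the Supervisor namespace in the Kubernetes Zone
      constraints:
        - tag: 'env:dev'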

vRA - New Kubernetes Zone - Provisioning

  • Click Projects, select your chosen project
  • Select the Kubernetes Provisioning tab
  • Add your Kubernetes Zone

vRA - Projects - Kubernetes Provisioning

  • Click Cluster Plans under Configure heading
  • Create a new Cluster Plan with your specification
    • Select the vCenter Account it will apply to
    • Provide a name (a-z,A-Z,0-9,-)
      • The UI will allow you to input characters that are not supported by the Cloud Template’s name property
    • Select your Kubernetes version to deploy
    • Set the number of nodes for the control plane and workers
    • The Machine Class (VM Class on the Supervisor Namespace) for each Node Type
      • You will be able to select from the VM classes added at the Supervisor namespace in vCenter
    • Select the Storage Class for each Node Type
    • Select the default PVC storage class in the cluster
    • Enable/disable including all Supervisor Namespace storage classes
    • Choose either the default networking deployment for clusters or provide your own specification.

vRA - Cluster Plans

Regarding the network settings, in the image below I have highlighted how the Tanzu Kubernetes Grid Service v1alpha1 API YAML format for a cluster creation request maps to the settings expected by vRA.

You can find further examples here.

vRA - Cluster Plans - Network Settings
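For reference, a minimal v1alpha1 TanzuKubernetesCluster manifest is sketched below; the names, classes, and CIDR ranges are illustrative placeholders. The settings.network block is the part that maps to the vRA cluster plan network settings shown above.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: vra-test                # cluster name
  namespace: supervisor-ns-01   # existing Supervisor namespace
spec:
  distribution:
    version: v1.20              # Kubernetes version selected in the Cluster Plan
  topology:
    controlPlane:
      count: 1
      class: best-effort-small  # VM Class attached to the Supervisor namespace
      storageClass: vsan-default-storage-policy
    workers:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
  settings:
    network:                    # maps to the vRA cluster plan network settings
      cni:
        name: antrea
      services:
        cidrBlocks: ["10.96.0.0/12"]
      pods:
        cidrBlocks: ["192.168.0.0/16"]
      serviceDomain: cluster.local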

  • Create a Cloud Template
  • Place the “K8s Cluster” resource object on your canvas
  • Configure the properties as needed
    • The workers property will override the workers number in the Cluster Plan

Below is the example I used.

formatVersion: 1
inputs:
  cluster_name:
    type: string
    title: Cluster Name
    default: vra-test
  workers:
    type: integer
    title: No. of Workers
    default: 1
resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: '${input.cluster_name}'
      # Must match the Cluster Plan name exactly (a-z, A-Z, 0-9, -)
      plan: small-v120
      # Overrides the worker count defined in the Cluster Plan
      workers: '${input.workers}'

Once you are happy, deploy the Cloud Template.

vRA - Cloud Template - type cloud.tanzu.cluster

Successful Deployment of a Tanzu Cluster

In the below screenshots, you can see the completed deployment.

  • Clicking on the Resource Object, you can download a Kubeconfig file to access the cluster.

vRA - Deployment completed - Resource Object details

  • Viewing the History Tab will show you details about the creation.

vRA - Deployment completed

  • Clicking on the Request Details tab will show you the user inputs taken at the time of deployment.

vRA - Deployment completed - Request Details

If you look at the “Infrastructure” tab and the configuration under Kubernetes, you will see that this cluster has been onboarded into vRA. You can then run further cloud templates against this cluster, for example to create Kubernetes namespaces within it, as sketched below.
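A namespace-creation template might look roughly like the following; treat the Cloud.K8S.Namespace resource type and its properties as assumptions to verify against the schema in your vRA version.

formatVersion: 1
inputs:
  ns_name:
    type: string
    title: Namespace Name
resources:
  Cloud_K8S_Namespace_1:
    type: Cloud.K8S.Namespace
    properties:
      # Name of the namespace to create on the onboarded cluster
      name: '${input.ns_name}'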

vRA - Infrastructure - Kubernetes - Cluster

Finally, within my vCenter you can see the deployed cluster in the Supervisor Namespace I specified in the Kubernetes Zone.

vRA - Deployed Tanzu cluster in vCenter Supervisor Namespace

Regards

Dean Lewis


Using vRealize Automation Cloud Template to execute a Code Stream Pipeline

Looking at the latest vRA Cloud Template Schema, I saw something interesting in the definitions.

The ability to have a resource type of “codestream.execution”. This allows you to execute a Code Stream pipeline from within a cloud template. Once deployed, the Deployment will feature a resource object, to which you can also link a custom day 2 action!

vRA Cloud Assembly - Deployment with codestream.execution resource object

This opens a lot of future possibilities of creative ways to extend your automation.
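As a minimal sketch of what such a resource might look like in a cloud template (the property names pipelineId and inputs are assumptions drawn from the schema, and the values are placeholders, so check the schema definition for the authoritative list):

formatVersion: 1
resources:
  CodeStream_Execution_1:
    type: codestream.execution
    properties:
      # ID of the Code Stream pipeline to execute (placeholder value)
      pipelineId: '<pipeline-id>'
      # Input parameters passed to the pipeline execution
      inputs:
        environment: dev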

The full schema and a worked example follow in the complete post.

Continue reading Using vRealize Automation Cloud Template to execute a Code Stream Pipeline


How to create vRO Dynamic Types for vRA Custom Resources

This follow-on blog post, diving into how we created the vRA integration with DMS, comes from Katherine Skilling, who kindly offered to guest spot and provide the additional content regarding the work we have done internally. You can find her details at the end of this blog post.

In an earlier blog post Dean covered the use of vRA (vRealize Automation) Custom Resources in the context of using vRA to create Databases in DMS (Data Management for VMware Tanzu) and how to create custom day 2 actions. In this post, we will look at how we created the Dynamic Types in vRO (vRealize Orchestrator) to facilitate the creation of the custom resources in vRA.

Introduction – What are Dynamic Types?

Dynamic Types are custom objects in vRO, created to extend the schema so that you can create and manage 3rd-party objects within vRO. Each type has a definition that contains the object’s properties, as well as its relationships within the overall namespace, which is the top level in the Dynamic Types hierarchy.

As we started working on our use case, we looked at a tool (published on VMware Code) that would generate Dynamic Types based on an API Swagger specification. The problem we encountered was that the tool was quite complex, and our API Swagger for Data Management for VMware Tanzu (DMS) didn’t seem to quite fit the expected format.

This meant we ended up with lots of orphaned entries after running the tool and hoping it would do all the heavy lifting for us. After spending some time investigating and troubleshooting, it became clear we didn’t understand Dynamic Types, and how they are created, well enough to resolve all our issues. Instead, we decided to scale back our plans and focus on just the database object we really needed initially. We could use it as a learning exercise, and then revisit the generator tool later once we had a more solid foundation.

To get a better understanding of how Dynamic Types work, I recommend this blog from Mike Bombard. He walks through a theoretical example using a zoo and animals to show you how objects are related, as well as how to create the required workflows. I like this particular blog because you don’t need to consider how you are going to get values from a 3rd-party system, so it’s easy to follow along and see the places where you would make an external connection to retrieve data. It also helped me to understand the relationships between objects without getting mixed up in the properties of technical objects.

After reading Mike’s post, I realised that we only had a single object for our use case: a database within DMS. We didn’t have any other objects related to it; it didn’t have a parent object, and it didn’t have any children. So, when we created a Dynamic Type, we would need to generate a placeholder object to act as the parent for the database. I chose to name this databaseFolder for simplicity, and because I’m a visual person who likes to organise things inside folders. These databaseFolders do not exist in DMS; they are just objects I created within vRO, with no real purpose or properties other than that the DMS databases are their children in the Dynamic Types inventory.

Stub Workflows

When you define a new Dynamic Type, you must create or associate four workflows to it, which are known as stubs:

  1. Find By Id
  2. Find All
  3. Has Children in Relation
  4. Find Relation

These workflows tell vRO how it can find the Dynamic Type and what its position is in the hierarchy in relation to other types. You can create one set of workflows to share across all Types or you can create a set of workflows per Type. For our use case we only needed one set of the workflows, so we created our code such that the workflows would be dedicated to just the database and databaseFolder objects.

It’s important to know that vRO runs these workflows automatically when administrators browse the vRO inventory, or when Custom Resources are used within vRA. They are not started manually by administrators; if you do test them by running them manually, you may struggle to populate the input values correctly.

I’ll give you a bit of background to the different workflows next.

Find By Id Workflow

This workflow is automatically run whenever vRO needs to locate a particular instance of a Dynamic Type, such as when used with Custom Resources in vRA for self-service provisioning.  The workflow follows these high-level steps:

  1. Check whether the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type for a databaseFolder.
  3. If it is a database, perform the activity required to locate the object using its id value; in our case, this is a REST API call to DMS to retrieve a single database.
  4. Perform any activities required to create the object and set its properties; in our case, this is extracting the database details from the REST API call results, as DMS returns values such as the id and the name in a JSON object formatted as a string.

Find All Workflow

This workflow is automatically run whenever vRO needs to locate all instances of a Dynamic Type, such as when the Dynamic Types namespace is browsed in the vRO client, where it is called as a sub-workflow of the Find Relation workflow. The workflow follows these high-level steps:

  1. Check whether the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type for a databaseFolder.
  3. If it is a database, perform the activity required to locate all instances of the object; in our case, this is a REST API call to DMS to retrieve all databases.
  4. Perform any activities required to loop through each of the instances found, creating an object and setting its properties for each one. In our case, this is extracting all of the database details from the REST API call results, looping through each one, and extracting values such as the id and the name.

Has Children in Relation Workflow

This workflow is used by vRO to determine whether it should expand the hierarchy when an object is selected in the Dynamic Types namespace within the vRO client. If an object has child objects, these are displayed underneath it in the namespace, in the same way as the databases are displayed under the databaseFolders. The workflow follows these high-level steps:

  1. Check whether the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder, call the Find Relation workflow to retrieve all related objects.
  3. If it is any other object type, set the result to false to indicate that there are no child objects related to the selected object to display in the hierarchy.

Find Relation Workflow

This workflow is used by vRO when an object is selected in the Dynamic Types namespace within the vRO client. If an object has child objects, these are displayed underneath it in the namespace, in the same way as the databases are displayed under the databaseFolders. vRO automatically runs this workflow each time the Dynamic Types namespace is browsed by an administrator to find any related objects it needs to display. The workflow follows these high-level steps:

  1. Check whether the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder and the relationName value is “namespace-children” (a special value assigned to the very top level in the selected namespace), create a new Dynamic Type for a databaseFolder.
  3. If it is a database, set the type to DMS.database and then call the Find All workflow to retrieve the Dynamic Type objects for all database instances.

Creating a Dynamic Type

Defining a Namespace

The first stage in creating our Dynamic Type is to define a new Namespace.

Continue reading How to create vRO Dynamic Types for vRA Custom Resources


Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here, and alter per your needs, including which versions of OpenShift you want to deploy.
  • SSH key for bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the openshift-install command-line tool (a sketch follows this list)
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register cluster as Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
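Before building the pipeline, it helps to see the shape of the file the first stage generates. Below is a rough sketch of a minimal install-config.yaml for a vSphere IPI deployment; every value is a placeholder to substitute with your environment’s details, and field availability can vary between OpenShift versions.

apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp1                      # becomes {cluster_id} in the DNS records above
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: openshift@vsphere.local
    password: '<password>'
    datacenter: Datacenter
    defaultDatastore: Datastore
    cluster: Cluster
    network: VM Network
    apiVIP: 192.168.10.5          # the address api.{cluster_id}.{base_domain} resolves to
    ingressVIP: 192.168.10.6      # the address *.apps.{cluster_id}.{base_domain} resolves to
pullSecret: '<pull-secret-from-red-hat-cloud-account>'
sshKey: '<public-key-for-bastion-host-access>'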
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream