Tag Archives: vRA


How to create vRO Dynamic Types for vRA Custom Resources

This follow-on blog post, diving into how we created the vRA integration with DMS, comes from Katherine Skilling, who kindly offered to guest spot and provide additional content on the work we have done internally. You can find her details at the end of this blog post.

In an earlier blog post, Dean covered the use of vRA (vRealize Automation) Custom Resources in the context of using vRA to create databases in DMS (Data Management for VMware Tanzu), and how to create custom day-2 actions. In this post, we will look at how we created the Dynamic Types in vRO (vRealize Orchestrator) to facilitate the creation of the custom resources in vRA.

Introduction – What are Dynamic Types?

Dynamic Types are custom objects in vRO, created to extend the schema so that you can create and manage 3rd party objects within vRO. Each type has a definition that contains the object’s properties, as well as its relationships within the overall namespace, which is the top level in the Dynamic Types hierarchy.

As we started working on our use case, we looked at a tool (published on VMware Code) that would generate Dynamic Types based on an API Swagger specification. The problem we encountered was that the tool was quite complex, and the API Swagger for Data Management for VMware Tanzu (DMS) didn’t quite fit the expected format.

This meant we ended up with lots of orphaned entries after running the tool and hoping it would do all the heavy lifting for us. After spending some time investigating and troubleshooting, it became clear we didn’t understand Dynamic Types, and how they are created, well enough to resolve all our issues. Instead, we decided to scale back our plans and focus on just the database object we really needed initially. We could use it as a learning exercise, and then revisit the generator tool later once we had a more solid foundation.

To get a better understanding of how Dynamic Types work, I recommend this blog from Mike Bombard. He walks through a theoretical example using a zoo and animals to show you how objects are related, as well as how to create the required workflows. I like this particular blog because you don’t need to consider how you are going to get values from a 3rd party system, so it’s easy to follow along and see the places where you would be making an external connection to retrieve data. It also helped me to understand the relationships between objects without getting mixed up in the properties provided by technical objects.

After reading Mike’s post, I realised that we only had a single object for our use case: a database within DMS. We didn’t have any other objects related to it; it didn’t have a parent object, and it didn’t have any children. So, when we created a Dynamic Type, we would need to generate a placeholder object to act as the parent for the database. I chose to name this databaseFolder for simplicity, and because I’m a visual person and like to organise things inside folders. These databaseFolders do not exist in DMS; they are just objects I created within vRO, and they have no real purpose or properties other than that the DMS databases are their children in the Dynamic Types inventory.

Stub Workflows

When you define a new Dynamic Type, you must create or associate four workflows with it, which are known as stubs:

  1. Find By Id
  2. Find All
  3. Has Children in Relation
  4. Find Relation

These workflows tell vRO how it can find the Dynamic Type and what its position is in the hierarchy in relation to other types. You can create one set of workflows to share across all Types or you can create a set of workflows per Type. For our use case we only needed one set of the workflows, so we created our code such that the workflows would be dedicated to just the database and databaseFolder objects.

It’s important to know that vRO will run these workflows automatically when administrators browse the vRO inventory, or when Custom Resources are used within vRA. They are not started manually by administrators; if you do test them by running them manually, you may struggle to populate the input values correctly.

I’ll give you a bit of background on each of these workflows next.

Find By Id Workflow

This workflow is automatically run whenever vRO needs to locate a particular instance of a Dynamic Type, such as when used with Custom Resources in vRA for self-service provisioning. The workflow follows these high-level steps:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type object for the databaseFolder.
  3. If it is a database, perform the activity required to locate the object using its id value; in our case, this is a REST API call to DMS to retrieve a single database.
  4. Perform any activities required to create the object and set its properties; in our case, this means extracting the database details from the REST API response, as DMS returns values such as the id and the name in a JSON object formatted as a string.
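To make this concrete, below is a minimal scripting sketch of a Find By Id stub, assuming the standard stub inputs (type and id, both strings) and a resultObj output of type DynamicTypes:DynamicObject. The com.example.dms module and its getDatabaseById action are hypothetical placeholders for your own REST call to DMS.

// Find By Id stub - minimal sketch (vRO JavaScript)
// Inputs: type (string), id (string); Output: resultObj (DynamicTypes:DynamicObject)
var resultObj = null;

if (type == "DMS.databaseFolder") {
    // Folders are placeholders, so rebuild the object directly from its id
    resultObj = DynamicTypesManager.makeObject("DMS", "databaseFolder", id, id);
} else if (type == "DMS.database") {
    // Look up a single database in DMS via its REST API (hypothetical helper action)
    var db = System.getModule("com.example.dms").getDatabaseById(id);
    if (db != null) {
        resultObj = DynamicTypesManager.makeObject("DMS", "database", db.id, db.name);
    }
}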

Find All Workflow

This workflow is automatically run whenever vRO needs to locate all instances of a Dynamic Type, such as when the Dynamic Types namespace is browsed in the vRO client, where it is called as a sub-workflow of the Find Relation workflow. The workflow follows these high-level steps:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database).
  2. If it is a databaseFolder, create a new Dynamic Type object for the databaseFolder.
  3. If it is a database, perform the activity required to locate all instances of the object; in our case, this is a REST API call to DMS to retrieve all databases.
  4. Loop through each of the instances found, and for each one create an object and set its properties; in our case, this means extracting all of the database details from the REST API response, looping through each one, and extracting values such as the id and the name.
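The Find All stub follows the same pattern, with a loop over the parsed REST results. Another minimal sketch, again using a hypothetical getAllDatabases helper action in place of the real DMS REST call:

// Find All stub - minimal sketch (vRO JavaScript)
// Inputs: type (string), filter (string); Output: resultObjs (array of DynamicTypes:DynamicObject)
var resultObjs = [];

if (type == "DMS.databaseFolder") {
    // A single placeholder folder acts as the parent of all databases
    resultObjs.push(DynamicTypesManager.makeObject("DMS", "databaseFolder", "Databases", "Databases"));
} else if (type == "DMS.database") {
    var databases = System.getModule("com.example.dms").getAllDatabases(); // hypothetical REST wrapper
    for (var i = 0; i < databases.length; i++) {
        resultObjs.push(DynamicTypesManager.makeObject("DMS", "database", databases[i].id, databases[i].name));
    }
}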

Has Children in Relation Workflow

This workflow is used by vRO to determine whether it should expand the hierarchy when an object is selected in the Dynamic Types namespace within the vRO client. If an object has child objects, these are displayed underneath it in the namespace, in the same way as the databases are displayed under the databaseFolders. The workflow follows these high-level steps:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder, call the Find Relation workflow to retrieve all related objects.
  3. If it is any other object type, set the result to false to indicate that there are no child objects to display in the hierarchy for the selected object.
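As a minimal sketch, assuming the standard stub inputs (parentType, parentId and relationName, all strings) and a boolean result output; the findRelation helper here is a hypothetical wrapper around the Find Relation logic:

// Has Children in Relation stub - minimal sketch (vRO JavaScript)
// Inputs: parentType, parentId, relationName (strings); Output: result (boolean)
var result = false;

if (parentType == "DMS.databaseFolder") {
    // Ask the Find Relation logic whether anything sits beneath this folder
    var children = System.getModule("com.example.dms").findRelation(parentType, parentId, relationName);
    result = (children != null && children.length > 0);
}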

Find Relation Workflow

This workflow is used by vRO when an object is selected in the Dynamic Types namespace within the vRO client. If an object has child objects, these are displayed underneath it in the namespace, in the same way as the databases are displayed under the databaseFolders. vRO automatically runs this workflow each time the Dynamic Types namespace is browsed by an administrator, to find any related objects it needs to display. The workflow follows these high-level steps:

  1. Check if the object being processed is the parent object (databaseFolder) or the child object (database) by checking its parentType and relationName values, which are provided as workflow inputs.
  2. If it is a databaseFolder and the relationName value is “namespace-children” (a special value assigned to the very top level of the selected namespace), create a new Dynamic Type object for a databaseFolder.
  3. If it is a database, set the type to DMS.database and then call the Find All workflow to retrieve the Dynamic Type objects for all database instances.
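A final minimal sketch, with the same caveats as the previous ones (the findAllDatabases helper is a hypothetical wrapper around the Find All logic):

// Find Relation stub - minimal sketch (vRO JavaScript)
// Inputs: parentType, parentId, relationName (strings); Output: resultObjs (array of DynamicTypes:DynamicObject)
var resultObjs = [];

if (relationName == "namespace-children") {
    // Top of the DMS namespace: return the placeholder folder
    resultObjs.push(DynamicTypesManager.makeObject("DMS", "databaseFolder", "Databases", "Databases"));
} else if (parentType == "DMS.databaseFolder") {
    // Databases sit under the folder: reuse the Find All logic for the DMS.database type
    resultObjs = System.getModule("com.example.dms").findAllDatabases();
}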

Creating a Dynamic Type

Defining a Namespace

The first stage in creating our Dynamic Type is to define a new Namespace.

Continue reading How to create vRO Dynamic Types for vRA Custom Resources


Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
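As a point of reference, the core install step the pipeline’s container ultimately runs looks something like the sketch below (the directory name is illustrative, and the install-config.yaml built by the pipeline must already exist in it):

# IPI install - the pipeline generates install-config.yaml into the directory first
openshift-install create cluster --dir=./ocp-cluster --log-level=info

# Once complete, the kubeconfig and kubeadmin credentials sit under the same directory
export KUBECONFIG=./ocp-cluster/auth/kubeconfig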

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used to run the pipeline tasks
    • You can find the Dockerfile here, and alter it to your needs, including which versions of OpenShift you want to deploy.
  • SSH Key for a bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the “openshift-install” command line tool
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register cluster as Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream


Passing JSON into vRA Code Stream CI Task – MalformedJsonException

The Issue

Whilst working with a vRA Code Stream CI Task, I needed to build a YAML file in my container, which I would use to provide the values to the CLI tool I was running. Within this YAML file, there is a section of JSON input (yep, I know, it’s a Red Hat thing!).

I wanted to pass in this JSON section as a vRA variable, as it contains my authentication details for the Red Hat Cloud website.

So my vRA variable would be as below:

{"auths":{"cloud.openshift.com":{"auth":"token-key","email":"codestream@veducate.co.uk"},"registry.connect.redhat.com":{"auth":"token-key","email":"codestream@veducate.co.uk"},"registry.redhat.io":{"auth":"token-key","email":"codestream@veducate.co.uk"}}}

So my CI Task looked something like this:

cat << EOF > install-config.yaml
apiVersion: v1
baseDomain: simon.local
compute: 
- hyperthreading: Enabled 
  name: worker
  replicas: 1
  platform:
    vsphere: 
      cpus: 4
      coresPerSocket: 1
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
pullSecret: '${var.pullSecret}'
EOF

When running the Pipeline, I kept hitting an issue where the task would fail with a message similar to the below.

com.google.gson.stream.MalformedJsonException: Unterminated array at line 1 column 895 path $[39]
The Cause

This, I believe, is because the tasks are passed to the Docker host running the container via the Docker API in JSON format. The payload then contains my outer wrapping of YAML and, within that, more JSON, so the system gets confused by the various bits of JSON.

The Fix

To get around this issue, I encoded my JSON data in Base64 and saved it to the variable. Then, in my CI task, I added an additional line before creating the file, which sets an environment variable containing the decoded value of the Base64 provided by the vRA variable.

Below is my new CI Task code.

export pullSecret=$(echo ${var.pullSecret} | base64 -d)

cat << EOF > install-config.yaml
apiVersion: v1
baseDomain: simon.local
compute: 
- hyperthreading: Enabled 
  name: worker
  replicas: 1
  platform:
    vsphere: 
      cpus: 4
      coresPerSocket: 1
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
pullSecret: '$pullSecret'
EOF
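For reference, one way to generate the value stored in the vRA variable is to Base64-encode the pull secret file onto a single line first (GNU coreutils shown; on macOS, omit the -w0 flag):

base64 -w0 pull-secret.json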


Regards


Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk-through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside the Supervisor Namespace using CI Tasks and the kubectl command line tool
    • Tasks to create a service account inside the Tanzu Guest Cluster (a rough sketch follows this list)
    • Tasks to create Kubernetes endpoints for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
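As a rough sketch of the service account step above (all names are illustrative; on clusters prior to Kubernetes 1.24, a token secret is created automatically for the service account):

# Create a service account and bind it to cluster-admin for vRA to use
kubectl create serviceaccount vra-codestream -n kube-system
kubectl create clusterrolebinding vra-codestream --clusterrole=cluster-admin --serviceaccount=kube-system:vra-codestream

# Read the token back for the vRA Kubernetes endpoint registration
SECRET=$(kubectl -n kube-system get serviceaccount vra-codestream -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d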
Pre-Requisites

In my Lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

I have placed the various bits of code in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC manager under Infrastructure Tab > Integrations

Continue reading Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters


Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native Azure AKS clusters, although TMC can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing additional functions to make these AKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (a condensed CLI sketch follows this list)
    • Register the AKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the Docker host.
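As a condensed sketch of the CLI calls behind those steps (resource names are illustrative, and the Tanzu Mission Control CLI flags may vary between versions):

# Create the AKS cluster and fetch its kubeconfig
az aks create --resource-group vra-demo --name aks-cluster-01 --node-count 3 --generate-ssh-keys
az aks get-credentials --resource-group vra-demo --name aks-cluster-01

# Generate the TMC attach manifest, then apply it to the new cluster
tmc cluster attach --name aks-cluster-01 --cluster-group default
kubectl apply -f k8s-attach-manifest.yaml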
Pre-Requisites
Creating a Code Stream Pipeline to deploy an Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control