Tag Archives: Deploy


Data Management for VMware Tanzu – Self-Service DBaaS

In my first blog post, I covered the prerequisites and how to deploy the components for the Data Management for VMware Tanzu platform.

In this blog post we will cover using this new infrastructure for Self-Service Database-as-a-Service deployment and configuration.

Configure Database Templates and Instance Plans
  • Log into your Provider Appliance as the Provider administrator account.

The last actions to configure are publishing the database templates and configuring Instance Plans (if you have set up your Organisation to use instance plans).

  • Click Templates from the left-hand navigation pane
  • Select your Template, which has been synced from the Tanzu Network (or from your repository in an air-gapped configuration).
  • Publish the Template

Data Management for Tanzu - Publish Template

  • Click Instance Plans from the left-hand navigation pane
  • Click “Add New Plan”
  • Configure as necessary

Data Management for Tanzu Instance Plans

Configure additional Org Users

Continue reading Data Management for VMware Tanzu – Self-Service DBaaS


Data Management for VMware Tanzu – Getting Started

This blog post will cover deploying the infrastructure and components for Data Management for VMware Tanzu.

My second blog post will cover using this infrastructure for Self-Service Database-as-a-Service.

What is Data Management for VMware Tanzu?

Data Management for VMware Tanzu (DMS) is a newly released solution from VMware (July 2021), providing a data-as-a-service toolkit for on-demand provisioning and automated management of MySQL and PostgreSQL databases on vSphere platforms.

DMS is accessible through both a graphical UI and a REST API, meeting the consumption needs of administrators and developers alike.

DMS provides the ability to create and manage data services through a centralized platform in a self-service fashion, with the following features:

  • Simplified management for admins, acting as a database fleet management tool that presents a view of the organization’s database instances running on multi-cloud infrastructure.
  • Database users can self-service the creation of new database instances, or operate on existing instances safely and securely, without requiring infrastructure or database expertise.
  • DMS also provides full automation for provisioning data service instances, backups, security patches, and periodic updates of the data service engine.
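
Everything available in the UI screens shown below is also reachable over the REST API. As a rough illustration of that consumption model, the call below is a sketch only: the endpoint path and payload fields are hypothetical placeholders, not the documented DMS API schema, so refer to the official API reference for the real routes.

```bash
# Hypothetical sketch: the path and JSON fields are placeholders, not the real DMS API.
curl -k -X POST "https://<provider-appliance-fqdn>/<api-base-path>/databases" \
  -H "Authorization: Bearer ${DMS_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "demo-db", "engine": "postgresql", "plan": "small"}'
```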

Data Management for Tanzu Provider Home Page

Data Management for Tanzu Provider Create Database Page

Understanding the components

DMS is made up of the following architectural components:

  • Provider – this is the core appliance you will deploy, which offers the central UI and API for all users to interact with the Data services and functions. It acts as the control plane to the other components.
  • Agent – These appliances are deployed to extend the control plane into the various vSphere environments, providing a point of presence for provisioning and management operations of the services deployed.
  • Service – These are Photon OS appliances which host the deployed instance of the data service (database). They communicate with the Agent that deployed them via a private API. DMS currently supports the deployment of MySQL and PostgreSQL.
  • Template Repo – VMware publishes a set of Data Management for VMware Tanzu database templates on the Tanzu Network. The Provider polls the Tanzu Network periodically for new templates; there is also a method to handle air-gapped environments.

S3 storage is required for several purposes, such as storing the templates, database configurations, and database backups.
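
If you need to stand up that S3 storage for a lab, the sketch below shows one way to create the buckets, either on native AWS S3 or on an S3-compatible store such as MinIO. The bucket names and the split into separate buckets are my own choices for illustration, not a DMS requirement.

```bash
# Example only: bucket names and layout are arbitrary choices for a lab.
# Native AWS S3 via the AWS CLI:
aws s3 mb s3://dms-templates
aws s3 mb s3://dms-backups

# Or an S3-compatible endpoint such as MinIO, via the MinIO client (mc):
mc alias set dms https://minio.lab.local <ACCESS_KEY> <SECRET_KEY>
mc mb dms/dms-templates
mc mb dms/dms-backups
```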

Full deployment models for the components can be found here.

Data Management for Tanzu Architecture

Understanding Organisations and User Access

DMS implements the concept of Organisations to provide a logical grouping of users. There are two types:

  • Provider Org – A type of organization to which one or more Provider Administrator users belong.
    • Only one Provider Org can exist in a single DMS installation.
    • This is automatically created during the deployment of the Provider Appliance
    • The Provider Org name is the company name specified at deployment.
  • Agent Org – A type of organization with one or more Organization Administrator or Organization User members.
    • These orgs are created via the DMS UI/API once the Provider appliance has been deployed and can be created at any time.

DMS pre-defines these three user roles:

  • Provider Administrator
    • This is the single Provider Role in the installation
    • Among other tasks, users in this role can import additional Provider Administrator users, create organizations, and create and import organization users
  • Organization Administrator
  • Organization User

The Provider Administrator user will assign a role to each DMS user that they create or import in an organization.

A user that is assigned the Organization Administrator role can manage all services in the organization to which they belong. A user assigned the Organization User role can manage only the services that they provision.

More detailed information on the User roles and responsibilities can be found here.

Getting Started

First and foremost, I’ll point you towards the official documentation as a reference to review alongside this blog post.

Prerequisites

There are always several things to get sorted before you dive right in! The official requirements are detailed here; I’m going to call out some of the more finicky pieces you need to be aware of.

Continue reading Data Management for VMware Tanzu – Getting Started


Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
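
To give a feel for how little the IPI route asks of you, the whole install reduces to pointing openshift-install at a directory containing an install-config.yaml (the pipeline generates that file later on). A minimal manual run, with directory names of your own choosing, looks roughly like this:

```bash
# Assumes openshift-install is on your PATH and install-config.yaml sits in ./ocp-install.
# The installer consumes (and removes) install-config.yaml, so keep a copy elsewhere.
mkdir -p ocp-install && cp install-config.yaml ocp-install/
openshift-install create cluster --dir=ocp-install --log-level=info

# Once it completes, the kubeconfig and kubeadmin password are written under ocp-install/auth/
export KUBECONFIG=$PWD/ocp-install/auth/kubeconfig
oc get nodes
```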

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here, and alter it to your needs, including which versions of OpenShift you want to deploy.
  • SSH key for bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the openshift-install command-line tool (a trimmed example of this file is sketched after this list)
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register cluster as Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
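
As promised in the first step above, here is a trimmed sketch of the kind of install-config.yaml the pipeline builds for vSphere IPI. Every value is a placeholder, and the platform fields shift slightly between OpenShift releases, so treat it as the shape of the file rather than a drop-in copy.

```bash
# Sketch only: all values are placeholders; field names vary between OpenShift versions.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: lab.local
metadata:
  name: ocp-demo                 # becomes {cluster_id} in the DNS records above
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  vsphere:
    vcenter: vcenter.lab.local
    username: openshift@vsphere.local
    password: '<vcenter-password>'
    datacenter: Datacenter
    defaultDatastore: Datastore01
    cluster: Cluster01
    network: VM-Network
    apiVIP: 192.168.10.5
    ingressVIP: 192.168.10.6
pullSecret: '<your Red Hat pull secret JSON>'
sshKey: '<your bastion SSH public key>'
EOF
```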
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream


Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters

This walk-through will detail the technical configurations for using vRA Code Stream to deploy vSphere with Tanzu supervisor namespaces and guest clusters.

Requirement

For a recent customer proof-of-concept, we wanted to show the full automation capabilities and combine this with the consumption of vSphere with Tanzu.

The end goal was to use Cloud Assembly and Code Stream to cover several automation tasks, and then offer them as self-service capability via a catalog item for an end-user to consume.

High Level Steps

To achieve our requirements, we’ll be configuring the following:

  • Cloud Assembly
    • VCF SDDC Manager Integration
    • Kubernetes Cloud Zone – Tanzu Supervisor Cluster
    • Cloud Template to deploy a new Tanzu Supervisor Namespace
  • Code Stream
    • Tasks to provision a new Supervisor Namespace using the Cloud Assembly Template
    • Tasks to provision a new Tanzu Guest Cluster inside of the Supervisor namespace using CI Tasks and the kubectl command line tool
    • Tasks to create a service account inside of the Tanzu Guest Cluster (a kubectl sketch of this step follows the list)
    • Tasks to create Kubernetes endpoint for the new Tanzu Guest Cluster in both Cloud Assembly and Code Stream
  • Service Broker
    • Catalog Item to allow End-Users to provision a brand new Tanzu Guest Cluster in its own Supervisor Namespace
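
The service-account tasks referenced above are plain kubectl work once the guest cluster is up. A minimal sketch follows, with the account name, namespace, and the broad cluster-admin binding all being assumptions you should tighten for your own environment.

```bash
# Illustrative only: names and the cluster-admin binding are assumptions; scope the RBAC down as needed.
kubectl create serviceaccount vra-sa -n kube-system
kubectl create clusterrolebinding vra-sa-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:vra-sa

# Retrieve the token that Cloud Assembly / Code Stream need to register the endpoint
# (on Kubernetes 1.24+ you would use 'kubectl create token vra-sa -n kube-system' instead).
SECRET=$(kubectl -n kube-system get serviceaccount vra-sa -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
```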
Pre-Requisites

In my Lab environment I have the following deployed:

  • VMware Cloud Foundation 4.2
    • With Workload Management enabled (vSphere with Tanzu)
  • vRealize Automation 8.3
  • A Docker host to be used by Code Stream

For the various bits of code, I have placed them in my GitHub repository here.

Configuring Cloud Assembly to deploy Tanzu supervisor namespaces

This configuration is detailed in this blog post; I’ll just cover the high-level configuration below.

  • Configure an integration for SDDC Manager under Infrastructure Tab > Integrations

Continue reading Walk through – Using vRA to deploy vSphere with Tanzu Namespaces & Guest Clusters


Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Azure AKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support to provision native Azure AKS clusters, although TMC can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to the ability to deploy the clusters using vRA Code Stream and provide additional functions to make these AKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an Azure AKS Cluster (a minimal CLI sketch follows this list)
    • Register the AKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the AKS cluster in Tanzu Mission Control
    • Export the SSH keys for the AKS cluster to the Docker host.
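
The cluster-creation task itself boils down to a couple of Azure CLI calls. A minimal sketch, assuming the resource group already exists and using placeholder names and sizing:

```bash
# Sketch only: resource group, cluster name, and node count are placeholders.
az aks create \
  --resource-group rg-aks-demo \
  --name aks-demo-01 \
  --node-count 3 \
  --generate-ssh-keys

# Fetch the kubeconfig so the later tasks (vRA endpoint registration, TMC attach) can reach the cluster.
az aks get-credentials --resource-group rg-aks-demo --name aks-demo-01
```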
Pre-Requisites
Creating a Code Stream Pipeline to deploy an Azure AKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Using vRA to deploy Azure AKS Clusters and register with Tanzu Mission Control