Tanzu Blog Logo Header

Data Management for VMware Tanzu – Getting Started

This blog post will cover deploying the infrastructure and components for Data Management for VMware Tanzu.

My second blog post will cover using this infrastructure for Self-Service Database-as-a-Service.

What is Data Management for VMware Tanzu?

Data Management for VMware Tanzu (DMS) is a newly released solution from VMware (July 2021), providing a data-as-a-service toolkit for on-demand provisioning and automated management of MySQL and PostgreSQL databases on vSphere platforms.

DMS is accessible via both a graphical UI and a REST API, meeting the consumption needs of administrators and developers alike.

DMS provides the ability to create and manage data services through a centralized platform in a self-service fashion, with the following features:

  • Simplified management for admins: acting as a database fleet management tool, it presents a view of the organization’s database instances running on multi-cloud infrastructure.
  • Database users can consume self-service capabilities to create new database instances, or operate on existing instances safely and securely, without requiring infrastructure or database expertise.
  • DMS also provides full automation for provisioning data service instances, backups, security patches, and periodic updates of the data service engine.

Data Management for Tanzu Provider Home Page

Data Management for Tanzu Provider Create Database Page

Understanding the components

DMS is made up of the following architectural components:

  • Provider – this is the core appliance you will deploy, which offers the central UI and API through which all users interact with the data services and functions. It acts as the control plane for the other components.
  • Agent – these appliances are deployed to extend the control plane into the various vSphere environments, providing a point of presence for provisioning and management operations of the services deployed.
  • Service – these are Photon OS appliances which host the deployed instance of the data service (database). They communicate with the Agent that deployed them via a private API. DMS currently supports the deployment of MySQL and PostgreSQL.
  • Template Repo – VMware publishes a set of Data Management for VMware Tanzu database templates on Tanzu Network. The Provider polls Tanzu Network periodically for new templates. There is also a method to handle air-gapped environments.

S3-compatible storage is required for several items, such as a location to store the templates, database configurations, and database backups.
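
As a rough illustration of this prerequisite only (the bucket name, region, and choice of the AWS CLI are my own assumptions — any S3-compatible store such as MinIO or ECS would work just as well), creating and checking a dedicated bucket might look like this:

# Hypothetical bucket name and region; substitute your own S3-compatible endpoint and bucket
aws s3 mb s3://dms-provider-repo --region eu-west-2

# Verify the bucket exists and is reachable before pointing DMS at it
aws s3 ls s3://dms-provider-repo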

Full deployment models for the components can be found here.

Data Management for Tanzu Architecture

Understanding Organisations and User Access

DMS implements the concept of Organisations to provide a logical grouping of users. There are two types:

  • Provider Org – A type of organization to which one or more Provider Administrator users belong.
    • Only one Provider Org can exist in a single DMS installation.
    • It is automatically created during the deployment of the Provider appliance.
    • The Provider Org name is the company name specified at deployment.
  • Agent Org – A type of organization with one or more Organization Administrator or Organization User members.
    • These orgs are created via the DMS UI/API at any time once the Provider appliance has been deployed.

DMS pre-defines these three user roles:

  • Provider Administrator
    • This is the single Provider Role in the installation
    • Among other tasks, users in this role can import additional Provider Administrator users, create organizations, and create and import organization users
  • Organization Administrator
  • Organization User

The Provider Administrator user will assign a role to each DMS user that they create or import in an organization.

A user assigned the Organization Administrator role can manage all services in the organization to which they belong. A user assigned the Organization User role can manage only the services that they provision.

More detailed information on the User roles and responsibilities can be found here.

Getting Started

Now first and foremost, I’ll point you towards the official documentation to use as a reference to review alongside this blog post.

Prerequisites

There are always several things to get sorted before you dive right in! The official requirements are detailed here; I’m going to call out some of the more finicky pieces you need to be aware of.

Continue reading Data Management for VMware Tanzu – Getting Started

Kasten K10 Header

Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift

In this blog post, I’m going to cover setting up multi-cluster support for Kasten K10 when you’ve installed the software on multiple Kubernetes clusters.

One of the K10 clusters you have deployed will become the primary. You will use this cluster and its dashboard interface to access the multi-cluster UI.

  • The primary cluster defines policies and other configuration centrally. Centrally defined policies and configuration can then be distributed to designated clusters to be enacted.

Additional clusters are then added and are known as secondaries.

  • The secondary clusters receive policies and other configuration from the primary cluster. Once policies are distributed to a secondary, the local K10 installation enacts the policy. This ensures that the policy will continue to be enforced, even if disconnected from the primary.

Pre-Requisites:

  • Authentication
    • Token Authentication must be used
  • Network
    • Secondary K10’s ingress must be accessible by the primary
    • Secondary API Server must be accessible by the primary
  • Run the tool on a bastion host that has connectivity, using kubectl, to all of the clusters you want to bring together.

Download the K10MultiCluster tool:

  • Set the tool as executable
  • Move the tool to your /usr/local/bin/ folder

curl -LJO https://github.com/kastenhq/external-tools/releases/download/4.0.9/k10multicluster_4.0.9_linux_amd64


chmod +x k10multicluster_4.0.9_linux_amd64

sudo mv k10multicluster_4.0.9_linux_amd64 /usr/local/bin/k10multicluster

Download K10Multicluster
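
As a quick, optional sanity check (a minimal sketch using only standard shell built-ins), confirm the renamed binary is now on your PATH:

# Confirm the binary is on the PATH after the move
command -v k10multicluster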

Next, let’s list the available clusters we can connect to from our node:

kubectl config get-contexts

kubectl config get-contexts
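
The context names returned here are what we will reference when pairing the clusters. If you need to work against a single cluster directly first, switching is standard kubectl (the context name below is just a placeholder):

# Show the context currently in use
kubectl config current-context

# Switch to a specific cluster's context (placeholder name)
kubectl config use-context my-tanzu-cluster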

Setup Primary Cluster

Now we are ready to set up our primary cluster by running the following command.

Continue reading Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift

Terraform Header

Terraform vSphere Provider – Error while creating vApp properties

The Issue

When using Terraform to deploy a virtual machine from an OVA, I kept hitting the below error:

Error: error while creating vapp properties config unsupported vApp properties in vapp.properties: [vm.vmname vami.gateway.DMS_agent_VA vami.netmask0.DMS_Agent_VA vami.DNS.DMS_Agent_VA vami.searchpath.DMS_Agent_VA vami.ip0.DMS_Agent_VA vami.domain.DMS_Agent_VA]

  on Agent_appliance/main.tf line 20, in resource "vsphere_virtual_machine" "vm":
  20: resource "vsphere_virtual_machine" "vm"

Pretty simple, right? In my Terraform file I was trying to use OVF properties that were not valid. Getting the debug/trace logs from Terraform also just showed the same error output.
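
For reference, this is how you can gather those Terraform debug/trace logs — these are standard Terraform environment variables, with the log file path being an arbitrary choice of mine:

# Enable verbose logging and write it to a file
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform-trace.log

terraform apply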

However, running ovftool confirmed my properties were correct (shortened output example below).
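
If you want to repeat the check, running ovftool in probe mode (source only, no target) lists the OVA’s networks and OVF properties — the file path below is just a placeholder:

# Probe the OVA and list its networks and OVF properties
ovftool ./dms-agent-appliance.ova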

ClassId:     vami
  Key:         searchpath
  InstanceId   DMS_Agent_VA
  Category:    Networking Properties
  Label:       Domain Search Path
  Type:        string
  Description: The domain search path (comma or space separated domain names) 
               for this VM. Leave blank if DHCP is desired.

Also, looking in the vCenter UI at the vApp properties of the OVA once deployed, I could again validate that the properties I was using were correct.

vCenter - Virtual Machine vApp Options Properties

Finally, here is an example of the vsphere_virtual_machine resource I was trying to deploy that was causing the issue:

resource "vsphere_virtual_machine" "vm" {
  name             = "${var.agent_vm_name}"
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.folder}"
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  datacenter_id    = "${data.vsphere_datacenter.dc.id}"
  host_system_id = "${data.vsphere_host.host.id}"

  dynamic "ovf_deploy" {
  for_each = "${var.agent_local_ovf_path}" != "" || "${var.agent_remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.agent_local_ovf_path}" != "" ? "${var.agent_local_ovf_path}" : null
  remote_ovf_url = "${var.agent_remote_ovf_path}" != "" ? "${var.agent_remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "Control Plane Network" = data.vsphere_network.network.id
    }
   }
  }

  vapp {
    properties = {
      "vm.vmname" =  "${var.agent_vm_name}",
      "varoot_password" = "${var.varoot_password}",
      "vaadmin_password" = "${var.va_admin_password}",
      "guestinfo.cis.appliance.net.ntp" = "${var.ntp}",
      "vami.gateway.DMS_agent_VA" = "${var.controlplanenetworkgateway}",
      "vami.DNS.DMS_Agent_VA" = "${var.dns}",
      "vami.domain.DMS_Agent_VA" = "${var.domain}",
      "vami.searchpath.DMS_Agent_VA" = "${var.searchpath}",
      "vami.ip0.DMS_Agent_VA" = "${var.agentip0}",
      "vami.netmask0.DMS_Agent_VA" = "${var.agentip0netmask}"
    }
  }
}

The Cause

Yep, you guessed it, there was something wrong with the properties I was trying to configure.

The Fix

Continue reading Terraform vSphere Provider – Error while creating vApp properties

vRealize Operations Header

vRealize Operations – Error: Failed to Test adapter instance – Finding Adapter Logs

The Issue

I had installed the “VMware vRealize Operations Management Pack for Horizon 1.2” into my vROps instance and tried to connect my Horizon instance, instantly hitting the helpful error message of:

Failed to Test adapter instance. Reason - Unknown Error. Please contact support team

Obviously, I’d probably done something wrong, but I didn’t want to just call support for help!

vROPs Horizon - failed to test adapter instance

Finding the Cause

So off to find the logs.

  • In the Administration Tab, scroll down on the left-hand navigation pane to Support
  • Select Logs
  • Expand the following folders: /{IP}-MASTER > COLLECTOR > adapters
    • Select the correct folder for your adapter type
  • Click the log file, then click the blue Go button to load the log

vROPs Adapter Logs

The Fix

As we can see, in this environment the issue was quite simple: an illegal character. It turns out there was a space in my FQDN.

I corrected this, and success!

The main reason for this quick blog post was to show how to find the logs for the adapters, so you can troubleshoot things yourself first.

Regards

Dean Lewis

 

Resolving VMC – Objects with non-compliant storage policies in SDDC

The Issue

Overnight I received an email from the VMware Cloud Services platform regarding a VMC environment I am an administrator of. The opening paragraph was as below:

Please be advised that you have VMs and or objects in your VMware Cloud on AWS SDDC that do not comply with the VMC SLA i.e. they have non-compliant storage policies.

Well, this doesn’t sound good. The email trailed off with a list of affected virtual machines and snapshots.

The Cause

This message flags that best practices are not being followed in the VMC environment. VMC implements Managed Storage Policy Profiles (MSPP), which integrate vSphere VM storage policy management into SDDC management, ensuring that any workload not assigned a custom storage policy always complies with the service’s SLA requirements.

In short, if your VMs are part of the managed storage profile, they are covered by the SLAs provided by VMC, and if there’s an outage, you are eligible for credits.

You do have the ability to create your own custom policies as you require, but any VMs configured with these policies are not subject to the SLA.

The email is simply a pointer to say “hey we recommend you move those objects to a storage policy covered by the SLA”.

Below we have the custom policy (in this case just the default vSAN policy) and then the provided managed policy, which takes the format “VMC Workload Storage Policy – <Cluster-Name>”.

VSAN default policy

VSAN managed policy

The Fix

If you want to resolve this, then here is a quick PowerCLI script to do that for you.

$custompolicy = "<Custom Storage Policy Name>"
$managedpolicy = "<Managed Cluster Policy Name>"

# To target the VMs' home configuration
$vms = Get-VM * | Get-SpbmEntityConfiguration | Where-Object {$_.StoragePolicy -like $custompolicy}

foreach ($vm in $vms) { $vm | Set-SpbmEntityConfiguration -StoragePolicy $managedpolicy }

# To target the hard disks of the VMs
$hds = Get-VM -Location Compute-ResourcePool | Get-HardDisk | Get-SpbmEntityConfiguration | Where-Object {$_.StoragePolicy -like $custompolicy}

foreach ($hd in $hds) { $hd | Set-SpbmEntityConfiguration -StoragePolicy $managedpolicy }

Below we can see that the vSAN environment is now resyncing the data to meet the new storage policy requirements.

VSAN Resync Objects

If you’ve any questions or concerns about the changes to the storage policies for your production workloads, then as always, contact VMware Support to discuss first.

Regards

Dean Lewis