Tag Archives: Terraform


Data Management for VMware Tanzu – Getting Started

This blog post will cover deploying the infrastructure and components for Data Management for VMware Tanzu.

My second blog post will cover using this infrastructure for Self-Service Database-as-a-Service.

What is Data Management for VMware Tanzu?

Data Management for VMware Tanzu (DMS) is a newly released solution from VMware (July 2021), providing a data-as-a-service toolkit for on-demand provisioning & automated management of MySQL and PostgreSQL databases on vSphere platforms.

DMS is accessible via both a graphical UI and a REST API, meeting the consumption needs of both administrators and developers.

DMS provides the ability to create and manage data services through a centralized platform in a self-service fashion, with the following features:

  • Simplified management for admins: DMS acts as a database fleet management tool, presenting a view of the organization’s database instances running on multi-cloud infrastructure.
  • Database users have the ability to consume self-service capabilities to create new database instances, or to operate on existing instances safely and securely, without requiring infrastructure or database expertise.
  • DMS also provides full automation for provisioning data service instances, backups, security patches, and periodic updates of the data service engine.

Data Management for Tanzu Provider Home Page

Data Management for Tanzu Provider Create Database Page

Understanding the components

DMS is made up of the following architectural components:

  • Provider – this is the core appliance you will deploy, which offers the central UI and API for all users to interact with the Data services and functions. It acts as the control plane to the other components.
  • Agent – These appliances are deployed to extend the control plane into the various vSphere environments, providing a point of presence for provisioning and management operations of the deployed Services.
  • Service – These are Photon OS appliances which host the deployed instance of the data service (database). They communicate with the Agent that deployed them via a private API. DMS currently supports the deployment of MySQL and PostgreSQL.
  • Template Repo – A set of Data Management for VMware Tanzu database templates published on the Tanzu Network. The Provider polls the Tanzu Network periodically for new templates; there is also a method to handle air-gapped environments.

S3-compatible storage is required for several items, such as storing the templates, database configurations, and database backups.

Full deployment models for the components can be found here.

Data Management for Tanzu Architecture

Understanding Organisations and User Access

DMS implements the concept of Organisations to provide a logical grouping of users. There are two types:

  • Provider Org – A type of organization to which one or more Provider Administrator users belong.
    • Only one Provider Org can exist in a single DMS installation.
    • It is created automatically during the deployment of the Provider Appliance.
    • The Provider Org name is the company name specified at deployment.
  • Agent Org – A type of organization with one or more Organization Administrator or Organization User members.
    • These orgs are created via the DMS UI/API once the Provider appliance has been deployed and can be created at any time.

DMS pre-defines these three user roles:

  • Provider Administrator
    • This is the single Provider Role in the installation
    • Among other tasks, users in this role can import additional Provider Administrator users, create organizations, and create and import organization users
  • Organization Administrator
  • Organization User

The Provider Administrator user will assign a role to each DMS user that they create or import in an organization.

A user that is assigned the Organization Administrator role can manage all services in the organization to which they belong. A user assigned the Organization User role can manage only the services that they provision.

More detailed information on the User roles and responsibilities can be found here.

Getting Started

Now first and foremost, I’ll point you towards the official documentation to use as a reference to review alongside this blog post.

Prerequisites

There are always several things to get sorted before you dive right in! The official requirements are detailed here; I’m going to call out some of the more finicky pieces you need to be aware of.

Continue reading Data Management for VMware Tanzu – Getting Started


Terraform vSphere Provider – Error while creating vApp properties

The Issue

When using Terraform to deploy a virtual machine from an OVA, I kept hitting the below error:

Error: error while creating vapp properties config unsupported vApp properties in vapp.properties: [vm.vmname vami.gateway.DMS_agent_VA vami.netmask0.DMS_Agent_VA vami.DNS.DMS_Agent_VA vami.searchpath.DMS_Agent_VA vami.ip0.DMS_Agent_VA vami.domain.DMS_Agent_VA]

  on Agent_appliance/main.tf line 20, in resource "vsphere_virtual_machine" "vm":
  20: resource "vsphere_virtual_machine" "vm"

Pretty simple, right? In my Terraform file, I was trying to use OVF properties that were not valid. Pulling the debug/trace logs from Terraform just showed the same error output.

However, running ovftool confirmed my properties were correct (shortened output example below).

ClassId:     vami
  Key:         searchpath
  InstanceId:  DMS_Agent_VA
  Category:    Networking Properties
  Label:       Domain Search Path
  Type:        string
  Description: The domain search path (comma or space separated domain names) 
               for this VM. Leave blank if DHCP is desired.

But also in the vCenter UI, looking at the vApp Properties of the OVA once deployed, I could again validate that the properties I was using were correct. (Note how the ovftool fields combine into the key format Terraform expects, ClassId.Key.InstanceId, e.g. vami.searchpath.DMS_Agent_VA.)

vCenter - Virtual Machine vApp Options Properties

Finally, here is an example of the vsphere_virtual_machine resource I was trying to deploy that was causing the issue:

resource "vsphere_virtual_machine" "vm" {
  name             = "${var.agent_vm_name}"
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.folder}"
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  datacenter_id    = "${data.vsphere_datacenter.dc.id}"
  host_system_id = "${data.vsphere_host.host.id}"

  dynamic "ovf_deploy" {
  for_each = "${var.agent_local_ovf_path}" != "" || "${var.agent_remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.agent_local_ovf_path}" != "" ? "${var.agent_local_ovf_path}" : null
  remote_ovf_url = "${var.agent_remote_ovf_path}" != "" ? "${var.agent_remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "Control Plane Network" = data.vsphere_network.network.id
    }
   }
  }

  vapp {
    properties = {
      "vm.vmname" =  "${var.agent_vm_name}",
      "varoot_password" = "${var.varoot_password}",
      "vaadmin_password" = "${var.va_admin_password}",
      "guestinfo.cis.appliance.net.ntp" = "${var.ntp}",
      "vami.gateway.DMS_agent_VA" = "${var.controlplanenetworkgateway}",
      "vami.DNS.DMS_Agent_VA" = "${var.dns}",
      "vami.domain.DMS_Agent_VA" = "${var.domain}",
      "vami.searchpath.DMS_Agent_VA" = "${var.searchpath}",
      "vami.ip0.DMS_Agent_VA" = "${var.agentip0}",
      "vami.netmask0.DMS_Agent_VA" = "${var.agentip0netmask}"
    }
  }
}
The Cause

Yep, you guessed it, there was something wrong with the properties I was trying to configure.

The Fix

Continue reading Terraform vSphere Provider – Error while creating vApp properties


How to Escape Strings in Terraform with a Dollar Sign ($)

The Issue

When using Terraform to perform an action where the input contains a dollar sign ($), you can end up with an error such as the below.

│ Error: Invalid character
│ 
│  on main.tf line 104, in resource "vra_blueprint" "this":
│ 104:      network: '${resource.Cloud_Network_1.id}'
│ 
│ This character is not used within the language.

This happened to me when I was using the Terraform vRA Provider to create Cloud Templates (blueprints) in my vRA environment. The vRA cloud templates use a syntax such as ${input.something}, which clashes with the syntax used by Terraform to identify inputs.

The Cause

Terraform implements an interpolation syntax. Interpolations are wrapped in ${}, such as ${var.foo}.

The interpolation syntax is powerful and allows you to reference variables, attributes of resources, call functions, etc.
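For instance, here is a minimal illustration (the variable name is made up) showing a variable reference and a function call inside an interpolation:

variable "vm_name" {
  default = "fah-1"
}

output "example" {
  # Renders as: VM FAH-1 is ready
  value = "VM ${upper(var.vm_name)} is ready"
}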

The Fix

You can escape interpolation with double dollar signs: $${foo} will be rendered as a literal ${foo}.
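As a rough sketch of how this looks in practice (the resource and property names are trimmed down from the error above; a real vra_blueprint also needs a project_id), the doubled dollar sign keeps Terraform from parsing the vRA Cloud Template syntax:

resource "vra_blueprint" "this" {
  name    = "example-blueprint" # illustrative name
  content = <<-EOT
    resources:
      Cloud_Network_1:
        type: Cloud.Network
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          networks:
            # $${...} is rendered as a literal ${...}, so vRA receives it intact
            - network: '$${resource.Cloud_Network_1.id}'
  EOT
}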

Terraform Interpolation Syntax example

Regards

Dean Lewis


Deploying the VMware Appliance for Folding@Home using Terraform

To simplify the deployment of Folding@Home appliances to vSphere environments, I have written a set of Terraform configuration files (scripts).

You will need two packages downloaded to your jump host: Git and Terraform.

You will also need to either download the VMware Folding@Home appliance OVA locally, or host it at a remote location.

Use Git to clone my Terraform repository, which contains the folder called Deploy-FAH.

git clone https://github.com/saintdle/Terraform.git

Move into the “Deploy-FAH” folder, and edit the terraform.tfvars file as needed;

cd Deploy-FAH
vi terraform.tfvars

Below is an example;

// Name of the vSphere server. E.g "vcsa.vmware.local"
vsphere_server = "vcenter.veducate.local"

// User on the vSphere server. E.g "administrator@vsphere.local"
vsphere_user = "administrator@vsphere.local"

// Password of the user on the vSphere server. E.g "password"
vsphere_password = "Password1234!"

// Name of the vSphere data center. E.g "datacenter"
vsphere_datacenter = "Datacenter"

// Name of the vSphere cluster. E.g "Cluster"
vsphere_cluster = "Cluster"

// Name or IP of the vSphere host in the cluster to deploy your VM to. E.g "esxi-01" or "192.168.1.20"
vsphere_host = "10.10.2.4"

// Name of the vSphere data store to use for the VMs. E.g "VSAN"
vsphere_datastore = "Datastore"

// Network to connect virtual machine
vm_network = "Freale_NW1"

// Number of instances to deploy
instance_count = 2

// VM Machine Name (an index will be appended, i.e. FAH-1, FAH-2)
vm_name = "dean-test"

// Number of CPUs to set on deployed Virtual Machines
num_cpu = 2

// Memory to set on deployed Virtual Machines (in MB)
memory = 4096

// Name of vSphere Resource Pool to be created. E.g "FAH-VMs"
vsphere_resource_pool = "dean-test"

// Name of VM folder to be created. E.g "FAH-VMs"
vsphere_vm_folder = "dean-test"

// Location of OVA file if using a local location - if using a remote location, leave this as ""
local_ovf_path = "/home/dean/Deploy-FAH-3/VMware-Appliance-FaH_1.0.4.ova"

// Location of OVA file if using a remote location - if using a local location, leave this as ""
remote_ovf_path = ""

// Enable SSH in FAH Appliance (True or False)
ssh_enable = "True"

// FAH appliance root password
root_password = "VMware1!"

// FAH Username you wish to be associated with in the statistics tables
fah_user = ""

// FAH Team you wish to be associated with in the statistics tables
fah_team = "52737"

// FAH Passkey to verify your user in the statistical tables (this is optional from FAH project)
fah_passkey = "unique_id"
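
Each value above maps to a variable declared within the Deploy-FAH folder. As a rough sketch of what a few of those declarations look like (names are taken from the tfvars above; the descriptions and defaults are my own):

variable "vsphere_server" {
  description = "Name or IP of the vSphere server"
}

variable "instance_count" {
  description = "Number of FAH appliances to deploy"
  default     = 1
}

variable "local_ovf_path" {
  description = "Local path to the FAH OVA; leave empty when deploying from a remote URL"
  default     = ""
}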

That’s it, no more changes needed, it’s as simple as running the following to deploy your appliances;

# This will download the terraform providers as needed
terraform init

# This will show you the planned changes and make sure they are possible
terraform plan

# This will apply the configuration and run the deployment
terraform apply

You can use the latest version of Terraform, which is 0.13.5 as of the publishing of this post.

Quick notes

This Terraform configuration uses some advanced configuration in the "FAH-Appliance" folder, under the main.tf file. Here it reads the "remote_ovf_path" variable and acts based on whether or not it is empty. If a value is set, it runs the command to deploy from a remote location. If the variable is empty, it looks to "local_ovf_path" and processes this to deploy an OVF/OVA from the local location.

  dynamic "ovf_deploy" {
  for_each = "${var.local_ovf_path}" != "" || "${var.remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.local_ovf_path}" != "" ? "${var.local_ovf_path}" : null
  remote_ovf_url = "${var.remote_ovf_path}" != "" ? "${var.remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "VM Network" = data.vsphere_network.network.id
    }
   }
  }

Thanks to Grant Orchard from HashiCorp for helping me with this part of the config.

Interested in where you can take this further? Check out this post from Robert Jenson, using VMware CodeStream for an Infrastructure-as-Code deployment, with GitHub as a source repository and Terraform for the deployment.

Regards


How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

Install OpenShift 4.x on vSphere 6.x/7.x

The following procedure is intended to create VMs from an OVA template, booting with static IPs when the DHCP server cannot reserve the IP addresses.

The Problem

OCP requires that all DNS configuration be in place. VMware requires that DHCP assigns the correct IPs to the VMs. Since many real installations require coordination with different teams in an organization, we often don’t have control of the DNS, DHCP, or load balancer configurations.

The CoreOS documentation explains how to create configurations using Ignition files. I created a Python script to inject the network configuration into the Ignition files created by the openshift-install program.

Reference Architecture

For this guide, we are going to deploy 3 master nodes (control-plane) and 2 worker nodes (compute). This guide uses RHEL CoreOS 4.3 as the virtual machine image, deploying Red Hat OCP 4.3, in line with Red Hat’s N-1 support policy.

We will use a centralised Linux server (Ubuntu) that will perform the following functions;

  • Load Balancer – HAProxy
  • Web Server – Apache2
  • Terraform automation host – version 0.11.14
    • The deployment will be semi-automated using Terraform, so that we can easily build configuration files used by the CoreOS VM’s that have Static IP settings.
    • Using a later version of Terraform will cause failures (see the version-pin sketch after this list).
  • Client Tools for OpenShift deployment
    • OC
    • Kubectl
    • Openshift-install
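
Because this deployment depends on Terraform 0.11.x behaviour, it may be worth pinning the version in the configuration. A minimal sketch (placed in any .tf file):

# Guard against accidentally running a newer, incompatible Terraform release
terraform {
  required_version = "= 0.11.14"
}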

DNS will be provided by a Windows Server.

The installation will use a Bootstrap server to bring the cluster online, which will be removed at the end of the build process.

OpenShift Deployment Arch Diagram

Deployment Steps

In this guide we will deploy our environment in the following order;

  • Configure DNS
  • Import Red Hat Core OS image into vCenter
  • Deploy Ubuntu Host
    • Configure Apache
    • Configure HAProxy
    • Install Client-Tools
    • Install Terraform
  • Build OpenShift Cluster configuration
  • Configuring the Terraform deployment
  • Running the Terraform deployment

DNS

OpenShift uses a “clusterName.BaseDomain” format.

For example: I want to call my OpenShift cluster “demo”, and my DNS domain is “simon.local”; the full format used by OpenShift is therefore “demo.simon.local”.
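
To make the naming concrete, here is an illustrative Terraform snippet (the api/*.apps endpoints follow the standard OpenShift 4.x DNS conventions; the names match the example above):

locals {
  cluster_name   = "demo"
  base_domain    = "simon.local"
  cluster_domain = "${local.cluster_name}.${local.base_domain}" # demo.simon.local

  # Cluster-level endpoints resolved via the load balancer
  api_fqdn  = "api.${local.cluster_domain}"    # api.demo.simon.local
  apps_fqdn = "*.apps.${local.cluster_domain}" # wildcard for application routes
}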

Below is a table plan of the IP addresses you will use to build the environment.

The last three addresses are cluster level resources that are available on each control-plane node, accessible via the load balancer.

To configure the DNS records in Windows, you can use the script and CSV file here.

Deploy OpenShift VMware Static IP PowerShell Configure DNS Records

In the below screenshot, the script has created the “demo” domain folder and entered my records. It is important that you have PTR records set up for everything apart from the “etcd-X” records.

Screenshots: DNS records and reverse (PTR) DNS records created in Windows DNS

Import Red Hat CoreOS Image into vCenter

Continue reading How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform