

Deploying the VMware Appliance for Folding@Home using Terraform

To simplify the deployment of Folding@Home appliances to vSphere environments, I have written a set of Terraform configuration files.

You will need two packages downloaded to your jump host: Terraform and Git.

You will also need the VMware Folding@Home Appliance OVA, either downloaded locally or hosted at a remote location.

Use Git to clone my Terraform repository, which contains a folder called Deploy-FAH.

git clone https://github.com/saintdle/Terraform.git

Move into the “Deploy-FAH” folder, and edit the terraform.tfvars file as needed;

cd Deploy-FAH
vi terraform.tfvars

Below is an example;

// Name of the vSphere server. E.g "vcsa.vmware.local"
vsphere_server = "vcenter.veducate.local"

// User on the vSphere server. E.g "administrator@vsphere.local"
vsphere_user = "administrator@vsphere.local"

// Password of the user on the vSphere server. E.g "password"
vsphere_password = "Password1234!"

// Name of the vSphere data center. E.g "datacenter"
vsphere_datacenter = "Datacenter"

// Name of the vSphere cluster. E.g "Cluster"
vsphere_cluster = "Cluster"

// Name or IP of the vSphere host in the cluster to deploy your VM to. E.g "esxi-01" or "192.168.1.20"
vsphere_host = "10.10.2.4"

// Name of the vSphere data store to use for the VMs. E.g "VSAN"
vsphere_datastore = "Datastore"

// Network to connect virtual machine
vm_network = "Freale_NW1"

// Number of instances to deploy
instance_count = 2

// VM machine name (an index will be appended, e.g. FAH-1, FAH-2)
vm_name = "dean-test"

// Number of CPUs to set on deployed Virtual Machines
num_cpu = 2

// Memory to set on deployed Virtual Machines (in MB)
memory = 4096

// Name of vSphere Resource Pool to be created. E.g "FAH-VMs"
vsphere_resource_pool = "dean-test"

// Name of VM folder to be created. E.g "FAH-VMs"
vsphere_vm_folder = "dean-test"

// Location of the OVA file if deploying from a local path - if using a remote location, leave this blank ("")
local_ovf_path = "/home/dean/Deploy-FAH-3/VMware-Appliance-FaH_1.0.4.ova"

// URL of the OVA file if deploying from a remote location - if using a local path, leave this blank ("")
remote_ovf_path = ""

// Enable SSH in FAH Appliance (True or False)
ssh_enable = "True"

// FAH appliance root password
root_password = "VMware1!"

// FAH Username you wish to be associated with in the statistics tables
fah_user = ""

// FAH Team you wish to be associated with in the statistics tables
fah_team = "52737"

// FAH Passkey to verify your user in the statistical tables (this is optional from FAH project)
fah_passkey = "unique_id"
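
Each of the values above maps to a variable declared in the configuration's variables.tf. As a rough illustration, a couple of those declarations might look like the sketch below; the descriptions and defaults shown are assumptions for illustration, not the repo's actual values.

variable "instance_count" {
  description = "Number of FAH appliances to deploy"
  default     = 1
}

variable "remote_ovf_path" {
  description = "URL of the FAH appliance OVA when deploying from a remote location"
  default     = ""
}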

That’s it, no more changes are needed. Deploying your appliances is as simple as running the following;

#This will download the terraform providers as needed

terraform init

#This will show you the planned changes and make sure they are possible

terraform plan

#This will apply the configuration and run the deployment

terraform apply

You can use the latest version of Terraform for this deployment; version 0.13.5 is the latest as of the publishing of this post.
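
If you want to make sure anyone running the configuration uses a compatible release, Terraform 0.13 also lets you pin the Terraform version and the provider source in a terraform block. The snippet below is a minimal sketch of that approach, not something taken from the repo; the provider version constraint is an assumption.

terraform {
  // Require Terraform 0.13 or newer for this configuration
  required_version = ">= 0.13"

  // Pull the vSphere provider from the public registry (version constraint is illustrative)
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = ">= 1.24"
    }
  }
}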

Quick notes

This Terraform configuration uses some advanced configuration in the “FAH-Appliance” folder, under the main.tf file. Here it reads the “remote_ovf_path” variable and acts based on whether or not it is empty. If the variable is set, it deploys the appliance from the remote location. If the variable is empty, it falls back to “local_ovf_path” and deploys the OVF/OVA from the local location.

  dynamic "ovf_deploy" {
  for_each = "${var.local_ovf_path}" != "" || "${var.remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.local_ovf_path}" != "" ? "${var.local_ovf_path}" : null
  remote_ovf_url = "${var.remote_ovf_path}" != "" ? "${var.remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "VM Network" = data.vsphere_network.network.id
    }
   }
  }

Thanks to Grant Orchard from HashiCorp for helping me with this part of the config.
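
As a side note, settings like ssh_enable, root_password, fah_user, fah_team and fah_passkey from terraform.tfvars are the kind of values an appliance typically picks up as OVF/vApp properties on the virtual machine. The fragment below is a rough sketch of how that mapping can look on a vsphere_virtual_machine resource; the property key names are assumptions for illustration, so check the appliance's OVF environment for the real ones.

resource "vsphere_virtual_machine" "fah_appliance" {
  // ... name, resource pool, datastore, network and ovf_deploy settings omitted ...

  // Guest OVF/vApp properties - the key names below are illustrative assumptions,
  // not copied from the Deploy-FAH configuration
  vapp {
    properties = {
      "root_password" = var.root_password
      "enable_ssh"    = var.ssh_enable
      "fah_user"      = var.fah_user
      "fah_team"      = var.fah_team
      "fah_passkey"   = var.fah_passkey
    }
  }
}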

Interested in where you can take this further? Check out this post from Robert Jenson, using VMware CodeStream for an Infrastructure as Code deployment, with GitHub as the source repository and Terraform for the deployment.

Regards


How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

Install OpenShift 4.x on vSphere 6.x/7.x

The following procedure is intended to create VMs from an OVA template, booting with static IPs when the DHCP server cannot reserve the IP addresses.

The Problem

OCP requires that all DNS configurations be in place. VMware requires that DHCP assigns the correct IPs to the VMs. Since many real installations require coordination between different teams in an organization, we often don’t have control of the DNS, DHCP or load balancer configurations.

The CoreOS documentation explains how to create configurations using Ignition files. I created a Python script to inject the network configuration into the Ignition files created by the openshift-install program.
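
For context, the usual way a Terraform configuration hands these Ignition files to RHCOS guests on vSphere is through guestinfo properties on the virtual machine. The fragment below is a sketch of that mechanism only; the resource name and the .ign file name are assumptions for illustration and are not taken from this guide's code.

resource "vsphere_virtual_machine" "master" {
  // ... clone, CPU, memory, disk and network settings omitted for brevity ...

  // Pass the per-node Ignition config (with the injected static IP settings)
  // to RHCOS via guestinfo properties
  extra_config = {
    "guestinfo.ignition.config.data"          = "${base64encode(file("${path.module}/master-0.ign"))}"
    "guestinfo.ignition.config.data.encoding" = "base64"
  }
}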

Reference Architecture

For this guide, we are going to deploy 3 master nodes (control-plane) and 2 worker nodes (compute). This guide uses RHEL CoreOS 4.3 as the virtual machine image, deploying Red Hat OCP 4.3, in line with Red Hat’s N-1 support.

We will use a centralised Linux server (Ubuntu) that will perform the following functions;

  • Load Balancer – HAProxy
  • Web Server – Apache2
  • Terraform automation host – version 0.11.14
    • The deployment will be semi-automated using Terraform, so that we can easily build the configuration files used by the CoreOS VMs that have static IP settings.
    • Using a later version of Terraform will cause failures (see the version pin sketch after this list).
  • Client Tools for OpenShift deployment
    • oc
    • kubectl
    • openshift-install
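
Because the deployment depends on the older 0.11 syntax, it is worth guarding against accidentally running it with a newer Terraform release. A minimal sketch of such a guard is below; this is an assumption for illustration, not part of the original configuration.

terraform {
  // Accept 0.11.14 and later 0.11.x patch releases, refuse 0.12 and newer
  required_version = "~> 0.11.14"
}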

DNS will be provided by a Windows Server.

The installation will use a bootstrap server to bring the cluster online; the bootstrap server will be removed at the end of the build process.

Deployment Steps

In this guide we will deploy our environment in the following order;

  • Configure DNS
  • Import Red Hat CoreOS image into vCenter
  • Deploy Ubuntu Host
    • Configure Apache
    • Configure HAProxy
    • Install Client-Tools
    • Install Terraform
  • Build OpenShift Cluster configuration
  • Configuring the Terraform deployment
  • Running the Terraform deployment

DNS

OpenShift uses a “clusterName.baseDomain” format for its DNS records.

For example, if I want to call my OpenShift cluster “demo” and my DNS domain is simon.local, then the full name used by OpenShift is “demo.simon.local”.

Below is a table plan of the IP addresses you will use to build the environment.

The last three addresses are cluster level resources that are available on each control-plane node, accessible via the load balancer.

To configure the DNS records in Windows, you can use the script and CSV file here.

In the screenshot below, the script has created the “demo” domain folder and entered my records. It is important that you have PTR records set up for everything apart from the “etcd-X” records.

Import Red Hat CoreOS Image into vCenter
