Category Archives: VMware

Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift

In this blog post I’m going to cover setting up multi-cluster support for Kasten K10 when you’ve installed the software on multiple Kubernetes clusters.

One of the K10 clusters you have deployed will become the primary. You will use this cluster and its dashboard interface to access the UI for all of the clusters.

  • The primary cluster defines policies and other configuration centrally. Centrally defined policies and configuration can then be distributed to designated clusters to be enacted.

Additional clusters are then added and are known as secondaries.

  • The secondary clusters receive policies and other configuration from the primary cluster. Once policies are distributed to a secondary, the local K10 installation enacts the policy. This ensures that the policy will continue to be enforced, even if disconnected from the primary.

Pre-Requisites:

  • Authentication
    • Token Authentication must be used
  • Network
    • Secondary K10’s ingress must be accessible by the primary
    • Secondary API Server must be accessible by the primary
  • Run the k10multicluster tool on a bastion host that has kubectl connectivity to all of the clusters you want to bring together.

Download the K10MultiCluster tool

  • Download the release binary
  • Set the tool as executable
  • Move the tool to your /usr/local/bin/ folder
curl -LJO https://github.com/kastenhq/external-tools/releases/download/4.0.9/k10multicluster_4.0.9_linux_amd64


chmod +x k10multicluster_4.0.9_linux_amd64

sudo mv k10multicluster_4.0.9_linux_amd64 /usr/local/bin/k10multicluster

Next, let’s list the available clusters we can connect to from our node:

kubectl config get-contexts

Setup Primary Cluster

Now we are ready to set up our primary cluster by running the following command:
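As a sketch, based on the Kasten documentation for the 4.0.x tooling, the invocation looks like the below. The context and display name values are placeholders for your own environment, and it’s worth confirming the exact flags with k10multicluster setup-primary --help:

# Register the cluster behind the chosen kubectl context as the K10 primary
k10multicluster setup-primary \
  --context=<primary-cluster-context> \
  --name=<primary-cluster-name>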

Terraform vSphere Provider – Error while creating vApp properties

The Issue

When using Terraform to deploy a virtual machine from an OVA, I kept hitting the below error:

Error: error while creating vapp properties config unsupported vApp properties in vapp.properties: [vm.vmname vami.gateway.DMS_agent_VA vami.netmask0.DMS_Agent_VA vami.DNS.DMS_Agent_VA vami.searchpath.DMS_Agent_VA vami.ip0.DMS_Agent_VA vami.domain.DMS_Agent_VA]

  on Agent_appliance/main.tf line 20, in resource "vsphere_virtual_machine" "vm":
  20: resource "vsphere_virtual_machine" "vm"

Pretty simple, right? According to the error, my Terraform file was trying to use OVF properties that were not valid. Getting the debug/trace logs from Terraform also just showed the same error output.
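For reference, Terraform’s verbose logging is driven by environment variables; here’s a quick sketch of how to capture a trace-level log to a file while re-running the deployment (the log file name is just an example):

# Write trace-level Terraform logs to a file for inspection
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform-trace.log

terraform apply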

However, running ovftool against the OVA confirmed my properties were correct (shortened output example below).
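For completeness, this is a probe-style invocation, run with no deployment target so that ovftool just prints the descriptor details, including the vApp properties; the file name is a placeholder for the appliance OVA:

# Probe the OVA (no target) to list its networks and vApp properties
ovftool ./DMS_Agent_VA.ova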

ClassId:     vami
  Key:         searchpath
  InstanceId:  DMS_Agent_VA
  Category:    Networking Properties
  Label:       Domain Search Path
  Type:        string
  Description: The domain search path (comma or space separated domain names) 
               for this VM. Leave blank if DHCP is desired.

But also in the vCenter UI, looking at the vApp Properties of the OVA once deployed, I could again validate that the properties I was using were correct.

vCenter - Virtual Machine vApp Options Properties

Finally, here is an example of the vsphere_virtual_machine resource I was trying to deploy that was causing me issues:

resource "vsphere_virtual_machine" "vm" {
  name             = "${var.agent_vm_name}"
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.folder}"
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  datacenter_id    = "${data.vsphere_datacenter.dc.id}"
  host_system_id = "${data.vsphere_host.host.id}"

  dynamic "ovf_deploy" {
  for_each = "${var.agent_local_ovf_path}" != "" || "${var.agent_remote_ovf_path}" != "" ? [0] : []
  content {
  // Path to local or remote ovf/ova file
  local_ovf_path = "${var.agent_local_ovf_path}" != "" ? "${var.agent_local_ovf_path}" : null
  remote_ovf_url = "${var.agent_remote_ovf_path}" != "" ? "${var.agent_remote_ovf_path}" : null
   disk_provisioning    = "thin"
   ovf_network_map = {
        "Control Plane Network" = data.vsphere_network.network.id
    }
   }
  }

  vapp {
    properties = {
      "vm.vmname" =  "${var.agent_vm_name}",
      "varoot_password" = "${var.varoot_password}",
      "vaadmin_password" = "${var.va_admin_password}",
      "guestinfo.cis.appliance.net.ntp" = "${var.ntp}",
      "vami.gateway.DMS_agent_VA" = "${var.controlplanenetworkgateway}",
      "vami.DNS.DMS_Agent_VA" = "${var.dns}",
      "vami.domain.DMS_Agent_VA" = "${var.domain}",
      "vami.searchpath.DMS_Agent_VA" = "${var.searchpath}",
      "vami.ip0.DMS_Agent_VA" = "${var.agentip0}",
      "vami.netmask0.DMS_Agent_VA" = "${var.agentip0netmask}"
    }
  }
}

The Cause

Yep, you guessed it, there was something wrong with the properties I was trying to configure.

The Fix

Continue reading Terraform vSphere Provider – Error while creating vApp properties

vRealize Operations – Error: Failed to Test adapter instance – Finding Adapter Logs

The Issue

I had installed the “VMware vRealize Operations Management Pack for Horizon 1.2” into my vROPs instance and tried to connect my Horizon instance, instantly hitting this helpful error message:

Failed to Test adapter instance. Reason - Unknown Error. Please contact support team

Obviously, I’d probably done something wrong, but I didn’t want to just call support for help!

vROPs Horizon - failed to test adapter instance

Finding the Cause

So off to find the logs.

  • In the Administration tab, scroll down the left-hand navigation pane to Support
  • Select Logs
  • Expand the following folders: /{IP}-MASTER > COLLECTOR > adapters
    • Select the correct folder for your adapter type
  • Select the log file, then click the blue Go button to load the log

vROPs Adapter Logs

The Fix

As we can see, in this environment the issue was quite simple: an illegal character. It turns out there was a space in my FQDN.

I corrected this, and success!

The main reason for this quick blog post was to show how to find the logs for the adapters, so you can troubleshoot things yourself first.

Regards

Dean Lewis

Resolving VMC – Objects with non-compliant storage policies in SDDC

The Issue

Overnight I received an email from the VMware Cloud Services platform regarding a VMC environment I am an administrator of. The opening paragraph was as below:

Please be advised that you have VMs and or objects in your VMware Cloud on AWS SDDC that do not comply with the VMC SLA i.e. they have non-compliant storage policies.

Well, this doesn’t sound good. The email trailed off with a list of affected virtual machines and snapshots.

The Cause

This message flags that best practices are not being followed in the VMC environment. VMC implements Managed Storage Policy Profiles (MSPP), which integrate vSphere VM Storage Policy management into SDDC Management, ensuring that any workload not assigned a custom storage policy always complies with the service’s SLA requirements.

In short, if your VMs are part of the managed storage profile, they are covered by the SLAs provided by VMC, and if there’s an outage, you are eligible for credits.

You do have the ability to create your own custom policies as you require, but any VMs that are configured in these policies are not subject to the SLA.

The email is simply a pointer to say “hey we recommend you move those objects to a storage policy covered by the SLA”.

Below we have the custom policy (in this case just the default VSAN policy) and then the provided managed policy, which takes the format “VMC Workload Storage Policy – <Cluster-Name>”.

VSAN default policy

VSAN managed policy

The Fix

If you want to resolve this, then here is a quick PowerCLI script to do that for you.

# Set these placeholders to your policy names
$custompolicy  = "<Custom Storage Policy Name>"
$managedpolicy = "<Managed Cluster Policy Name>"

# To target the VMs' home configuration
$vms = Get-VM * | Get-SpbmEntityConfiguration | Where-Object {$_.StoragePolicy -like $custompolicy}

foreach ($vm in $vms) { $vm | Set-SpbmEntityConfiguration -StoragePolicy $managedpolicy }

# To target the hard disks of the VMs
$hds = Get-VM -Location Compute-ResourcePool | Get-HardDisk | Get-SpbmEntityConfiguration | Where-Object {$_.StoragePolicy -like $custompolicy}

foreach ($hd in $hds) { $hd | Set-SpbmEntityConfiguration -StoragePolicy $managedpolicy }

Below we can see that the VSAN environment is now resyncing the data to meet the new storage policy requirements.

VSAN Resync Objects

If you’ve any questions or concerns about the changes to the storage policies for your production workloads, then as always, contact VMware Support to discuss first.

Regards

Dean Lewis