
vCenter patching failed to update the VAMI build “Got exception while trying to save metadata to a file: Unexpected content in /etc/issue file”

The Issue

After patching/upgrading your vCenter 6.7 appliance, the vCenter UI shows the latest build number, but the VAMI still shows the older build number.

To troubleshoot upgrade issues, you can look at the following file:

  • /var/log/vmware/software-packages.log
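For example, a quick way to pull the relevant entries out of that log; a sketch using plain grep:

grep -E "ERROR|Setting appliance version" /var/log/vmware/software-packages.log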

In the log, you see the following error:

INFO:vmware.vherd.base.software_update:Setting appliance version to 6.7.0.31000 build 13643870

ERROR:vmware.vherd.base.software_update:Got exception while trying to save metadata to a file: Unexpected content in /etc/issue file. Data: {Unique_Data}

The Cause

This error is thrown when a custom login banner has been set via the advanced setting “config.etc.issue” and the default values, which include the version number and deployment type, have been removed.

Default lines example:

VMware vCenter Server Appliance 6.7.0.31000
Type: vCenter Server with an external Platform Services Controller
  • William Lam documents how to configure custom banners in this blog post.

The Fix

To work around this issue, follow the steps below:

  • Restore the /etc/issue file to its original content before patching.

The contents of the ‘/etc/issue’ file can be customized, but the default lines containing the version number and deployment type must be kept for patching to succeed.

  • Check the VAMI page for the product version and type, and update the /etc/issue file accordingly.
Example: /etc/issue (original content from a lab).
Note that lines 1 and 3 should be blank; line 2 has the version and line 4 has the deployment type, as shown in the example below:

root@vcsa1 [ ~ ]# less -N /etc/issue
      1
      2 VMware vCenter Server Appliance 6.7.0.31000
      3
      4 Type: vCenter Server with an external Platform Services Controller
      5
/etc/issue (END)
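One way to restore the defaults is with a heredoc; a sketch, using the lab values above, which must match your own version and deployment type:

cat > /etc/issue << 'EOF'

VMware vCenter Server Appliance 6.7.0.31000

Type: vCenter Server with an external Platform Services Controller

EOF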

This issue will be fixed in a future release.

Note: Since I originally drafted this blog post, VMware has published an external KB:
https://kb.vmware.com/s/article/76024

Regards

Kubernetes

How To Fix A PVC Stuck in Terminating Status in Kubernetes: Troubleshooting Guide

Having trouble deleting a persistent volume claim (PVC) stuck in terminating status in Kubernetes/OpenShift? We’ve got the fix. Read on to learn how to patch the PVC to allow the final unmount and delete the PVC.

The Issue

Whilst working on a Kubernetes demo for a customer, I was cleaning up my environment and deleting persistent volume claims (PVCs) that were no longer needed.

I noticed that one PVC was stuck in “terminating” status for quite a while.


Note: I am using the oc command in place of kubectl, as this is an OpenShift environment.

The Cause

A quick Google search showed that I needed to verify whether the PVC was still attached to a node in the cluster:

kubectl get volumeattachment

I could see it was still attached; the PVC’s configuration had not been fully updated during the delete process.
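If the output is busy, you can tie an attachment back to the specific claim. A minimal sketch, with {PVC_NAME} and {PV_NAME} as placeholders:

# Find the persistent volume bound to the stuck PVC
kubectl get pvc {PVC_NAME} -o jsonpath='{.spec.volumeName}{"\n"}'

# Check whether that volume still has an attachment to a node
kubectl get volumeattachment | grep {PV_NAME}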


The Fix

I found the fix in this GitHub issue.

You need to patch the PVC to set the “finalizers” field to null. This allows the final unmount from the node, and the PVC can then be deleted.

kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
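If you want to see what you are removing, you can inspect the current finalizers before running the patch. A quick sketch, with {PVC_NAME} as a placeholder:

# Show the finalizers still set on the PVC, typically ["kubernetes.io/pvc-protection"]
kubectl get pvc {PVC_NAME} -o jsonpath='{.metadata.finalizers}{"\n"}'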


Regards

OpenShift

Using the vSphere CSI Driver with OpenShift 4.x and vSAN File Services

You may have seen my blog post “How to Install and configure vSphere CSI Driver on OpenShift 4.x”.

There, I updated the vSphere CSI driver to work with the additional security constraints that are baked into OpenShift 4.x.

Since then, one of the things on my list to test has been file volumes backed by vSAN file shares. This feature is available in vSphere 7.0.

Well, I’m glad to report it does in fact work. Using my CSI driver (see the above blog post or my GitHub), you can simply deploy and consume vSAN File Services, as per the documentation here.

I’ve updated the examples in my GitHub repository to get this working.

OK just tell me what to do…

First and foremost, you need to add additional configuration to the CSI conf file (csi-vsphere-for-ocp.conf).

If you do not, the defaults will be assumed, which grant full read-write access from any IP to the file shares created.

[Global]

# run the following on your OCP cluster to get the ID 
# oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
cluster-id = c6d41ba1-3b67-4ae4-ab1e-3cd2e730e1f2

[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = false

[VirtualCenter "10.198.17.253"]
insecure-flag = "true"
user = "[email protected]"
password = "Admin!23"
port = "443"
datacenters = "vSAN-DC"
targetvSANFileShareDatastoreURLs = "ds:///vmfs/volumes/vsan:52c229eaf3afcda6-7c4116754aded2de/"

Next, create a storage class that is configured to consume vSAN File Services.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: file-services-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
  csi.storage.k8s.io/fstype: "nfs4" # Optional Parameter
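Apply it with oc (the file name is illustrative):

oc apply -f file-services-sc.yaml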

Then create a PVC to prove it works.
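Here is a minimal example claim, assuming the storage class above; the claim name and size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-services-pvc # illustrative name
spec:
  accessModes:
    - ReadWriteMany # file volumes can be mounted read-write by multiple nodes
  resources:
    requests:
      storage: 2Gi
  storageClassName: file-services-sc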


How to backup vRealize Automation 8.x using Veeam

In this blog post, I am going to dissect backing up vRealize Automation 8.x using Veeam Backup & Replication, covering:

- Understanding the backup methods
- Performing an online backup
- Performing an offline backup

Understanding the Backup Methods

Reading the VMware documentation around this subject can be somewhat confusing at times, and if you pay attention, there are subtle changes between the documents as well. Let’s break this down.

  • vRealize Automation 8.0
    • As part of the backup job, you need to run a script to stop the services.
    • This is known as an offline backup.
    • Depending on your backup software, you can either do this by running a script located on the vRealize Automation appliance or by triggering it via the pre-freeze/post-freeze scripts when a snapshot is taken of the VM (see the sketch after this list).
    • The snapshot must not include the virtual machine’s memory.
    • If your environment is a cluster, you only need to run the script on a single node.
    • All nodes in the cluster must be backed up at the same time.
  • vRealize Automation 8.0.1 and 8.1 (and higher)
    • Running an online backup is supported.
      • No script is needed to shut down the services.
    • The snapshot taken as part of the backup must quiesce the virtual machine.
    • The snapshot must not include the virtual machine’s memory.
    • It is still recommended to run the script to stop all services and perform an offline backup.
      • You may also find your backup runs faster, as the virtual machine will be less busy.
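As a rough sketch, the stop/start sequence looks like the below. The script paths are an assumption based on the vRA 8.x appliance; verify them against VMware’s documentation for your exact version.

# Stop all vRealize Automation services ahead of the snapshot (assumed paths)
/opt/scripts/svc-stop.sh
sleep 120
/opt/scripts/deploy.sh --onlyClean

# Bring the services back up once the backup completes
/opt/scripts/deploy.sh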

Performing an Online Backup

Let’s start with the easier of the two options. Again, this will be supported for vRealize Automation 8.0.1 and higher.