vRealize Automation 8.0 – Wildcard SSL certificate support and deployment issues – LCMVRAVACONFIG590003

Ok, I’m just going to call it out straight away: if you are using wildcard SSL certificates with vRealize Automation 8.0, read the release notes first.

I did not, and caused myself quite a few headaches with the deployment, which you can read about further in this post.

You cannot set wildcard certificates for certain domain names, specifically those not using a Public Suffix.

vRealize Automation 8.0 supports setting a wildcard certificate only for DNS names that match the content of the Public Suffix List (https://publicsuffix.org/).

A valid wildcard certificate example: you can use a wildcard certificate with a DNS name like “*.myorg.com”. This is supported because “com” is part of the Public Suffix List.

An invalid wildcard certificate example: you cannot use a wildcard certificate with a DNS name like “*.myorg.local”. This is not supported because “local” is not part of the Public Suffix List.

Workaround: Only use domain names in the Public Suffix List.
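As a quick sanity check before deploying, you can compare a domain’s suffix against the list yourself. A minimal sketch, with a few sample entries inlined (in practice you would check against the full list downloaded from publicsuffix.org):

```shell
# Minimal sketch: check whether a domain's top-level suffix appears in the
# Public Suffix List before using it in a wildcard certificate.
# Only a few sample entries are inlined here; the real list lives at
# https://publicsuffix.org/
suffixes="com net org io"

is_public_suffix() {
  tld="${1##*.}"               # last label of the domain, e.g. "com" or "local"
  for s in $suffixes; do
    [ "$s" = "$tld" ] && return 0
  done
  return 1
}

is_public_suffix "myorg.com"   && echo "*.myorg.com: supported"
is_public_suffix "myorg.local" || echo "*.myorg.local: unsupported"
```

Note that the real list also contains multi-label suffixes such as co.uk, so a production check needs more than a last-label comparison.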

The issues caused by using an unsupported wildcard SSL

When deploying vRA 8.0 via vRSLCM, either as part of the easy installer or as part of an existing vRSLCM setup, you will be asked to provide an SSL certificate.

This step does not validate that your certificate is supported for use with the vRA 8.0 deployment. vRSLCM does check the selected SSL certificate, but only to ensure it is not about to expire; you will see a green tick and a “healthy” status as below.
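An expiry-only check like the one vRSLCM performs is easy to reproduce with openssl, which makes the gap obvious: it says nothing about whether the certificate’s names are usable. A sketch using a throwaway self-signed certificate (file paths are illustrative):

```shell
# Generate a throwaway wildcard certificate valid for 30 days
# (illustrative only; your real certificate comes from your CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=*.myorg.local" 2>/dev/null

# An expiry-only check, similar in spirit to what vRSLCM validates:
# exit code 0 means the certificate will not expire within 86400 seconds.
openssl x509 -in /tmp/demo.crt -noout -checkend 86400 \
  && echo "expiry check passed, but the names were never validated"
```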

Once you hit deploy, the vRA appliance itself will stand up, but the initialization tasks will stall.

Error Code: LCMVRAVACONFIG590003
Cluster Initialization failed on VRA.

vRA Initialize Cluster failed on vRA VA - ***Hostname***. Please login to the vRA and check /var/log/deploy.log file for more information on failure.

We know the new architecture is based on Kubernetes; you can read about this here.

So we log into the appliance to see what exactly is going on.

We can check the health of vRA appliance with the following commands;

vracli status

vracli status deploy

The deploy command points us to an issue (which we already know about; unfortunately I forgot to capture a screenshot of this).

Using the ‘kubectl’ command line, we can see that the namespace ‘prelude’ is having service issues. Looking into the services that make up this namespace, we see that the offending service is “user-profile-service-app-XXXX”, and “cgs-service-XXXX” is affected too.

I restarted the pods by deleting them and letting Kubernetes restore the state; however, the services still did not run.
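For reference, spotting the unhealthy pods is just a matter of reading the READY column of `kubectl get pods -n prelude`. The filtering can be sketched like this, run here against an example listing (the pod names are illustrative, not copied from my environment):

```shell
# Print pods whose READY count (e.g. 0/1) shows fewer ready containers than
# expected, from a `kubectl get pods` style listing fed on stdin.
not_ready() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) print $1 }'
}

# Illustrative listing in the shape kubectl prints:
listing='NAME                             READY   STATUS    RESTARTS
user-profile-service-app-7f9xk   0/1     Running   4
cgs-service-5dkq2                0/1     Running   3
postgres-0                       1/1     Running   0'

echo "$listing" | not_ready
```

Against a live appliance you would pipe the real listing in instead: `kubectl get pods -n prelude | not_ready`.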

Note: VMware does not support altering the Kubernetes environment running inside the vRA components without strict guidance via a VMware Support ticket and assistance from our GSS team!

Basically, if you do what I did, you are on your own!

You can get the support bundle from the vRA appliance by using SSH to connect to the appliance and running the following command;

vracli log-bundle

After doing this in my environment and reviewing the logs, I found the following in the user-profile-service-app logs;

Caused by: org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://sc-dc1-vra001.simon.local/csp/gateway/am/api/auth/token-public-key": Certificate for <sc-dc1-vra001.simon.local> doesn't match any of the subject alternative names: [*.simon.local]; nested exception is javax.net.ssl.SSLPeerUnverifiedException: Certificate for <sc-dc1-vra001.simon.local> doesn't match any of the subject alternative names: [*.simon.local]
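The error is consistent with the release-note restriction: the HTTP client inside the service appears to refuse to match a wildcard whose suffix is not on the Public Suffix List, even though *.simon.local would otherwise cover the hostname. You can confirm exactly which subject alternative names a certificate carries with openssl; here I generate a throwaway certificate with the same wildcard SAN and print the extension (paths are illustrative, and the `-addext`/`-ext` flags need OpenSSL 1.1.1 or newer):

```shell
# Create a throwaway certificate carrying the same wildcard SAN
# (illustrative; inspect your real certificate file the same way).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/wc.key -out /tmp/wc.crt \
  -subj "/CN=*.simon.local" \
  -addext "subjectAltName=DNS:*.simon.local" 2>/dev/null

# Print the subject alternative names present on the certificate.
openssl x509 -in /tmp/wc.crt -noout -ext subjectAltName
```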

So there we have it: after troubleshooting, I confirmed the issue, and then found out it is called out in the release notes.

There is an open action to allow wildcard support for non-public suffixes in a future vRA 8.0 release.



vRSLCM 8.0 – vROPs 7.5 upgrade fails due to Admin password expiry

When the vRealize 8 products dropped, I was like a kid in a sweet shop, upgrading everything as quickly as possible before my customers tried to, so I could encounter any issues first, and also explore the new features so I could show them off.

The issue

During the upgrade of vROPs, I hit an issue: the local admin account password in vROPs had expired, but I had received no warning when logging into the vROPs 7.5 interface with the admin account.

Before I found the issue;

During the upgrade in vRSLCM, my upgrade task failed with “vROPS upgrade failure”, Error Code: LCMVROPSYSTEM25008, Upgrade.pak_pre_apply_validate_failed.

I downloaded the vRSLCM support bundle to see if I could find more information on why the upgrade failed, but just found the same information.

See this documentation link for the logs location on the appliance.

The log file you are interested in is;


And below you can see the same error information in the log.

I then proceeded to troubleshoot the vROPs environment directly, as we know the upgrade was underway before the failure.

  • You can follow the instructions in KB 2074601 to create a support bundle.

For upgrades, we need to look at the following file;

/Log Bundle/Logs/pakManager/vcopsPakManager.root.query.log

And it is here, in this log file, that I could see the query status result;


With another upgrade validation log to check.

"resource_arguments": [

I then used PuTTY to connect to my vROPs instance, and lo and behold, I was prompted to change the password of my admin account.


The Fix

Well, it’s simple: change the admin password as above, update the password stored in vRSLCM, and re-run the upgrade. This time, if the stars align, it will be successful.

Or follow this guide if you are locked out;



vRSLCM 8.0 – Default Storage is too small

PSA: the default deployment configuration of vRealize LifeCycle Manager 8 has a storage disk that is too small for most tasks.

I found this issue after working with Ryan Johnson (https://twitter.com/tenthirtyam), who is working on the next VVD (https://docs.vmware.com/en/VMware-Validated-Design/index.html) update.

The issue

Once I had migrated to the latest version, I tried to upgrade my vROPs and vRLI deployments to their latest versions, only to hit an odd error.

The first vROPs upgrade failed with “Unknown_System_Error”; below you can see the failed request, and then the error message once I click into that request.

When I tried to run through the vROPs upgrade again, I received the below error during the precheck phase: “No space left on device”.

At first I thought this was related to the vROPs install itself, as this is my lab environment and I’ve not exactly followed best practices for sizing vROPs. I checked the appliance itself;

  • SSH to appliance
  • Run “df -lh”

This showed a lot of space free;

I checked the vRSLCM appliance as well, and could see a large amount of space free, and the upgrade pak file was already downloaded;
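The manual checks above can be wrapped into a simple pre-upgrade guard. A sketch (the mount point and threshold are illustrative; point it at whichever disk vRSLCM stages its product binaries on):

```shell
# Fail early if a mount point has less free space than an upgrade needs.
# The mount point and threshold passed below are illustrative.
require_free_mb() {
  mount_point="$1"; need_mb="$2"
  # Column 4 of POSIX `df -Pk` output is available space in KB.
  avail_kb=$(df -Pk "$mount_point" | awk 'NR == 2 { print $4 }')
  if [ "$avail_kb" -lt $((need_mb * 1024)) ]; then
    echo "insufficient space on $mount_point: $((avail_kb / 1024)) MB free" >&2
    return 1
  fi
  echo "ok: $((avail_kb / 1024)) MB free on $mount_point"
}

require_free_mb / 1   # example: require at least 1 MB free on /
```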

The fix

After some digging, I found that the LCM install storage device did not have enough space to work with after the update files were downloaded. Quite a simple fix: just extend the storage.

You can do this in the LCM interface itself;

  1. Go to LifeCycle Operations from the homescreen dashboard
  2. Select System Administration from the left hand navigation column, and then System details
  3. Click Extend Storage, and provide the vCenter details where your appliance is located and the maximum storage size you want the appliance to be extended to.

LCM will take care of the rest. Then proceed to upgrade your environments. As a maintenance task, I would advise clearing down product packages you no longer need from LCM.

Running again, I can see the Precheck is successful.



VMware LifeCycle Manager – Migration error “SSH is not enabled or invalid” – LCMMIGRATION15102

During my migration from vRSLCM 2.1 patch 2 to the latest version 8 release, I encountered the following error;

Error Code: LCMMIGRATION15102

vRSLCM Migration Failed with SSH is not enabled or Root credential invalid. Please make sure SSH is enabled or porvide the correct root credential by adding the credential to the home page locker app

A pretty obvious error; however, the provided root credentials were correct, and I could use PuTTY to connect to my existing LCM instance.

The fix

I spoke with an internal VMware employee about this; the suggestion was to create the same authentication details again in the locker and choose them on the retry. However, I decided to reset the SSH user on my old environment as a precaution as well.

1. Old LCM Instance > Go to Settings > System Administration

Scroll down and find the section to reset the “root” user as below, enter the new password and confirm, then select save.

I would recommend testing a connection to the old LCM instance using SSH and the new credentials at this stage.

2. On the new LCM instance, go to the Locker app and click Passwords. You will see just icons on the left-hand side, but you can click the >> to expand the navigation pane.

3. Add your new credentials and save

4. Go back to Requests, find your failed task under “invokemigration” and select to retry.

You will be given an option to select which credentials you want to retry with, select your new credentials object and hit submit!

(This type of feature where you can respecify the variables on a retry is something I’ve asked for a lot!).

5. And fingers crossed you will then see the request complete successfully.



VMware vRealize LifeCycle Manager 8 – Migration Process Screenshots

VMware vRealize LifeCycle Manager 8 released earlier this week, 17th October 2019.

Note the official name and abbreviation; it’s a long one!

  • vRSLCM (vRealize Suite LifeCycle Manager)

You can find the supporting official documentation here;

What's New Blog Link:
What's New Blog Post

Download Link:
Product Download

Release Notes:
Release Notes

Documentation Link:
Migration Process

The best news about this release is the “easy installer“, which also allows you to migrate from older versions. In this post, I’ve documented the screenshots in steps for you, as I know many of you out there like to see the end-to-end process before you undergo an update yourself, so you know what to expect.

During this migration process the following will happen;

  1. New LCM virtual appliance deployed
  2. New IDM appliance deployed (unless you select to link to an existing environment)
  3. Existing LCM settings and content will be migrated

Migration Process Screenshots

1. Load up the Easy Installer UI and select the Migrate option

  • After you download the file, mount the vra-lcm-installer.iso file.
  • Browse to the folder vrlcm-ui-installer inside the CD-ROM.
  • The folder contains three sub-folders, one per operating system. Browse to the folder corresponding to your operating system.
  • Run the executable as per the correct steps for your OS.

2. You’ll get the below introduction page explaining the Migrate option, and some pre-req info.

3. As with all software, there is a EULA to accept.

4. Select the target vCenter environment where you want the LifeCycle Manager and Identity Manager (if needed) appliances to be deployed.

You will be asked to confirm the connection SSL Thumbprint for the vCenter provided.

5. Select the Datacenter or VM folder within your target vCenter where you want to deploy the virtual appliances.

6. Select the compute resource within your target vCenter to deploy the virtual appliances.

7. Select the storage location for your virtual appliances. At this stage I am unaware whether you are able to select different datastores for each virtual appliance.

8. Provide the network configuration details for the new virtual appliances. The easy installer assumes you will be deploying both to the same network subnet range.

9. Provide the default passwords for both virtual appliances; this password will be used for the following accounts;

  • vRealize LifeCycle Manager
    • Root Password
    • Admin Password
  • VMware Identity Manager
    • Root Password
    • Admin Password
    • sshuser password
    • Default Configuration User Password (You will configure the name of this account later)

10. Configuration of the LifeCycle Manager appliance deployment

  • Name of VM in vCenter
  • IP address
  • Hostname (FQDN as in DNS)

11. Next you will provide the details of the existing LifeCycle Manager you are migrating from. The wizard does not seem to do any prechecks on the information provided, as it does when you connect to vCenter at the start.

12. Configuration of the VMware Identity Manager appliance

  • Install a New Identity Manager
    • Name of VM in vCenter
    • IP address
    • Hostname (FQDN as in DNS)
    • Default Configuration Admin (Provide a name that is not root or admin)
  • Import Existing VMware Identity Manager
    • No configuration necessary; it will be pulled across from the existing LCM environment.

13. The usual summary page of all the options you have selected and configured.

14. Finally, as the process runs, you will get a progress bar showing the various stages. Once complete, you get a link to the new vRSLCM (vRealize Suite LifeCycle Manager) UI and the request that is created to migrate the data from the old vRSLCM environment.

So this concludes the post!