Tag Archives: Upgrade

vRA SaltStack Config Header

vRSLCM – SaltStack Config upgrade fails – LCMUPGRADEVSSC10103

The Issue

When upgrading to vRA SaltStack Config 8.9 using vRealize Suite Lifecycle Manager, I hit an issue where the upgrade failed because the VAMI version of the appliance was reported as already being at the expected build number.

Below is a copy of the error message:

LCMUPGRADEVSSC10103

Error Code: LCMUPGRADEVSSC10103
VAMI upgrade for vRealize Automation SaltStack Config failed. Check vRealize Suite Lifecycle Manager logs for more information.
VAMI is already at the version provided for upgrade. Retry the request by passing skipTask as 'true' to skip the VAMI upgrade and proceed further to RAAS upgrade. Check upgrade logs at /var/log/lcm-vami-upgrade.log on the vRealize Automation SaltStack Config host for more details.

com.vmware.vrealize.lcm.vsse.common.exception.VsscUpgardeException: VAMI is already at the version provided for upgrade. Retry the request by passing skipTask as 'true' to skip the VAMI upgrade and proceed further to RAAS upgrade. Check upgrade logs at /var/log/lcm-vami-upgrade.log on the vRealize Automation SaltStack Config host for more details.
    at com.vmware.vrealize.lcm.vsse.core.task.VsscVamiUpgradeTask.execute(VsscVamiUpgradeTask.java:96)
    at com.vmware.vrealize.lcm
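
If you want to look at those upgrade logs yourself before retrying anything, assuming root SSH is enabled on the appliance, you can connect and tail the file the error points at (the hostname below is just a placeholder for your own appliance):

# On the vRealize Automation SaltStack Config appliance (placeholder hostname)
ssh root@saltstack-config.example.local

# Review the last chunk of the VAMI upgrade log referenced in the error
tail -n 100 /var/log/lcm-vami-upgrade.log
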
The Fix

Rather than follow the error message and retry the task by skipping the failure, I instead performed an inventory sync on the environment this appliance is part of, and then retried the task without skipping the failure.

This proved successful, leading me to think that vRSLCM had perhaps missed collecting a piece of information during the upgrade.

  • Go to your environment with SaltStack Config installed
  • Click the options to trigger the inventory sync

vRSLCM - Trigger Inventory Sync

Keep an eye on the requests, and once the inventory sync has completed, click on your failed upgrade request.

vRSLCM - Requests

Within the request, click Retry.

vRSLCM - Request Details - Retry

And after that you should hopefully see a successfully completed request.

vRSLCM - Request Details - Completed

Regards

Dean Lewis

Tanzu Blog Logo Header

Tanzu Kubernetes Grid – Upgrading a Management and Workload Cluster deployed to vSphere

In this blog post, I am going to walk through how to upgrade both your Tanzu Kubernetes Grid Management and Workload clusters. I’ll cover the Tanzu CLI options, as well as how you can leverage Tanzu Mission Control for upgrades.

For my example use cases, I’ll be upgrading from TKG 1.4.2 to 1.5.4. Although the process should be similar for other upgrade paths, I do recommend you consult the official documentation before attempting any upgrade in case there are any changes.

Caution: VMware recommends not installing or upgrading to Tanzu Kubernetes Grid v1.5.0-v1.5.3, due to a bug in the etcd versions shipped with the Kubernetes versions used by Tanzu Kubernetes Grid v1.5.0-v1.5.3. Tanzu Kubernetes Grid v1.5.4 resolves this problem by incorporating a fixed version of etcd. For more information, see Resolved Issues in the TKG v1.5 Release Notes.
Pre-requisites

To upgrade Tanzu Kubernetes Grid (TKG), you download and install the new version of the Tanzu CLI on the machine that you use as the bootstrap machine. You must also download and import new base image templates, depending on whether you are upgrading clusters that you previously deployed to vSphere, Amazon EC2, or Azure.
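
Once the new CLI is installed on the bootstrap machine, a quick sanity check, assuming the tanzu binary is on your PATH, confirms the version and plugins you are now running:

# Confirm the Tanzu CLI version now installed on the bootstrap machine
tanzu version

# List the installed plugins and their versions
tanzu plugin list
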

Download the Tanzu CLI and Kubernetes OVAs

On the VMware Customer Portal, download both the Tanzu CLI and the Kubernetes OVA files as necessary.

As I’ve highlighted in the screenshot below, your Management Cluster will always need to run the latest Kubernetes version.

Tanzu Kubernetes Grid - Upgrade - Download Product files - Tanzu CLI - Kubernetes OVAS

Upload Kubernetes OVAs to vCenter

Continue reading Tanzu Kubernetes Grid – Upgrading a Management and Workload Cluster deployed to vSphere

Tanzu Mission Control Header

Tanzu Mission Control – Upgrading Kubernetes for a provisioned cluster

Now that we understand how to deploy a Tanzu Kubernetes cluster using Tanzu Mission Control, let’s look at the next lifecycle step: how to upgrade the Kubernetes version of the cluster.

Below are the other blog posts in the series.

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

When a cluster that has been provisioned by TMC, and is therefore managed by TMC, has an upgrade available, you will see an “i” icon next to the version in the Clusters UI view; hovering over this will tell you there is an upgrade ready.

TMC - Clusters - Upgrade Available

Click the cluster name to go into the cluster object and see the full details, then:

  1. Click the Actions button
  2. Select Upgrade

TMC - Cluster - Actions - Upgrade

The Upgrade Cluster dialog will appear. Select the version you want to upgrade to and click Upgrade.

TMC - Cluster - Upgrade Cluster - Select Version

On both the Cluster list and Cluster Detailed view, the status will change to upgrading.

TMC - Cluster Upgrading 2

TMC - Cluster Upgrading

Once the upgrade has completed, the cluster will change back to ready and show the updated version.

TMC - Cluster upgrade complete
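
If you want to double-check from the cluster side as well, assuming you have pulled down a kubeconfig for the cluster, the node versions should now reflect the new Kubernetes release:

# Each node should report the upgraded Kubernetes version
kubectl get nodes -o wide
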

Wrap-up and Resources

In this quick blog post, we used Tanzu Mission Control to upgrade a provisioned Tanzu Kubernetes Grid cluster which was running in AWS. All the steps provided in this blog post can be replicated using the TMC CLI as well.

As a reminder, to take real advantage of TMC, I recommend you read the following posts:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

You can get hands-on experience of Tanzu Mission Control yourself over on the VMware Hands-on-Lab website, which is always free!

Regards

 

kasten by veeam header

How to update Kasten to the latest version

This is probably one of the simplest blog posts I’ll publish.

To see if there is an available update for your Kasten install:

  • In the dashboard, click Settings
  • Click on Support

See if there is a notification under the Cluster Information heading.

kasten dashboard support cluster information

Clicking the “upgrade to version x.x.x” button will take you to this Kasten Docs page.

Or you can follow the same instructions, with real-life screenshots, below.

To upgrade, run the following using helm:

helm repo update && \
    helm get values k10 --output yaml --namespace=kasten-io > k10_val.yaml && \
    helm upgrade k10 kasten/k10 --namespace=kasten-io -f k10_val.yaml
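
If you want to confirm which chart version you are currently running and which versions the repo now offers, the following commands, using the same release name and namespace as above, will show you:

# Show the currently deployed K10 release and its chart version
helm list --namespace=kasten-io

# List the chart versions available from the Kasten repo
helm search repo kasten/k10 --versions
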

helm upgrade k10 kasten k10

You will see messages similar to the ones below.

helm upgrade k10 kasten k10 - upgrade in progress - upgrade complete

If I now look at the pods in my “kasten-io” namespace, I can see they are being recreated, as the deployment artifacts will have been updated with the new configuration, including the container images.

helm upgrade k10 kasten - oc get pods -n kasten-io
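
If you want to watch the rollout as it happens, the same check works with kubectl (my screenshot above uses oc, as this cluster is OpenShift, but the command is otherwise identical):

# Watch the Kasten pods being recreated with the new container images
kubectl get pods --namespace=kasten-io --watch
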

And finally, looking back at the cluster information on my Kasten dashboard, I can see I am now running the latest version.

helm upgrade k10 kasten dashboard upgrade complete

Regards

 

LCM Migration vRSLCM Easy installer5

vRSLCM 8.0 – vROPs 7.5 upgrade fails due to Admin password expiry

When the vRealize 8 products dropped, I was like a kid in a sweet shop, upgrading everything as quickly as possible before my customers could, so that I would hit any issues first, and so I could show off the new features.

The issue

During the upgrade of vROPs, I hit an issue where the local admin account in vROPs had expired, yet I had received no warning when logging into the vROPs 7.5 interface with the admin account.

Before I found the issue:

During the upgrade in vRSLCM, my upgrade task failed with “vROPS upgrade failure”, Error Code: LCMVROPSYSTEM25008, Upgrade.pak_pre_apply_validate_failed.

vRSLCM Product update LCMVROPSYSTEM25008

Continue reading vRSLCM 8.0 – vROPs 7.5 upgrade fails due to Admin password expiry