Building a Veeam Lab – Testing Scenarios

In my previous post, I looked at the architecture you may want to implement to test the various features of Veeam Backup and Replication, including features from v10.

I thought it would be a good idea to break down the architecture into sections, and provide some ideas of what features/configurations can be tested in each section. This of course is not an exhaustive list.

I’ve broken the original diagram down into five sections.

Section 1 – VMware Cluster

Building a Veeam Lab – a recommended architecture

This blog post spawns from an interesting discussion between the Veeam Vanguard members on what components are needed to build an effective lab for testing out most of the Veeam features, especially with v10 around the corner. So, I’ve put together something that should hopefully work for this;

I’m not really going to focus on the platform you’ll be running this on; if you are looking to run a Veeam lab, I’d assume you already have some sort of home lab.

  • AD and DNS – I’m just going to presume you have this up and running already.
    • If you are trying to cut down your home lab infrastructure, this blog on running PhotonOS as a DNS and NTP server is helpful
  • vSphere Environment
    • Cluster can be as big as you like or already have as your homelab
    • Standalone host for replicas, or just target one of your existing hosts directly in the replica configuration
  • Hyper-V standalone host
    • Just for running one or two virtual machines. But if you are a Hyper-V admin, you probably already have a lab you can use.
  • Backup Repository
    • There are a few options you can use, but you need to consider where you store your data: if it is on the vSphere environment itself, you may run out of storage fast. An external NAS would be best.

Veeam Components

Veeam Backup and Replication Server:

This is going to be your main virtual machine, and you can multi-home a few components on here, especially if you are not fussed about the performance.

For the database, you should be OK with the built-in SQL Express install.

Sizing minimums:

  • 2 vCPU cores, 8 GB RAM, HDD space 60GB (inclusive of Logs, vPowerNFS, VBR software)
  • Recommendation: 1 vCPU core (physical or virtual) and 4 GB RAM per 10 concurrently running jobs.

There are further requirements for the repository services and database services, see here.

  • Recommended: 4 vCPU cores, 16GB RAM, 60GB–100GB HDD space

Veeam ONE

I would install this on your VBR server; it’s a great product for monitoring your environment, especially the backup components and data protection health.

If you do indeed create any new content that is not available out of the box, make sure you blog about it or share it on the Veeam forums.

Veeam Backup Enterprise Manager

Again, I would install this on the VBR server itself.

From the official documentation: Veeam Backup Enterprise Manager federates backup servers and offers a consolidated view of these servers through a web browser interface. You can centrally control and manage all jobs through a single “pane of glass”, edit and clone jobs, monitor job state and get reporting data across all backup servers.

Although you may only have one VBR server to begin with, you may scale out later. Enterprise Manager also provides REST API capabilities for the Veeam solutions in your environment.
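If you want to poke at that REST API, here is a minimal sketch with curl. The host name and credentials are placeholders for your own lab; by default the Enterprise Manager REST API listens on port 9398, and you authenticate by creating a session first.

```shell
# Placeholder Enterprise Manager host - adjust to your lab
EM="https://em01.lab.local:9398"

# Create a session; the response carries an X-RestSvcSessionId header
# to pass on subsequent calls (|| true so an unreachable lab host
# doesn't abort the script)
curl -sk -X POST -u 'LAB\veeamadmin:password' \
  "$EM/api/sessionMngr/?v=latest" -D - || true

# With the returned session id you can then, for example, list jobs:
# curl -sk -H "X-RestSvcSessionId: <id>" "$EM/api/jobs"
```

Useful for scripting against the lab once you have more than one backup server federated.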

Backup + File Proxy

The proxy server processes jobs and acts as the data mover between components. In a lab setup, it is recommended to have a number of these. In particular, with v10, the new NAS Backup introduces the File Proxy to handle file-level backup of NAS environments.

VBR will install a local proxy service, but that box will already have a lot going on, so it’s worth deploying at least two separate proxies. As you can see in my diagram, I broke the infrastructure into multiple parts, so a proxy in each location is a must;

  • Windows OS backup proxy
  • Linux OS backup proxy – this is a new feature in v10

Sizing: Minimum 2 vCPU cores, 2GB RAM, 20GB HDD

Recommendation: 2 vCPU cores, 4GB RAM, 20GB HDD

WAN Accelerator

I would personally not deploy these up front; they require their own appliance and, to work correctly, an SSD for their cache. However, to understand their placement and how to deploy them in a Veeam architecture, it is worth planning to have them in your environment, even if you only run them for a short time.

Sizing: Minimum 2 vCPU cores, 8GB RAM (ouch!), plus SSD for cache.

Backup Repositories

You are going to need storage space; there is no way around that. I would plan to have at least one main repository, whether this is a file share, an iSCSI disk drive, or even disk local to the VBR server.

Also plan for having additional storage repositories for testing multiple configurations;

  • Linux repo with XFS for block cloning support
    • This could be running on the Linux machine you configure with Veeam Agent, as dual use. Just don’t back up the storage device used to store Veeam backups if you do this!
  • Linux repo with NFS
  • Windows repo with ReFS used as the disk format for block cloning support
    • Consider also running SMB and NFS shares here for the same reasons as above

Sizing: Minimum 2 vCPU cores, 4 GB RAM. Storage is whatever you can afford
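For the XFS repository, the key detail is that block cloning (fast clone) only works if the filesystem was formatted with reflink support. A minimal sketch follows; the device and mount point are assumptions (check lsblk on your repo VM first), and the privileged commands are left commented so you run them deliberately as root.

```shell
# Assumed device and mount point for the Linux repo VM
DEV=/dev/sdb1
MNT=/mnt/veeam-xfs

# XFS must be formatted with reflink enabled (which requires crc) for fast clone
FORMAT_CMD="mkfs.xfs -b size=4096 -m reflink=1,crc=1 $DEV"
echo "$FORMAT_CMD"

# Then, as root on the repo VM:
# mkfs.xfs -b size=4096 -m reflink=1,crc=1 $DEV
# mkdir -p $MNT && mount $DEV $MNT
# xfs_info $MNT | grep -o 'reflink=1'   # confirm reflink is enabled
```

On the Windows side the equivalent is formatting the repository volume as ReFS with a 64KB cluster size.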

AWS and Azure

These again are optional, but it is worth spinning up free accounts with both to test the restore capabilities (Azure, AWS), as well as offloading backups to AWS S3 and Azure Blob via Backup Copy Jobs.

Workload considerations

You are going to need some workloads to test the data protection functions of Veeam, otherwise things will be a little boring.

Agent Backups;

  • 2 x Linux Server
    • Min spec: 2 vCPU cores, 2GB RAM
    • You can also use this as a repository for testing as above.
    • Configure NFS to act as NAS source for NAS Backup
  • 2 x Windows Server
    • Min spec: 2 vCPU cores, 2GB RAM
    • You can also use this as a repository for testing as above.
    • Configure SMB and NFS to act as NAS source for NAS Backup

Active Directory

  • Min spec: 2 vCPU cores, 4GB RAM
  • The likelihood is you already have AD in your lab environment

SQL Server

  • Min spec: 2 vCPU cores, 4GB RAM
  • Recommended: 4 vCPU cores, 8GB RAM
  • You can populate the databases with data from this Microsoft GitHub repository
  • This can be backed up as a virtual machine, but you can also test the Veeam Agent for SQL backup.
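To populate the databases, a rough sketch using sqlcmd to restore one of Microsoft’s sample databases; the backup path and sa password are placeholders, and I’m assuming the WideWorldImporters full backup from that repository here.

```shell
# Placeholder path to the sample backup downloaded from the GitHub repo
BAK=/var/opt/mssql/backup/WideWorldImporters-Full.bak
echo "Restoring from $BAK"

# Inspect the logical file names inside the backup first:
# sqlcmd -S localhost -U sa -P '<password>' \
#   -Q "RESTORE FILELISTONLY FROM DISK = '$BAK'"

# Then restore (add WITH MOVE clauses if your data/log paths differ
# from the paths recorded in the backup):
# sqlcmd -S localhost -U sa -P '<password>' \
#   -Q "RESTORE DATABASE WideWorldImporters FROM DISK = '$BAK'"
```

A populated database gives the SQL transaction log backup and point-in-time restore features something real to work with.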

Nutanix CE;

  • Want to test more integrations and another hypervisor? Go for it!

Virtual Storage Appliances

Several storage vendors (HPE Nimble Storage, NetApp, to name a few) have virtual appliances you can run for testing. If you can get your hands on one of these, it would be good for testing the Veeam Storage integrations, such as storage snapshots, storage replicas and failover capabilities.

Neil Anderson wrote a blog post on the available VSAs.

Sizing recommendations: This depends on the Storage vendor’s appliance but expect them to be hefty.

Final summary

This blog is not an exhaustive list, but if you deployed a setup as per the diagram, you’d be able to test the majority of Veeam’s functionality. Performance may leave a little to be desired, but we’d all love a home lab that performs the same as production.

Total resources needed;

  • Minimum: 18 vCPU cores, 34GB RAM
  • Recommended: upwards of 24 vCPU cores, and 40GB RAM

Again, these are rough figures; if you only run what you need at any given time, you can cut down on the resources. Ultimately, though, you’d want a little more than the minimum so things perform at a decent level and your components don’t fall over running two jobs at once.

As a follow-up, visit this blog post for the list of features and tasks you can perform in each section of the architecture posted above.



Veeam VMCE 2020 – Beta course and exam

I spent the last week of January taking some annual leave and attending a special, invite-only edition of the Veeam VMCE 2020 course at Veeam’s offices in Bucharest.

The 3-day course, or event (as I’ll explain), was hosted by Rasmus Haslund and Bart Pellegrino. Several Veeam Solution Architects, Systems Engineers and channel enablement staff were present, along with 8 members of the Veeam Vanguard community, which is how I got my ticket into the room. You can follow those members below;

The course/event

This was run along the lines of “train the trainer”, with a large focus on active feedback about the course content, what the current VMCE trainers in the room see in the classroom, and what we see in the field during implementations. So, I’d refer to my 3 days as an event rather than a course.

vRSLCM – Replacing vRA key fails with “Failed to apply License key – LCMVRAVACONFIG590007”

The vRA evaluation license in my homelab had expired, and when trying to log in, I was hitting a 402 error.

When replacing the license using vRealize LifeCycle Manager, I received the below errors. This happens because the license key has already expired.

Error Code: LCMVRAVACONFIG590007
Failed to apply License key. Please check whether the license provided is correct and retry.
Failed to get vRA License Key.

The Fix

The fix for this is to re-apply the license using the vRA CLI directly on your vRA node, as per the commands below; then re-inventory your vRA deployment in vRSLCM and finally Retrust with Identity Manager.

###### To check the current license ######

vracli license

###### To remove the license ######

vracli license remove {license key}

###### To add a new license ###### 

vracli license add {license key}

Below are the options to finalise the configuration in vRSLCM.

The Logs

For those of you who are interested in the log output, and for search engines to track;

Error log from vRSLCM UI as in above screenshot

com.vmware.vrealize.lcm.common.exception.EngineException: Failed to get vRA License Key.
	at com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaReplaceLicenseTask.execute(
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$

From the log bundle of vRSLCM

INFO  [pool-2-thread-5] c.v.v.l.d.v.h.VraPreludeInstallHelper -  -- Command to be run : vracli -j license
INFO  [pool-2-thread-5] c.v.v.l.d.v.h.VraPreludeInstallHelper -  -- PRELUDE ENDPOINT HOST :: sc-dc1-vra001.simon.local
INFO  [pool-2-thread-5] c.v.v.l.d.v.h.VraPreludeInstallHelper -  -- COMMAND :: vracli -j license
INFO  [pool-2-thread-5] c.v.v.l.u.SshUtils -  -- Executing command --> vracli -j license
INFO  [pool-2-thread-5] c.v.v.l.u.SshUtils -  -- exit-status: 0
INFO  [pool-2-thread-5] c.v.v.l.u.SshUtils -  -- Command executed sucessfully
INFO  [pool-2-thread-5] c.v.v.l.d.v.h.VraPreludeInstallHelper -  -- Command Status code :: 0 , Output :: {"status_code": 0, "output_data": [{"key": "XXXX-XXXX-XXXX-XXXX", "productName": null, "valid": false, "expirationDate": null, "error": "License expired"}], "error": "", "logs": {"asctime": "2020-01-28T12:55:43Z+0000", "name": "vracli", "processName": "MainProcess", "filename": "", "funcName": "__get_license_result", "levelname": "INFO", "lineno": 325, "module": "license", "threadName": "MainThread", "message": "Running license command: check-serial --serial-number \"XXXX-XXXX-XXXX-XXXX\"", "timestamp": "2020-01-28T12:55:43Z+0000"}}

INFO  [pool-2-thread-5] c.v.v.l.p.c.v.t.VraVaReplaceLicenseTask -  -- Result of fetching License : null
ERROR [pool-2-thread-5] c.v.v.l.p.c.v.t.VraVaReplaceLicenseTask -  -- Failed to get vRA License Key.
INFO  [pool-2-thread-5] c.v.v.l.p.a.s.Task -  -- Injecting task failure event. Error Code : 'LCMVRAVACONFIG590007', Retry : 'true', Causing Properties : '{ CAUSE ::  }' 
com.vmware.vrealize.lcm.common.exception.EngineException: Failed to get vRA License Key.
	at com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaReplaceLicenseTask.execute( [vmlcm-vrapreludeplugin-core-2.1.0-SNAPSHOT.jar!/:?]
	at [vmlcm-engineservice-core-2.1.0-SNAPSHOT.jar!/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_221]
	at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_221]
	at [?:1.8.0_221]



vRealize LifeCycle Manager – New License – Exception while loading DLF

Adding a new license into vRLCM locker fails with;

Exception while loading DLF. Check /var/log/vlcm for more detail

Sorry I didn’t take a screenshot of this in the UI.

In the log file, you will see the error code LCMLICENSINGCONFIG11005.

Overall, the logs are not very helpful;

INFO [pool-2-thread-12] c.v.v.l.p.a.s.Task - -- Injecting task failure event. Error Code : 'LCMLICENSINGCONFIG11005', Retry : 'true', Causing Properties : '{ CAUSE :: }' 
com.vmware.vrealize.lcm.plugin.core.licensing.common.exception.ValidateLicensingException: Exception while loading DLF. Check logs for more detail
	at com.vmware.vrealize.lcm.plugin.core.licensing.task.ValidateLicenseTask.execute( [vmlcm-licensingplugin-core-2.1.0-SNAPSHOT.jar!/:?]
	at [vmlcm-engineservice-core-2.1.0-SNAPSHOT.jar!/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_221]
	at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_221]
	at [?:1.8.0_221]

The Fix

Reboot the vRLCM appliance.