In my previous post, I looked at the architecture you may want to implement to test the various features of Veeam Backup and Replication, including features from v10.
I thought it would be a good idea to break down the architecture into sections, and provide some ideas of what features/configurations can be tested in each section. This of course is not an exhaustive list.
I’ve broken down the original diagram into five sections.
Section 1 – VMware Cluster Continue reading Building a Veeam Lab – Testing Scenarios
This blog post spawns from an interesting discussion among the Veeam Vanguard members on what components are needed to build an effective lab for testing out most of the Veeam features, especially with v10 around the corner. So, I’ve put together something that should hopefully work for this:
I’m not really going to focus on the platform you’ll be running this on; I’d assume you already have some sort of home lab if you are looking to run a Veeam lab.
- AD and DNS – I’m just going to presume you have this up and running already.
- If you are trying to cut down your home lab infrastructure, this blog on running PhotonOS as a DNS and NTP server is helpful
- vSphere Environment
- The cluster can be as big as you like, or whatever you already have as your home lab
- Standalone host for replicas, or just target one of your existing hosts directly in the replica configuration
- Hyper-V standalone host
- Just for running one or two virtual machines. But if you are a Hyper-V admin, you probably already have a lab you can use.
- Backup Repository
- There are a few options you can use, but you need to consider where you store your data; if it is on the vSphere environment itself, you may run out of storage fast. An external NAS would be best.
I have written a second blog post, which covers the testing scenarios for the architecture below.
Veeam Backup and Replication Server:
This is going to be your main virtual machine, and you can co-locate a few components on it, especially if you are not fussed about performance.
For the database, you should be OK with the built-in SQL Express install.
- 2 vCPU cores, 8 GB RAM, 60 GB HDD space (inclusive of logs, vPower NFS, and the VBR software)
- Recommendations for sizing;
1 vCPU core (physical or virtual) and 4 GB RAM per 10 concurrently running jobs.
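The rule of thumb above can be sketched as a quick calculation. This helper function is my own illustration (not a Veeam tool), folding in the lab baseline of 2 vCPU / 8 GB RAM as a minimum:

```python
import math

def vbr_sizing(concurrent_jobs):
    """Rough Veeam B&R server sizing: 1 vCPU core and 4 GB RAM
    per 10 concurrently running jobs, floored at the lab baseline
    of 2 vCPU cores and 8 GB RAM (illustrative helper only)."""
    units = math.ceil(concurrent_jobs / 10)
    cores = max(units, 2)       # never go below the 2 vCPU lab baseline
    ram_gb = max(units * 4, 8)  # never go below the 8 GB RAM lab baseline
    return cores, ram_gb

print(vbr_sizing(25))  # 25 concurrent jobs -> (3, 12)
```

So a lab running a handful of jobs stays at the baseline spec, and the numbers only grow once you push past 20 or so concurrent jobs.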
Continue reading Building a Veeam Lab – a recommended architecture
The last week of January, for me, was spent taking some annual leave and attending a special invite-only edition of the Veeam VMCE 2020 course at their offices in Bucharest.
The 3-day course, or event (as I’ll explain), was hosted by Rasmus Haslund and Bart Pellegrino. Several Veeam Solution Architects, Systems Engineers, and Channel Enablement staff were present, along with 8 members of the Veeam Vanguard community, which is how I got my ticket into the room. You can follow those members below:
This was run along the lines of “train the trainer”, with a large focus on active feedback about the course content, what the current VMCE trainers in the room see in the classroom, and what we see in the field during implementations. So, I’d refer to my 3 days as an event rather than a course. Continue reading Veeam VMCE 2020 – Beta course and exam
Today I decided to deploy VeeamPN between two sites. This is a free VPN solution which is optimized for performance. Veeam produced this tool so their customers can easily set up networking between their production site and DR site, to ensure continuity during a disaster or failover situation.
Below is a diagram of my basic setup.
- Site A – runs the “Network Hub” role
- Site B – runs the “Site Gateway” role
When I deployed the first OVA appliance, I realised there was no option for setting a static IP address; DHCP is a requirement to configure VeeamPN. Once the OVA is deployed and the initial Network Hub configuration is selected, there are no static IP address settings available, unlike an OVA configured for the Site Gateway role.
The VeeamPN OVA is a stripped-down Ubuntu Linux image, which runs Netplan for the networking service.
I configured a static IP address the following way;
- Configure SSH access on the VeeamPN appliance via the management interface.
- Use WinSCP to connect to the appliance
- Browse to /etc/netplan/
- Edit the “01-netplan.yaml” file and save (see below).
- SSH to the VeeamPN appliance and run “sudo netplan apply”, or “sudo netplan --debug apply” for troubleshooting
- Log back onto the management interface using the new IP address.
When you edit the YAML file, you will find that indentation is key (as with any YAML file).
To make life easier, I used the file found here as a baseline:
search: [mydomain, otherdomain]
addresses: [10.10.10.1, 126.96.36.199]
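The two lines above sit under the nameservers section of the netplan file. For context, a complete minimal file along those lines might look like the following; note that the interface name (ens160), the host address, and the gateway are assumptions for illustration, so check your own appliance with "ip a" before editing:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:                        # interface name is an assumption; verify with "ip a"
      dhcp4: no                    # switch off DHCP in favour of the static address
      addresses: [10.10.10.50/24]  # example static IP for the appliance
      gateway4: 10.10.10.1         # example default gateway
      nameservers:
        search: [mydomain, otherdomain]
        addresses: [10.10.10.1, 126.96.36.199]
```

After saving, "sudo netplan apply" picks up the change as described in the steps above.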
Following on from the setup guide for the Nimble Secondary Flash Array, I am going to go through the deployment options, and the settings needed for implementation with Veeam Backup and Replication.
What will be covered in this blog post?
- Quick overview of the SFA
- Deployment Options
- Utilizing features of Veeam with the SFA
- Using a backup repository LUN
- Best practices for use as a backup repository
- Veeam Proxy – Direct SAN Access
- Creating your LUN on the SFA for use as a backup repository
- Setting up your backup repository in Veeam
- vPower NFS Service on the mount server
- Backup Job settings
- SureBackup / SureReplica
- Backup Job – Nimble Storage Primary Snapshot – Configure Secondary destinations for this job
- Encryption – Don’t do it in Veeam!
- Viewing data reduction savings on the Nimble Secondary Storage
My test lab looks similar to the below diagram provided by Veeam (Benefits of using Nimble SFA with Veeam).
Quick overview of the SFA
The SFA is essentially the same as the Nimble Storage arrays before it, with the same hardware and software, but with one key difference: the software has been optimized for data reduction and space-saving efficiency rather than for performance. This means you would purchase the Nimble CS/AF range for production workloads needing high IOPS performance and low latency, while the SFA would be used for your DR environment and backup solution, providing the same low latency for high-speed recovery along with long-term archival of data.
With the deployment of an SFA, you are looking at roughly the same deployment options as the CS/AF array for use with Veeam (this blog, Veeam Blog). However, with the high dedupe ratios you can expect, you are able to store a hell of a lot more data!
So the options are as follows;
- iSCSI or FC LUN to your server as a Veeam Backup Repo.
- Instant VM Recovery
- Backup Repository
- SureBackup / SureReplica
- Virtual Labs
- Replication Target for an existing Nimble.
- Utilizing Veeam Storage Integration
- Backup VMs from Secondary Storage Snapshot
- Control Nimble Storage Snapshot schedules and replication of volumes
If we take option one, we open up a few features directly within Veeam. You can use the high IOPS performance and low latency for features such as Instant VM Recovery, whereby the Veeam Backup and Replication server presents an NFS datastore to your virtual environment and spins up a running copy of your recovered virtual machine quickly and with little fuss.
Continue reading First Look – Leveraging the Nimble Secondary Flash Array with Veeam – Setup guide