Scroll to the bottom of the page and select “Add/Upgrade”.
Select the pak file for installation and follow the wizard.
Create a CSP API Token
For the vROps management pack adapter to communicate with TMC, we need an API token.
Log into https://console.cloud.vmware.com
Change to the correct organisation that contains your TMC instance.
Click your name in the top right-hand corner and select “My Account”.
Select the “API Tokens” tab, and then click the “Generate a new API Token” button.
Set your API Token name, expiry, and access control as required. Then click the generate button.
You will be shown a dialog box with your generated token. Save this in a safe place; we will use it later on.
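If you want to sanity-check the token before moving on, you can exchange it for a short-lived access token. Here’s a minimal sketch in Python using the requests library; the URL is the public CSP token-exchange endpoint as I understand it, and the token value is a placeholder.

```python
# Minimal sketch: exchange a CSP API (refresh) token for an access token.
# The token value is a placeholder; the endpoint is the public CSP
# token-exchange URL as I understand it.
import requests

CSP_AUTH_URL = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
api_token = "<your-csp-api-token>"

resp = requests.post(CSP_AUTH_URL, data={"refresh_token": api_token})
resp.raise_for_status()
body = resp.json()
print("Token is valid, access token expires in", body.get("expires_in"), "seconds")
```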
Connect the vRealize Operations management pack adapter to Tanzu Mission Control
In the vROps UI, go to Administration and, under Solutions, choose “Other Accounts”, then click the “Add account” button.
From the account type list, choose Tanzu Mission Control.
Fill out the necessary details on the New Account screen.
For the credential, click the + symbol, add a name for the credential, and paste in the CSP token you created earlier.
Select your newly created credential.
Select the Validate button.
Hopefully you get a success message.
You will see the account object in the Other Accounts view.
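If you prefer to double-check from outside the UI, the vROps Suite API can confirm the adapter instance was created. A rough sketch, assuming the usual /suite-api endpoints, placeholder hostname and credentials, and that the account name contains “TMC” or “Tanzu”:

```python
# Rough sketch: confirm the new account via the vROps Suite API.
# Hostname, credentials and the "tanzu"/"tmc" name match are assumptions.
import requests

VROPS = "https://vrops.example.local"   # placeholder vROps FQDN
headers = {"Accept": "application/json", "Content-Type": "application/json"}

# Acquire an auth token
auth = requests.post(f"{VROPS}/suite-api/api/auth/token/acquire",
                     json={"username": "admin", "password": "<password>"},
                     headers=headers, verify=False)
auth.raise_for_status()
headers["Authorization"] = f"vRealizeOpsToken {auth.json()['token']}"

# List adapter instances and look for the Tanzu Mission Control account
adapters = requests.get(f"{VROPS}/suite-api/api/adapters",
                        headers=headers, verify=False).json()
for adapter in adapters.get("adapterInstancesInfoDto", []):
    name = adapter["resourceKey"]["name"]
    if "tanzu" in name.lower() or "tmc" in name.lower():
        print("Found adapter instance:", name)
```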
Auto-Discovering Tanzu Kubernetes Grid Clusters
Now that you have your account added, whenever you provision a new cluster using Tanzu Mission Control, cAdvisor will be configured in the Kubernetes cluster and a Kubernetes account type will be created in vROps automatically for you.
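A quick way to confirm that happened in a freshly provisioned cluster is to look for the cAdvisor DaemonSet with the kubernetes Python client. This is just a sketch: it assumes your kubeconfig already points at the new TKG cluster, and that the collector lands as a DaemonSet with “cadvisor” in its name, which is my assumption rather than anything from the management pack documentation.

```python
# Sketch: look for a cAdvisor DaemonSet in the newly provisioned cluster.
# Assumes the kubeconfig context points at the TKG cluster and that the
# collector is a DaemonSet with "cadvisor" in its name (an assumption).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

for ds in apps.list_daemon_set_for_all_namespaces().items:
    if "cadvisor" in ds.metadata.name.lower():
        print(f"{ds.metadata.namespace}/{ds.metadata.name}: "
              f"{ds.status.number_ready}/{ds.status.desired_number_scheduled} ready")
```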
Below I’ve created a cluster in AWS, and we can see the object has been created in vROps.
And finally, here is my cluster showing in one of the Kubernetes dashboards.
This is a simple feature to implement, but it can make a massive difference to your ability to monitor your TKG clusters from the infrastructure view that vROps provides. As your users create clusters via TMC, they don’t need to interact with the monitoring platform to ensure visibility.
The SFA is essentially the same as the Nimble Storage devices before it: the same hardware and software, but with one key difference. The software has been optimized for data reduction and space-saving efficiencies rather than for performance. This means you would purchase the Nimble CS/AF range for production workloads that need high IOPS performance and low latency, while the SFA would be used for your DR environment or backup solution, providing the same low latency to allow for high-speed recovery and for long-term archival of data.
With the deployment of an SFA, you are looking at roughly the same deployment options as the CS/AF array for use with Veeam (This blog, Veeam Blog). However, with the high dedupe expectancy, you are able to store a hell of a lot more data!
So the options are as follows:
- iSCSI or FC LUN to your server as a Veeam Backup Repo
- Instant VM Recovery
- SureBackup / SureReplica
- Replication Target for an existing Nimble
- Utilizing Veeam Storage Integration
- Backup VMs from Secondary Storage Snapshot
- Control Nimble Storage Snapshot schedules and replication of volumes (see the sketch after this list)
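Most of this is driven from the Veeam console, but it can be handy to see what the array is doing underneath, for example taking an on-demand snapshot of a volume yourself. Below is a rough Python sketch against the Nimble REST API; the array address, credentials, and volume name are placeholders, and the endpoint paths reflect my understanding of the v1 API, so verify them against your own array before relying on this.

```python
# Rough sketch: on-demand snapshot of a Nimble volume via the array's REST API.
# Array address, credentials and volume name are placeholders; endpoint paths
# are my understanding of the Nimble REST API v1.
import requests

ARRAY = "https://nimble-array.example.local:5392"
creds = {"data": {"username": "admin", "password": "<password>"}}

# Authenticate and grab a session token
tok = requests.post(f"{ARRAY}/v1/tokens", json=creds, verify=False)
tok.raise_for_status()
headers = {"X-Auth-Token": tok.json()["data"]["session_token"]}

# Look up the volume, then create a snapshot of it
vols = requests.get(f"{ARRAY}/v1/volumes", params={"name": "veeam-repo-01"},
                    headers=headers, verify=False).json()["data"]
snap = {"data": {"name": "manual-snap-001", "vol_id": vols[0]["id"]}}
resp = requests.post(f"{ARRAY}/v1/snapshots", json=snap, headers=headers, verify=False)
resp.raise_for_status()
print("Snapshot created:", resp.json()["data"]["name"])
```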
If we take option one, we open up a few options directly with Veeam. You can use the high IOPS performance and low latency for features such as Instant VM Recovery, whereby the Veeam Backup & Replication server presents an NFS datastore to your virtual environment and spins up a running copy of your recovered virtual machine quickly and with little fuss.
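If you are curious what that looks like on the vSphere side, the backup server’s vPower NFS share simply appears as another NFS datastore during the recovery. Here is a minimal pyVmomi sketch to spot it; the vCenter details are placeholders, and the “VeeamBackup_” naming is an assumption based on the default behaviour.

```python
# Minimal sketch: spot the Veeam vPower NFS datastore mounted during an
# Instant VM Recovery. vCenter details are placeholders, and the default
# "VeeamBackup_" datastore naming is an assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "NFS" and ds.summary.name.startswith("VeeamBackup_"):
        print("vPower NFS datastore mounted:", ds.summary.name)
Disconnect(si)
```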
Controlling Nimble Storage snapshots and restoring files from the Veeam console
Backing up a Virtual Machine from a Nimble Snapshot
Backing up a Virtual machine to a Nimble Snapshot (Snapshot-only Job)
Replicating a Virtual Machine from a Nimble Snapshot
SureBackup from Nimble Snapshots
Following on from part one of this first-look, two-part blog series, where we added the Nimble Storage arrays into the Veeam software, we continue to see how this integration piece works.
Now that we have added the Nimble Storage Array
So before we get started, we can now see the datastores of the Nimble Storage Array, and the snapshots of each datastore. In the second screenshot, we can see the enumeration of VMware virtual machines and which host they were attached to.
As a part of the Veeam Vanguard program, I have been given access to the beta version of Veeam 9.5, and in this blog post I will cover some of the integration components between Nimble Storage and Veeam, announced by Veeam back in April. If you have been following the Veeam forums, then you’ll know that there is a very active post where the forum users are all pitching in on which storage vendor they think should be next to get integration.
Note: This blog post content is created from a beta version of Veeam 9.5, any features, dialog boxes, names and such are subject to change before the final public release.
What I’ll be covering in this series of blog posts:
Veeam's advanced integration with Nimble Storage provides additional protection and recovery options that are not available without direct integration and joint development efforts. It provides the ability to:
- Schedule the creation of Nimble storage snapshots containing application-consistent VM images, and storage snapshot replication orchestration.
- Restore from Nimble storage snapshots or their Replicated Copies (entire VM, guest files and application items).
- Backup from Nimble storage snapshots or their Replicated Copies.
The only other company to have this kind of integration is NetApp, which shows the high regard in which Nimble Storage is held by Veeam. It also shows that Nimble Storage has been a dedicated partner working with Veeam over the past few years, and it's paying off with this integration offering.
The test environment
Luckily for me, Veeam have produced a reference architecture diagram which pretty much describes the test environment used in this preview blog post. In this environment there are no tape or Cloud Connect components.