Monthly Archives: June 2017


First Look – Leveraging the Nimble Secondary Flash Array with Veeam – Setup guide

Following on from the setup guide for the Nimble Secondary Flash Array, I am going to go through the deployment options and the settings needed for implementation with Veeam Backup and Replication.

What will be covered in this blog post?

  • Quick overview of the SFA
  • Deployment Options
    • Utilizing features of Veeam with the SFA
    • Using a backup repository LUN
  • Best practices for use as a backup repository
    • Veeam Proxy – Direct SAN Access
    • Creating your LUN on the SFA for use as a backup repository
    • Setting up your backup repository in Veeam
    • vPower NFS Service on the mount server
    • Backup Job settings
    • SureBackup / SureReplica
    • Backup Job – Nimble Storage Primary Snapshot – Configure Secondary destinations for this job
    • Encryption – Don’t do it in Veeam!
  • Viewing data reduction savings on the Nimble Secondary Storage
  • Summary

My test lab looks similar to the below diagram provided by Veeam (Benefits of using Nimble SFA with Veeam).

Nimble Storage Veeam Architecture diagram

Quick overview of the SFA

The SFA is essentially the same as the Nimble Storage devices before it, with the same hardware and software, but with one key difference: the software has been optimized for data reduction and space-saving efficiency rather than for raw performance. This means you would purchase the Nimble CS/AF range for production workloads that need high IOPS performance and low latency, while the SFA would serve your DR environment and backup solution, providing the same low latency to allow for high-speed recovery, plus long-term archival of data.
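To put the data-reduction focus in concrete terms, the savings figure an array like this reports can be derived from the logical bytes written by hosts versus the physical bytes actually consumed. The sketch below is a generic illustration of that arithmetic, not Nimble's actual accounting:

```python
def reduction_stats(logical_bytes: int, physical_bytes: int) -> dict:
    """Compute a data-reduction ratio and the percentage of space saved.

    logical_bytes:  data written by hosts before dedupe/compression.
    physical_bytes: space actually consumed on the array.
    """
    if physical_bytes <= 0:
        raise ValueError("physical_bytes must be positive")
    ratio = logical_bytes / physical_bytes
    saved_pct = (1 - physical_bytes / logical_bytes) * 100 if logical_bytes else 0.0
    return {"ratio": round(ratio, 2), "saved_pct": round(saved_pct, 1)}

# Example: 40 TiB of backup data landing in 5 TiB of physical space
stats = reduction_stats(40 * 2**40, 5 * 2**40)
print(f"{stats['ratio']}:1 reduction, {stats['saved_pct']}% saved")
# prints: 8.0:1 reduction, 87.5% saved
```

With backup workloads, where successive restore points share mostly identical blocks, ratios well beyond what primary storage achieves are exactly what this class of array is aiming for.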

Deployment options

With the deployment of an SFA, you are looking at roughly the same deployment options as the CS/AF arrays for use with Veeam (this blog, Veeam Blog). However, with the high deduplication ratios expected, you are able to store a hell of a lot more data!

So the options are as follows:

  1. iSCSI or FC LUN to your server as a Veeam Backup Repo.
    • Instant VM Recovery
    • Backup Repository
    • SureBackup / SureReplica
    • Virtual Labs
  2. Replication Target for an existing Nimble.
    • Utilizing Veeam Storage Integration
      • Backup VMs from Secondary Storage Snapshot
      • Control Nimble Storage Snapshot schedules and replication of volumes

If we take option one, we open up a few features directly with Veeam. You can use the high IOPS performance and low latency for features such as Instant VM Recovery, whereby the Veeam Backup and Replication server presents an NFS datastore to your virtual environment and spins up a running copy of your recovered virtual machine quickly, with little fuss.

Veeam Instant VM Recovery


Setting up a Nimble Secondary Flash Array from scratch

Nimble Storage have released the newest addition to their line-up, the SFA, or to give it its full name, the Secondary Flash Array. In this post, we are going to look at how to set one up from scratch.

Taken from the following datasheet;

The Nimble Secondary Flash Array represents a new type of secondary data storage optimized for both capacity and performance. It adds high-performance flash storage to a capacity-optimized architecture for a unique backup platform that lets you put your backup data to work.

The Nimble Secondary Flash Array is optimized for backup, disaster recovery and secondary data storage. By using Flash, it lets you put your backup data to work for Dev/Test, QA and analytics. Instantly backup and recover data from any primary storage system. And our integration with Veeam backup software simplifies data lifecycle management and provides a path to cloud archiving.
Before you get started

As you can imagine, it’s as easy as setting up one of the existing Nimble arrays, as I blogged about previously (setup via GUI, via CLI). In fact, the configuration via CLI is exactly the same!

First things to note: the SFA ships with NimOS 4.x, which is now HTML5-based, and there is an extra port requirement for access if you have a firewall or web proxy in the path. TCP 5392 is used for REST API access. In my testing, I found that a Sophos web filter set up in transparent mode caused issues with the Nimble login page; when I removed it from the equation, Firefox gave me a pop-up window as per the below.

Nimble SFA port 5392 pop-up
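If the login page misbehaves, it is worth confirming that TCP 5392 is actually reachable from your workstation before blaming the array. A quick TCP connect test does the job; the hostname in the example is a placeholder for your array’s management address:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("nimble-sfa.example.local", 5392)
# True means the REST API port is reachable from here
```

If this returns False from behind your proxy or firewall but True from a host on the same subnet as the array, the filtering device is the likely culprit, as it was in my case.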

How to setup the array – initial configuration
  • Launch the Nimble Setup Manager; this can be downloaded from https://infosight.nimblestorage.com

You’ll see below that I actually used an older version, and it still worked fine for discovering the array. When you click Next, you’ll be presented with a dialog box explaining that your default browser will be launched to continue the setup (as part of the new HTML5 interface).

Nimble Setup Manager – connect to SFA
Nimble Setup Manager – launch web browser to connect to SFA

  • Accept the certificate error, as the Nimble uses a self-signed certificate on its web interface
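If you would rather script against the web interface (or the REST API on port 5392) than click through the browser warning, the same self-signed certificate gets in the way. One option in Python is an SSL context with verification disabled, which is acceptable on a trusted management network but nowhere else; the URL shown is a placeholder:

```python
import ssl
import urllib.request

def insecure_context() -> ssl.SSLContext:
    """Build an SSL context that skips certificate and hostname checks.

    Only appropriate for lab access to a self-signed management interface.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # accept the self-signed certificate
    return ctx

# e.g. urllib.request.urlopen("https://nimble-sfa.example.local:5392",
#                             context=insecure_context())
```

The longer-term fix is, of course, to replace the self-signed certificate with one your clients trust, rather than teaching every script to ignore it.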


VMware vSphere 6.5 Host Resources Deep Dive proof copies

Now available – VMware vSphere 6.5 Host Resources Deep Dive

It’s here!!!

I am sure many of you have been following this technical book closely, the latest publication by Frank Denneman. And now it’s available to order! There is already a fantastic opening statement written by Duncan Epping today, as the book is officially published.

The book focuses on four key physical host component areas, and doesn’t cover VMware software features such as HA and DRS in depth; only the host resources:

  • CPU
  • Memory
  • Storage
  • Network

As can be seen from the striking cover below.

VMware vSphere 6.5 Host Resources Deep Dive front cover

If you haven’t already, I recommend that you give the @Hostdeepdive twitter account a follow, or at least a browse through to see some of the snippets released.

If you have read the VMware vSphere Clustering Technical Deepdive books, you already know what to expect in terms of the technical level of content; however, the sneak peeks show that Frank and Niels have gone even deeper than you can imagine.

See the below example,

I’ve been tracking this publication closely and wished I could have been a reviewer, but I now have my copy on order, so expect a review soon!

Where to buy?

Below are the links from everyone’s favourite retailer to purchase the paperback copy; I believe the eBook will be available after VMworld:

Amazon Book Blurb

I couldn’t really write this up better than what’s on Amazon:

 The VMware vSphere 6.5 Host Resources Deep Dive is a guide to building consistent high-performing ESXi hosts. A book that people can’t put down. Written for administrators, architects, consultants, aspiring VCDX-es and people eager to learn more about the elements that control the behavior of CPU, memory, storage and network resources.

This book shows that we can fundamentally and materially improve the systems we’re building. We can make the currently running ones consistently faster by deeply understanding and optimizing our systems.

The reality is that specifics of the infrastructure matter. Details matter. Especially for distributed platforms which abstract resource layers, such as NSX and vSAN. Knowing your systems inside and out is the only way to be sure you’ve properly handled those details. It’s about having a passion for these details. It’s about loving the systems we build. It’s about understanding them end-to-end.

This book explains the concepts and mechanisms behind the physical resource components and the VMkernel resource schedulers, which enables you to:

  • Optimize your workload for current and future Non-Uniform Memory Access (NUMA) systems.
  • Discover how vSphere Balanced Power Management takes advantage of the CPU Turbo Boost functionality, and why High Performance does not.
  • How the 3-DIMMs-per-channel configuration results in a 10-20% performance drop.
  • How the TLB works and why it is bad to disable large pages in virtualized environments.
  • Why 3D XPoint is perfect for the vSAN caching tier.
  • What queues are and where they live inside the end-to-end storage data paths.
  • Tune VMkernel components to optimize performance for VXLAN network traffic and NFV environments.
  • Why Intel's Data Plane Development Kit significantly boosts packet processing performance.

Finally, to round off, here is one of my recent favourite tweets from Frank that I’ve also been sharing around work.

Regards

Dean