Tag Archives: Storage


Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub

What is the vSphere Kubernetes Driver Operator (VDO)?

This Kubernetes Operator was designed and created as part of the VMware and IBM Joint Innovation Labs program, and we also talked about it at VMworld 2021 in a joint session with IBM and Red Hat. The aim is to simplify the deployment and lifecycle of the VMware storage and networking Kubernetes driver plugins on any Kubernetes platform, including Red Hat OpenShift.

This vSphere Kubernetes Driver Operator (VDO) exposes custom resources to configure the CSI and CNS drivers and, via a Go-based CLI tool, introduces validation and error checking as well, making the drivers simple to deploy and configure.
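Configuration is driven by those custom resources. As a rough sketch of what one looks like, here is a minimal VDOConfig manifest; the apiVersion, namespace and field names are taken from my reading of the project's sample manifests and may have changed, so treat them as illustrative and check the GitHub repo for the current schema:

```yaml
apiVersion: vdo.vmware.com/v1alpha1
kind: VDOConfig
metadata:
  name: vdo-sample
  namespace: vmware-system-vdo   # namespace is illustrative
spec:
  cloudProvider:
    vsphereCloudConfigs:
      - vsphere-config           # name of a VsphereCloudConfig resource
  storageProvider:
    vsphereCloudConfig: vsphere-config
```

The referenced VsphereCloudConfig resource is where the vCenter address and credentials secret are supplied.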

The Kubernetes Operator currently covers the existing CPI, CSI and CNI drivers, which are separately maintained projects found on GitHub.

This operator will remain CNI agnostic, so CNI management will not be included; Antrea, for example, already has its own operator.

Below is the high-level architecture; you can read a more detailed deep dive here.

vSphere Kubernetes Drivers Operator - Architecture Topology

Installation Methods

You have two main installation methods, and your choice will also affect the pre-requisites below.

If you are using Red Hat OpenShift, you can install the Operator via Operator Hub, as this is a certified Red Hat Operator. You can also configure the CPI and CSI driver installations via the UI.

Alternatively, you can install the manual way using the vdoctl CLI tool; this method would also be your route if you are using a vanilla Kubernetes installation.
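For a flavour of the manual route, the workflow looks roughly like the below; the subcommands are the ones I'm aware of from the project's README at the time of writing, and the spec URL is a placeholder, so verify against `vdoctl --help` and the repo before running anything:

```sh
# Deploy the operator into the cluster of your active kubectl context
vdoctl deploy --spec <operator-spec-url>

# Interactively configure the CPI/CSI drivers (vCenter details, credentials)
vdoctl configure drivers

# Check the state of the operator and drivers
vdoctl status
```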

This blog post will cover the UI method using Operator Hub.

Pre-requisites

Continue reading Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub


Kubestr – Open-Source Kubernetes Storage benchmarking tool

Kubernetes is a platform that needs little introduction in most settings these days. However, it can still be a complex beast to get to grips with, and configuring your infrastructure components correctly is key to providing a successful Kubernetes environment for your applications.

One such area is storage.

On your Kubernetes platform, you need to ensure a correct storage configuration and benchmark your storage performance, just as you would with any other platform, and then test container storage features such as snapshots of the persistent volumes.

The configuration for each vendor when integrating with Kubernetes will be different, but the outcomes should be the same.
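For example, a vendor's integration ultimately surfaces to the cluster as a StorageClass; the provisioner name changes per vendor, but the manifests that consume it stay the same. A generic sketch (names are illustrative, and the provisioner shown is the AWS EBS CSI driver since my cluster below is EKS):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block              # illustrative name
provisioner: ebs.csi.aws.com    # swap for your vendor's CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```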

What is Kubestr?

Enter Kubestr, the open-source tool from Kasten by Veeam, designed to help ensure your storage is configured correctly, benchmark its performance, and test features such as snapshots.

Getting started with Kubestr

Simply download Kubestr for the platform you wish to run the tool from. I'll be running it from my Mac OS X machine, which has connectivity to my Kubernetes platform (AWS EKS; I used this blog to create it).

I extracted the zip file and have the Kubestr command line tool available in the output folder.

kubestr download and extract

Running the tool for the first time will run some checks and output a number of useful pieces of information on how we can use the tool, which I will start to break down as we continue.

For Kubestr to run, it will use the active context in your kubectl configuration file.
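In practice that means a first run looks something like the below; the StorageClass and VolumeSnapshotClass names are illustrative for my EKS cluster, so substitute your own (you can list them with `kubectl get storageclass`):

```sh
# Kubestr talks to whatever cluster your current context points at
kubectl config current-context

# Run the baseline checks with no arguments
./kubestr

# Benchmark a StorageClass with fio (name and size are illustrative)
./kubestr fio -s gp2 -z 20Gi

# Test CSI snapshot/restore capability against a VolumeSnapshotClass
./kubestr csicheck -s gp2 -v csi-snapshot-class
```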

kubestr first run

Green box – here we have our initial checks:

  • Kubernetes version
  • RBAC check
  • Kubernetes Aggregated layer check

Then we have the details of the available storage provisioners installed on our cluster. You can see that I have two installed. Continue reading Kubestr – Open-Source Kubernetes Storage benchmarking tool


First Look – Leveraging the Nimble Secondary Flash Array with Veeam – Setup guide

Following on from the setup guide for the Nimble Secondary Flash Array, I am going to go through the deployment options and the settings needed for implementation with Veeam Backup and Replication.

What will be covered in this blog post?

  • Quick overview of the SFA
  • Deployment Options
    • Utilizing features of Veeam with the SFA
    • Using a backup repository LUN
  • Best practices to use as backup repository
    • Veeam Proxy – Direct SAN Access
    • Creating your LUN on the SFA for use as a backup repository
    • Setting up your backup repository in Veeam
    • vPower NFS Service on the mount server
    • Backup Job settings
    • SureBackup / SureReplica
    • Backup Job – Nimble Storage Primary Snapshot – Configure Secondary destinations for this job
    • Encryption – Don’t do it in Veeam!
  • Viewing data reduction savings on the Nimble Secondary Storage
  • Summary

My test lab looks similar to the below diagram provided by Veeam (Benefits of using Nimble SFA with Veeam).

Nimble Storage Veeam Architecture diagram

Quick overview of the SFA

The SFA is essentially the same as the Nimble Storage arrays that came before it, with the same hardware and software, but with one key difference: the software has been optimized for data reduction and space-saving efficiency rather than for performance. This means you would purchase the Nimble CS/AF range for production workloads, with high IOPS performance and low latency, while the SFA would be used for your DR environment and backup solution, providing the same low latency to allow for high-speed recovery, plus long-term archival of data.

Deployment options

With the deployment of an SFA, you are looking at roughly the same deployment options as the CS/AF array for use with Veeam (This blog, Veeam Blog). However, with the high dedupe expectancy, you are able to store a hell of a lot more data!

So the options are as follows:

  1. iSCSI or FC LUN to your server as a Veeam Backup Repo.
    • Instant VM Recovery
    • Backup Repository
    • SureBackup / SureReplica
    • Virtual Labs
  2. Replication Target for an existing Nimble.
    • Utilizing Veeam Storage Integration
      • Backup VMs from Secondary Storage Snapshot
      • Control Nimble Storage Snapshot schedules and replication of volumes

If we take option one, we open up a few options directly with Veeam. You can use the high IOPS performance and low latency for features such as Instant VM Recovery, whereby the Veeam Backup and Replication server hosts an NFS datastore to your virtual environment and spins up a running copy of your recovered virtual machine quickly, with little fuss.

Veeam Instant VM Recovery Continue reading First Look – Leveraging the Nimble Secondary Flash Array with Veeam – Setup guide


PowerCLI – Setup Host networking and storage ready for ISCSI LUNs

So I am no scripting master; my PowerShell knowledge is still something I want to expand. During an install last week I had a number of hosts to set up from scratch, so I decided to do this via PowerCLI, as a lot of the tasks were repetitive: setting up the vSwitch networking and iSCSI configuration for each host.

For those of you new to scripting, I've included screenshots to accompany the commands so you can see what's going on in the GUI.

Note: the full code without the breaks is at the end of this post

#Setup which host to target 
$VMhost = 'hostname'
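The rest of the script builds on that variable. As a hedged sketch of the kind of commands involved (all server names, NICs and IPs below are illustrative placeholders, not my lab's actual values):

```powershell
# Connect to vCenter and grab the host object
Connect-VIServer -Server 'vcenter.lab.local'
$esxi = Get-VMHost -Name $VMhost

# Create a vSwitch and an iSCSI VMkernel port
$vswitch = New-VirtualSwitch -VMHost $esxi -Name 'vSwitch1' -Nic 'vmnic2'
New-VMHostNetworkAdapter -VMHost $esxi -VirtualSwitch $vswitch -PortGroup 'iSCSI-A' `
    -IP '10.0.50.10' -SubnetMask '255.255.255.0'

# Enable the software iSCSI adapter and add the array's discovery target
Get-VMHostStorage -VMHost $esxi | Set-VMHostStorage -SoftwareIScsiEnabled:$true
$hba = Get-VMHostHba -VMHost $esxi -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address '10.0.50.100'
```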

Continue reading PowerCLI – Setup Host networking and storage ready for ISCSI LUNs