Tag Archives: openshift

Red Hat OpenShift + VMware Header

OpenShift on VMware – Integrating with vSphere Storage, Networking and Monitoring.

I was honoured to be a guest on the “Ask an OpenShift Admin” webinar recently, where I had the chance to talk about OpenShift on VMware, always a hot topic, and how we co-innovate and work together on solutions.

You can watch the full session below. Keep reading for the content I didn’t get to cover, which I’ve captured in a separate recording.

Ask an OpenShift Admin (Ep 54): OpenShift on VMware and the vSphere Kubernetes Drivers Operator

However, I had a number of topics and demos planned that we never had time to cover, so here is the full content I had prepared.

Some of the areas covered in this webinar and my additional session were:

  • Answering questions live from the viewers (anything on the table)
  • OpenShift together with VMware
  • Common issues and best practices for deploying OpenShift on VMware vSphere
  • Consuming your vSphere Storage in OpenShift
  • Integrating with the VMware Network stack
  • Infrastructure Up Monitoring
OpenShift on VMware – Integrating with vSphere Storage, Networking and Monitoring

Resources

Regards

Dean Lewis

vSphere Kubernetes Drivers Operator - Red Hat OpenShift - Header

Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub

What is the vSphere Kubernetes Driver Operator (VDO)?

This Kubernetes operator was designed and created as part of the VMware and IBM Joint Innovation Labs program, with the aim of simplifying the deployment and lifecycle of the VMware storage and networking Kubernetes driver plugins on any Kubernetes platform, including Red Hat OpenShift. We also talked about this at VMworld 2021 in a joint session with IBM and Red Hat.

The vSphere Kubernetes Driver Operator (VDO) exposes custom resources to configure the CSI and CPI drivers and, using a Golang-based CLI tool, introduces validation and error checking as well, making the drivers simple to deploy and configure.

The operator currently covers the existing CPI and CSI drivers, which are separately maintained projects found on GitHub.

The operator will remain CNI agnostic, so CNI management will not be included; Antrea, for example, already has its own operator.

Below is the high-level architecture; you can read a more detailed deep dive here.

vSphere Kubernetes Drivers Operator - Architecture Topology

Installation Methods

You have two main installation methods, which will also affect the pre-requisites below.

If you are using Red Hat OpenShift, you can install the operator via Operator Hub, as it is a certified Red Hat Operator. You can also configure the CPI and CSI driver installations via the UI.

Alternatively, you can install it manually and use the vdoctl CLI tool; this is also your route if you are using a vanilla Kubernetes installation.
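
For reference, the manual route looks roughly like the below sketch. The subcommand names are taken from the project’s README at the time of writing and the spec path is a placeholder, so treat this as illustrative and check the GitHub repository for your version.

# Deploy the operator from a released deployment spec (placeholder path)
vdoctl deploy --spec <path-or-URL-to-the-VDO-deployment-yaml>
# Interactively configure the vSphere credentials and CPI/CSI drivers
vdoctl configure drivers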

This blog post will cover the UI method using Operator Hub.

Pre-requisites

Continue reading Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub

vRA OpenShift Tanzu Mission Control Header

Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool, “openshift-install”, provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.
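
For context, a trimmed example of what this looks like by hand is below. All values are placeholders, and the platform fields follow the vSphere IPI install-config schema for the OpenShift 4.x installer current at the time of writing, so verify them against the documentation for your installer version.

mkdir ocp-cluster
cat > ocp-cluster/install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com               # {base_domain}
metadata:
  name: ocp1                          # {cluster_id}
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: administrator@vsphere.local
    password: 'REPLACE_ME'
    datacenter: Datacenter
    defaultDatastore: Datastore1
    cluster: Cluster1
    network: VM-Network
    apiVIP: 192.168.10.5              # internally load-balanced API address
    ingressVIP: 192.168.10.6
pullSecret: 'REPLACE_ME'              # from your Red Hat Cloud account
sshKey: 'ssh-ed25519 AAAA...'         # bastion access to the nodes
EOF
# The installer provisions and configures the virtual machines (IPI)
openshift-install create cluster --dir=ocp-cluster --log-level=info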

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get a CSP API access token for vRA Cloud or the on-premises edition.
  • Tanzu Mission Control access with the ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here, and alter it to your needs, including which versions of OpenShift you want to deploy.
  • SSH Key for a bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the “openshift-install” command-line tool
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register the OpenShift Cluster with vRA
    • Create a service account on the cluster (a quick sketch of this step follows the list below)
    • Collect details of the cluster
    • Register the cluster as a Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register the OpenShift Cluster with Tanzu Mission Control
    • Using the API
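
As referenced above, creating the service account is standard Kubernetes tooling. A minimal sketch is below; the account name and binding are hypothetical, and cluster-admin is used only for brevity, so scope the permissions to your needs.

# Hypothetical service account name for vRA/Code Stream to use
oc create serviceaccount vra-codestream -n kube-system
# Bind a role to it; cluster-admin is shown for brevity only
oc create clusterrolebinding vra-codestream-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:vra-codestream
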
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you prefer. Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream

kasten vmware red hat openshift header

How to install and configure Kasten to protect container workloads on Red Hat OpenShift and VMware vSphere

In this blog post I’m going to cover deploying and configuring Kasten, the container-based enterprise backup software now owned by Veeam Software.

This deployment will be inside my Red Hat OpenShift Environment which is running on top of VMware vSphere.

I’ll be protecting a cool gaming application that has data persistence using MongoDB.

Installing Kasten on Red Hat OpenShift

In this guide, I am going to use Helm; you can learn how to install it here.

Create an OpenShift project (Kubernetes namespace) called “kasten-io”

oc new-project kasten-io


Next we are going to use Helm to install the Kasten software into our OpenShift cluster.
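
If you haven’t already added the Kasten Helm chart repository, add it first; the repository URL below is the one published in Kasten’s documentation.

helm repo add kasten https://charts.kasten.io/
helm repo update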

helm install k10 kasten/k10 --namespace=kasten-io --set scc.create=true --set route.enabled=true --set route.path="/k10" --set auth.tokenAuth.enabled=true

Breaking down the command arguments:

  • --set scc.create=true
    • This creates the correct Security Context Constraints for the users created by the install. This is needed in OpenShift, as the out-of-the-box security context stance is stricter than that of a vanilla Kubernetes install.
  • --set route.enabled=true
    • This creates a route in OpenShift using the default ingress, so that the Kasten dashboard is accessible externally. This will use the default cluster ID domain name.
  • --set route.path="/k10"
    • This sets the route path for the redirection of the dashboard. Without this, your users will need to go to http://{FQDN}/ and append the path (k10) to the end.
  • --set auth.tokenAuth.enabled=true
    • This enables token-based authentication for access to the Kasten dashboard.
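
With token authentication enabled, you will need a bearer token to log in to the dashboard. A minimal sketch for retrieving one is below; it assumes the Helm release is named “k10” (which yields a “k10-k10” service account) and a cluster version that still creates service account token secrets automatically, so adjust for your environment.

# List the secrets attached to the k10-k10 service account
oc --namespace kasten-io describe serviceaccount k10-k10
# Read the token from the token secret named in the output above
oc --namespace kasten-io describe secret <k10-k10-token-secret-name>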

helm install k10 kasten kasten-io

Continue reading How to install and configure Kasten to protect container workloads on Red Hat OpenShift and VMware vSphere

VMware Tanzu Mission Control Red Hat OpenShift header

Enabling Tanzu Mission Control Data Protection on Red Hat OpenShift

Just a quick blog on how to get the Data Protection feature of Tanzu Mission Control working on Red Hat OpenShift. By default, you will find that once the Data Protection feature is enabled, the pods for the Restic component of Velero error.

  • Enable the Data Protection feature on your OpenShift cluster

TMC Cluster Overview enable data protection

  • You will see the UI change to show it’s enabling the feature.

TMC Enabling Data Protection 2

  • You will see the Velero namespace created in your cluster.

TMC oc get projects velero vmware system tmc

However, the “Data Protection is being enabled” message in the TMC UI will continue to show indefinitely without user intervention. If you list the pods in the Velero namespace, you will see they error.

This is because, out of the box, OpenShift enforces stricter security contexts for containers than a vanilla Kubernetes environment.

TMC oc get pods restic error crashloopbackoff

The steps to resolve this are the same as for a native install of the open-source Velero project on your cluster.

  • First, we need to add the velero service account to the privileged SCC.
oc adm policy add-scc-to-user privileged -z velero -n velero

TMC oc adm policy add scc to user privileged velero

  • Secondly, we need to patch the DaemonSet to allow the Restic containers to run in privileged mode.
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'

After this, if we run the command to get all pods under the Velero namespace again, we’ll see that they are replaced with the new configuration and running.
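
That check is simply the standard pod listing against the namespace:

oc get pods --namespace velero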

TMC oc get pods restic running

Going back to our TMC Console, we’ll see the Data Protection feature is now enabled.

TMC data protection enabled

Regards