Category Archives: VMware


How to Install and configure vSphere CSI Driver on OpenShift 4.x

Introduction

In this post I am going to install the vSphere CSI Driver version 2.0 with OpenShift 4.x. In my demo environment I’m connecting to a VMware Cloud on AWS SDDC and its vCenter; however, the steps are the same for an on-premises deployment.

I have updated the configuration files available from Red Hat for installing the CSI Driver in OpenShift, to make them compatible with the latest CSI Driver. You can find these in my GitHub repo;

- Pre-Reqs
- - vCenter Server Role
- - Download the deployment files
- - Create the vSphere CSI secret in OpenShift
- - Create Roles, ServiceAccount and ClusterRoleBinding for vSphere CSI Driver
- Installation
- - Install vSphere CSI driver
- - Verify Deployment
- Create a persistent volume claim
- Using Labels
- Troubleshooting

In your environment, the cluster VMs will need the advanced setting “disk.enableUUID” set to TRUE and VM hardware version 15 or higher.
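
If it helps, these prerequisites can also be checked and applied from the command line. Below is a minimal sketch using the govc CLI; this isn’t from the original post, and the vCenter details, inventory paths and node names are placeholders for your own environment.

# Connection details for govc (placeholders)
export GOVC_URL='vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='CHANGE_ME'

# Apply the prerequisites to each cluster node VM (example names and paths)
for vm in ocp-master-0 ocp-master-1 ocp-master-2 ocp-worker-0 ocp-worker-1; do
  # Enable the disk UUID advanced setting required by the vSphere CSI driver
  govc vm.change -vm "/Datacenter/vm/${vm}" -e "disk.enableUUID=TRUE"
  # Upgrade the virtual hardware to version 15 (the VM must be powered off)
  govc vm.upgrade -version=15 -vm "/Datacenter/vm/${vm}"
done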

Pre-Reqs
vCenter Server Role

In my environment I will use the default administrator account; however, in production environments I recommend you follow a strict RBAC procedure, configure the necessary roles, and use a dedicated account for the CSI driver to connect to your vCenter.

To make life easier, I have created a PowerCLI script that creates the necessary roles in vCenter based on the vSphere CSI documentation;

Download the deployment files

Run the following;

git clone https://github.com/saintdle/vSphere-CSI-Driver-2.0-OpenShift-4.git

Create the vSphere CSI Secret in OpenShift

Edit the file “csi-vsphere-for-OCP.conf” with your vCenter infrastructure details;

[Global]
 
# run the following on your OCP cluster to get the ID
# oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
cluster-id = "OCP_CLUSTER_ID"

[VirtualCenter "VC_FQDN"]
insecure-flag = "true"
user = "USER"
password = "PASSWORD"
port = "443"
datacenters = "VC_DATACENTER"

Create the secret;

oc create secret generic vsphere-config-secret --from-file=csi-vsphere-for-OCP.conf --namespace=kube-system

oc get secret vsphere-config-secret --namespace=kube-system

This configuration is for block volumes; configuring access to vSAN file volumes is also supported, and you can see an example of that configuration here;

Remove your “csi-vsphere-for-OCP.conf” file once the secret is created, as it contains your vCenter password in clear text.

Create Roles, ServiceAccount and ClusterRoleBinding for vSphere CSI Driver

Continue reading How to Install and configure vSphere CSI Driver on OpenShift 4.x


How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

Install OpenShift 4.x on vSphere 6.x/7.x

The following procedure is intended to create VMs from an OVA template that boot with static IPs when the DHCP server cannot reserve the IP addresses.

The Problem

OCP requires that all DNS configurations be in place. VMware requires that DHCP assigns the correct IPs to the VMs. Since many real installations require coordination between different teams in an organization, we often don’t have control of the DNS, DHCP or load balancer configuration.

The CoreOS documentation explains how to create configurations using Ignition files. I created a Python script to inject the network configuration into the Ignition files created by the openshift-install program.
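
To give a feel for the general idea behind that injection, here is a minimal bash/jq sketch: embed a static-IP ifcfg file into the Ignition config as a base64 data URL. This is illustrative only; the interface name, addresses and file names are placeholders, and the actual post uses a Python script for this step.

# Example ifcfg file with static IP settings for one node (values are placeholders)
cat > ifcfg-ens192 <<'EOF'
TYPE=Ethernet
NAME=ens192
DEVICE=ens192
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.20
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.5
EOF

# Base64-encode it and append it to the Ignition file produced by openshift-install,
# writing out a per-node Ignition file for this machine
ENCODED=$(base64 -w0 ifcfg-ens192)
jq --arg data "data:text/plain;charset=utf-8;base64,${ENCODED}" \
   '.storage.files += [{
      "filesystem": "root",
      "path": "/etc/sysconfig/network-scripts/ifcfg-ens192",
      "mode": 420,
      "contents": { "source": $data }
    }]' master.ign > master-0.ign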

Reference Architecture

For this guide, we are going to deploy 3 master nodes (control plane) and 2 worker nodes (compute). This guide uses RHEL CoreOS 4.3 as the virtual machine image, deploying Red Hat OCP 4.3, in line with Red Hat’s N-1 support.

We will use a centralised Linux server (Ubuntu) that will perform the following functions;

  • Load Balancer – HAProxy
  • Web Server – Apache2
  • Terraform automation host – version 0.11.14
    • The deployment will be semi-automated using Terraform, so that we can easily build the configuration files used by the CoreOS VMs that have static IP settings.
    • Using a later version of Terraform will cause failures.
  • Client Tools for OpenShift deployment (example install commands follow this list)
    • oc
    • kubectl
    • openshift-install
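
For reference, the client tools and the pinned Terraform version can be pulled down with something like the following. This isn’t part of the original post; the 4.3.0 mirror paths are examples, so check the Red Hat mirror for the exact 4.3.z release you want, and run as root or with sudo.

# Download the OpenShift 4.3 client tools (oc, kubectl) and installer (example version)
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.0/openshift-client-linux.tar.gz
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.0/openshift-install-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
tar -xzf openshift-install-linux.tar.gz -C /usr/local/bin openshift-install

# Terraform 0.11.14 specifically - later versions will cause failures with this configuration
curl -LO https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
unzip terraform_0.11.14_linux_amd64.zip -d /usr/local/bin
terraform version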

DNS will be provided by a Windows Server.

The installation will use a Bootstrap server to bring the cluster online, which will be removed at the end of the build process.

Deployment Steps

In this guide we will deploy our environment in the following order;

  • Configure DNS
  • Import the Red Hat CoreOS image into vCenter
  • Deploy Ubuntu Host
    • Configure Apache
    • Configure HAProxy
    • Install Client-Tools
    • Install Terraform
  • Build the OpenShift cluster configuration
  • Configure the Terraform deployment
  • Run the Terraform deployment

DNS

OpenShift uses a “clusterName.baseDomain” format.

For example, I want to call my OpenShift cluster “demo” and my DNS domain is “simon.local”, so the full name used by OpenShift is “demo.simon.local”.

Below is a table plan of the IP addresses you will use to build the environment.

The last three addresses are cluster-level resources that are available on each control-plane node and accessible via the load balancer.
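
Those load-balanced endpoints are what the HAProxy instance on the Ubuntu host will front. The original post’s IP table isn’t reproduced here, so the sketch below uses placeholder addresses and hostnames; adjust the backend membership to match your own plan.

# Append example frontends/backends to haproxy.cfg (placeholder IPs), then restart HAProxy
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api

backend openshift-api
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.9:6443 check
    server master-0  192.168.1.10:6443 check
    server master-1  192.168.1.11:6443 check
    server master-2  192.168.1.12:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config

backend machine-config
    mode tcp
    server bootstrap 192.168.1.9:22623 check
    server master-0  192.168.1.10:22623 check
    server master-1  192.168.1.11:22623 check
    server master-2  192.168.1.12:22623 check

frontend ingress-http
    bind *:80
    mode tcp
    default_backend ingress-http

backend ingress-http
    mode tcp
    server worker-0 192.168.1.20:80 check
    server worker-1 192.168.1.21:80 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https

backend ingress-https
    mode tcp
    server worker-0 192.168.1.20:443 check
    server worker-1 192.168.1.21:443 check
EOF

systemctl restart haproxy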

To configure the DNS records in Windows, you can use the script and CSV file here

In the below screenshot, the script has created the “demo” domain folder and entered my records. It is important that you have PTR records set up for everything apart from the “etcd-X” records.
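
Once the records are in place, it’s worth verifying them from the Ubuntu host before starting the install. A quick sanity check, using the example “demo.simon.local” cluster name (the record names follow the standard OCP 4.3 conventions; the IP shown is a placeholder):

# Forward lookups for the cluster-level records served via the load balancer
dig +short api.demo.simon.local
dig +short api-int.demo.simon.local
dig +short test.apps.demo.simon.local   # any name under *.apps should resolve

# etcd SRV records required by OCP 4.3
dig +short -t srv _etcd-server-ssl._tcp.demo.simon.local

# Reverse lookup example for a node (PTR records matter for everything except etcd-X)
dig +short -x 192.168.1.10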

Import Red Hat CoreOS Image into vCenter

Continue reading How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

VMware Tanzu Mission Control – Getting started with your first cluster

In this blog post we will cover the following topics

- What is Tanzu Mission Control?
- So, this isn't just for VMware environments?
- Getting Started with Tanzu Mission Control
- - TMC Resource Hierarchy
- - Creating a Cluster Group
- - Attaching a cluster to Tanzu Mission Control
- - Viewing your Cluster Objects
- - - Overview
- - - Nodes
- - - Namespaces
- - - Workloads
- Where can I demo/test/trial this myself?

The follow-up blog posts are;

- Tanzu Mission Control 
- - Cluster Inspections
- - - What Inspections are available 
- - - Performing Inspections 
- - - Viewing Inspections
- - Workspaces and Policies
- - - Creating a workspace 
- - - Creating a managed Namespace 
- - - Policy Driven Cluster Management 
- - - Creating Policies

What is Tanzu Mission Control?

Tanzu Mission Control is a cloud offering which gives you a single point of control, monitoring and management for your Kubernetes clusters, regardless of their distribution and location (e.g. Tanzu Kubernetes Grid, OpenShift Container Platform, Azure Kubernetes Service, to name but a few).

Key Capabilities;

  • Manage the Kubernetes cluster lifecycle, from deployment through day-2 operations
  • Attach Clusters for centralized operations and management
  • Centralized policy management
    • Apply access, network and container registry policies consistently across your Kubernetes clusters and namespaces
  • Global visibility for diagnosing and troubleshooting issues with your Kubernetes clusters
  • Inspection runbooks to validate the configuration of your clusters
    • Current offerings are;
      • Conformance; validates the binaries running in your cluster to ensure it is properly configured and working.
      • CIS benchmark; evaluates your cluster against the CIS Benchmark for Kubernetes published by the Center for Internet Security.
      • Lite; a node conformance test that validates your nodes meet Kubernetes requirements.

So, this isn’t just for VMware environments?

Nope, this is a cloud- and Kubernetes-neutral offering. You can attach CNCF-conformant Kubernetes clusters to Tanzu Mission Control no matter where they are running: on vSphere, in any public cloud, or through other Kubernetes vendors.

Getting Started with Tanzu Mission Control

TMC Resource Hierarchy

In the Tanzu Mission Control resource hierarchy, there are three levels at which you can specify policies.

  • Organization
  • Object groups (Cluster groups and Workspaces)
  • Kubernetes objects (Clusters and Namespaces)

You can set policies directly on a given object, but each object can also inherit policies from its parent objects. So it’s pretty much what you’ve been used to in the past with policies and hierarchies.

Creating a Cluster Group

A Cluster Group is a logical object that brings together multiple Kubernetes clusters. You can set user access policies that control who can view, edit and manage cluster group objects and their child objects (clusters).

Cluster groups provide an infrastructure view, and all clusters must be attached to a group.

To create a Cluster Group in the console (a CLI sketch follows these steps);

  • Select Cluster Groups from the navigation
  • Click New Cluster Group
  • Supply a name; the description and labels are optional and can be edited after creation
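
For completeness, Tanzu Mission Control also ships a tmc command-line client that can drive the same workflow. The commands below are not taken from the post, and exact command and flag names vary between tmc CLI releases, so treat this as an assumption and verify with tmc --help; the group, cluster and manifest names are placeholders.

# Create a cluster group (name is a placeholder)
tmc clustergroup create --name demo-cluster-group

# Generate an attach manifest for an existing cluster, then apply it with kubectl
# (the output file name is an assumption - check what your CLI version writes out)
tmc cluster attach --name my-demo-cluster --group demo-cluster-group
kubectl apply -f k8s-attach-manifest.yaml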

Continue reading VMware Tanzu Mission Control – Getting started with your first cluster

VMware Tanzu Mission Control – Workspaces and Policies

In this blog post we will cover the following topics

- Tanzu Mission Control 
- - Workspaces 
- - - Creating a workspace
- - - Creating a managed Namespace
- - - Viewing a managed Namespace
- - Policy Driven Cluster Management
- - - Creating a Image Registry Policy
- - - Creating a Network Policy

The follow-up blog posts are;

- Getting Started with Tanzu Mission Control
- - TMC Resource Hierarchy
- - Creating a Cluster Group
- - Attaching a cluster to Tanzu Mission Control
- - Viewing your Cluster Objects
- Cluster Inspections
- - Cluster Inspections Overview 
- - What Inspections are available 
- - Performing Inspections 
- - Viewing Inspections

Workspaces

Workspaces provide an application view, where you logically group Kubernetes Namespaces together, regardless of the cluster to which they are attached.

This is in contrast to Cluster Groups, which are focused on the infrastructure view.

These Workspaces can be created to align with your projects or applications. From a hierarchy point of view, you would then authorize your users to these Workspaces, so that they can monitor and manage the namespaces related to their function.

Creating a Workspace

Click the Workspace navigation view on the left-hand side, and then New Workspace.

Specify your Workspace name and provide the optional description and labels; these can also be added after creation if needed.

Now you have a Workspace, but it’s no good without any associated Namespaces, so let’s continue.

Creating a managed Namespace

All Namespaces attached to a Workspace will be managed Namespaces under TMC.

You can create a managed Namespace from one of four places;

  • Within the Workspace Navigation view
  • Inside the Workspace Object itself
  • On the Namespace Navigation view
  • On the Cluster Object > Navigation Tab

Continue reading VMware Tanzu Mission Control – Workspaces and Policies

VMware Tanzu Mission Control – Cluster Inspections

In this blog post we will cover the following topics

- Tanzu Mission Control 
- - Cluster Inspections Overview
- - What Inspections are available
- - Performing Inspections
- - Viewing Inspections

The follow-up blog posts are;

- Getting Started with Tanzu Mission Control
- - TMC Resource Hierarchy
- - Creating a Cluster Group
- - Attaching a cluster to Tanzu Mission Control
- - Viewing your Cluster Objects
- Workspaces and Policies
- - Creating a workspace 
- - - Creating a managed Namespace 
- - - Viewing a managed Namespace 
- - Policy Driven Cluster Management 
- - - Creating an Image Registry Policy 
- - - Creating a Network Policy

Cluster Inspections Overview

This, for me, is one of the best features of Tanzu Mission Control, and an area which I expect will be developed further in the future.

Cluster inspections provide a point-in-time report of the condition of the cluster. You might want to run them periodically (to avoid drifting out of conformance) and any time you make significant alterations, such as after you patch or upgrade a cluster.

This capability is achieved by using Sonobuoy, an open-source community-standard tool which provides diagnostics of your Kubernetes environments through conformance testing and additional plugins.
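
If you want a feel for what is happening under the covers, you can run Sonobuoy directly against a cluster yourself. A minimal sketch, assuming you have the sonobuoy CLI installed and a kubeconfig for the target cluster (not taken from the post):

# Run a quick smoke-test pass (use "certified-conformance" mode for the full suite)
sonobuoy run --mode quick --wait

# Check progress and pull the results tarball once it completes
sonobuoy status
results=$(sonobuoy retrieve)
sonobuoy results "$results"

# Clean up the sonobuoy namespace and resources afterwards
sonobuoy delete --wait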

What Inspections are available?

The following cluster inspections are available from the Overview and Inspection tabs of the cluster detail page in the Tanzu Mission Control console.

  • Conformance inspection;

Validates the binaries running on your cluster and ensures that your cluster is properly installed, configured, and working. You can view the generated report from within Tanzu Mission Control to assess and address any issues that arise. For more information, see the Kubernetes Conformance documentation at https://github.com/cncf/k8s-conformance/tree/master/docs.

  • CIS benchmark inspection;

Evaluates your cluster against the CIS Benchmark for Kubernetes published by the Center for Internet Security.

  • Lite inspection;

Is a node conformance test that validates whether nodes meet requirements for Kubernetes. For more information, see Validate node setup in the Kubernetes documentation.

Performing Inspections

There are two ways to perform an inspection: from the Inspections tab when viewing a cluster object (as in the above screenshot).

Or you can do this from the Inspections navigation page, as below.

Continue reading VMware Tanzu Mission Control – Cluster Inspections