Tag Archives: openshift

OpenShift

Red Hat OpenShift on VMware vSphere – How to Scale and Edit your cluster deployments

Working with Red Hat OpenShift on vSphere, I’m really starting to understand the main infrastructure components and how everything fits together.

Next up was understanding how to control the cluster size after initial deployment. So, with Red Hat OpenShift, there are some basic concepts we need to understand first, before we jump into the technical how-to’s below in this blog.

In this blog I will cover the following;

- Understanding the concepts behind controlling Machines in OpenShift
- Editing your MachineSet to control your Virtual Machine Resources
- Editing your MachineSet to scale your cluster manually
- Deleting a node
- Configuring ClusterAutoscaler to automatically scale your environment

Machine API

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.

The Machine API performs all node host provisioning management actions after the initial cluster installation, providing dynamic provisioning on top of your VMware vSphere platform (and other public/private cloud platforms).

The two primary resources are:

Machines
An object that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
MachineSets
Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute need.

These custom resources add capabilities to your OpenShift cluster:

MachineAutoscaler
This resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator; an example sketch of each follows these descriptions.
ClusterAutoscaler
This resource is based on the upstream ClusterAutoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the MachineSet API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory and GPUs. You can configure priorities so that new nodes are not brought online for less important pods, and you can set a scaling policy so that, for example, the cluster can scale up nodes but not scale the node count back down.
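To make these two objects more concrete, below is a minimal sketch of each, based on the OpenShift documentation rather than this post; the scaling limits are arbitrary illustration values and the MachineSet name is a placeholder you would replace with one of your own.

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 10      # illustration values - size these for your own cluster
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
    unneededTime: 5m

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler           # example name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <your-worker-machineset>  # placeholder - the MachineSet this autoscaler controls

Apply each with oc apply -f; once both exist, the MachineAutoscaler keeps the referenced MachineSet between one and five replicas, within the cluster-wide limits.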

MachineHealthCheck

This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine. You can read more here about this technology preview feature in OCP 4.6.
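For orientation only (this sketch is not from the post), a MachineHealthCheck object follows the pattern below, per the OpenShift 4.6 documentation; the MachineSet label value is a placeholder.

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-healthcheck         # example name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <your-worker-machineset>  # placeholder
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: "300s"
    - type: Ready
      status: "Unknown"
      timeout: "300s"
  maxUnhealthy: "40%"
  nodeStartupTimeout: "10m"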

Editing your MachineSet to control your Virtual Machine Resources

To view the current MachineSet objects available, run the following command. Continue reading Red Hat OpenShift on VMware vSphere – How to Scale and Edit your cluster deployments
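The step-by-step detail is in the full post; as a reference, the commands documented by OpenShift for listing and manually scaling MachineSets look like this (the MachineSet name is a placeholder):

oc get machinesets -n openshift-machine-api

# scale the chosen MachineSet to three replicas (or edit .spec.replicas with 'oc edit machineset')
oc scale --replicas=3 machineset <your-worker-machineset> -n openshift-machine-api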

OpenShift

Using the vSphere CSI Driver with OpenShift 4.x and VSAN File Services

You may have seen my blog post “How to Install and configure vSphere CSI Driver on OpenShift 4.x“.

Here I updated the vSphere CSI driver to work with the additional security constraints that are baked into OpenShift 4.x.

Since then, one of the things that has been on my list to test is file volumes backed by vSAN File shares. This feature is available in vSphere 7.0.

Well, I’m glad to report it does in fact work. By using my CSI driver (see the above blog or my GitHub), you can simply deploy and consume VSAN File services, as per the documentation here.

I’ve updated my examples in my github repository to get this working.

OK just tell me what to do…

First and foremost, you need to add additional configuration to the csi conf file (csi-vsphere-for-ocp.conf).

If you do not, the defaults will be assumed, which allow full read-write access from any IP to the file shares created.

[Global]

# run the following on your OCP cluster to get the ID 
# oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
cluster-id = c6d41ba1-3b67-4ae4-ab1e-3cd2e730e1f2

[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = false

[VirtualCenter "10.198.17.253"]
insecure-flag = "true"
user = "administrator@vsphere.local"
password = "Admin!23"
port = "443"
datacenters = "vSAN-DC"
targetvSANFileShareDatastoreURLs = "ds:///vmfs/volumes/vsan:52c229eaf3afcda6-7c4116754aded2de/"

Next, create a storage class which is configured to consume VSAN File services.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: file-services-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
  csi.storage.k8s.io/fstype: "nfs4" # Optional Parameter

Then create a PVC to prove it works. Continue reading Using the vSphere CSI Driver with OpenShift 4.x and VSAN File Services
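The worked example is in the full post; as a rough sketch (not taken from it), a PVC consuming the file-services-sc class above would look something like this, requesting ReadWriteMany access since file volumes can be mounted by multiple nodes. The PVC name is just an example.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-vsan-file-pvc   # example name
spec:
  accessModes:
    - ReadWriteMany             # file shares support multi-node read-write access
  resources:
    requests:
      storage: 5Gi
  storageClassName: file-services-sc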

OpenShift

How to Install and configure vSphere CSI Driver on OpenShift 4.x

Note: This blog post was updated in February 2021 to use the new driver manifests from the Official VMware CSI Driver repository, which now provides support for OpenShift.

Introduction

In this post I am going to install the vSphere CSI Driver version 2.1.0 with OpenShift 4.x. In my demo environment I’m connecting to a VMware Cloud on AWS SDDC and vCenter, however the steps are the same for an on-prem deployment.

We will be using the vSphere CSI Driver which now supports OpenShift.

- Pre-Reqs
  - vCenter Server Role
  - Download the deployment files
  - Create the vSphere CSI secret in OpenShift
  - Create Roles, ServiceAccount and ClusterRoleBinding for vSphere CSI Driver
- Installation
  - Install vSphere CSI driver
  - Verify Deployment
- Create a persistent volume claim
- Using Labels
- Troubleshooting

In your environment, cluster VMs will need “disk.enableUUID” set to TRUE and VM hardware version 15 or higher.

Pre-Reqs
vCenter Server Role

In my environment I will use the default administrator account, however in production environments I recommend you follow a strict RBAC procedure and configure the necessary roles and use a dedicated account for the CSI driver to connect to your vCenter.

To make life easier I have created a PowerCLI script to create the necessary roles in vCenter based on the vSphere CSI documentation;

Download the deployment files

Run the following;

git clone https://github.com/saintdle/vSphere-CSI-Driver-2.0-OpenShift-4.git

Create the vSphere CSI Secret + CPI ConfigMap in OpenShift

Edit the two files “csi-vsphere.conf” + “vsphere.conf” with your vCenter infrastructure details. These two files may contain the same information, but if, for example, you are using VSAN File Services, you may include further configuration in your CSI conf file.

[Global]
 
# run the following on your OCP cluster to get the ID
# oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
#Your OCP cluster name provided below can just be a human readable name but needs to be unique when running different OCP clusters on the same vSphere environment.
cluster-id = "OCP_CLUSTER_ID"

[VirtualCenter "VC_FQDN"]
insecure-flag = "true"
user = "USER"
password = "PASSWORD"
port = "443"
datacenters = "VC_DATACENTER"

Create the CSI secret + CPI configmap;

oc create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system

oc create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system

To validate:
oc get secret vsphere-config-secret --namespace=kube-system
oc get configmap cloud-config --namespace=kube-system

This configuration is for block volumes. Configuring access to VSAN File volumes is also supported, and you can see an example of that configuration here;

Remove the two local .conf files from your machine once the secret is created, as they contain your vCenter password in clear text.

Installation
Install the vSphere CPI

Taint all OpenShift Nodes.

kubectl taint nodes --all 'node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule'

Install the vSphere CPI (RBAC, Bindings, DaemonSet)

oc apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml

oc apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml

oc apply -f https://github.com/kubernetes/cloud-provider-vsphere/raw/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml

You can verify the installation by viewing the ProviderID for the nodes, which must reference “vsphere”.

oc describe nodes | grep "ProviderID"

Install vSphere CSI driver

The driver is made up of the following components:

  • The CSI Controller runs as a Kubernetes deployment, with a replica count of 1.
  • For version v2.1.0, the vsphere-csi-controller Pod consists of 6 containers:
    • CSI Controller, External Provisioner, External Attacher, External Resizer, Liveness Probe and vSphere Syncer.
Note: This example shows the newer driver manifests for vSphere 7.0 U1. 
Use the correct vSphere version manifests as per this link.

Create the CSI artifacts.

oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml

oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml

oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml
Verify the deployment

You can verify the deployment with the two commands below:

oc get deployments --namespace=kube-system

oc get CSINode
Creating a Storage Class that uses the CSI-Driver

Create a storage class to test the deployment. As I am using VMC as my test environment, I must use some additional optional parameters to ensure that I use the correct VSAN datastore (WorkloadDatastore). You can visit the references below for more information.

In the VMC vCenter UI, you can get this by going to the Datastore summary page.

To get the datastore URL I need to reference, I will use PowerCLI:

get-datastore work* | Select -ExpandProperty ExtensionData | select -ExpandProperty Info

I’m going to create my StorageClass on the fly, but you can find my example YAMLs here;

cat << EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-sc-vmc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: "vSAN Default Storage Policy"
  datastoreURL: "ds:///vmfs/volumes/vsan:3672d400f5fa4515-8a8cb78f6b972f74/"
EOF
Create a Persistent Volume Claim

Finally, we are going to create a PVC. You can find my example PVC files at the same link above.

cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-openshift-vmc-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-sc-vmc
EOF

You can see the PVC created under my cluster > Monitor Tab > Cloud Native Storage in vCenter.

Using Labels

Thanks to one of my colleagues (Jason Monger), who asked me if we could use labels with this integration. And the answer is yes you can.

When creating your PVC, include your labels under metadata, such as in the example below. These will be pulled into your vCenter UI, making it easier to associate your volumes.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-pvc-test
  annotations:
    volume.beta.kubernetes.io/storage-class: csi-sc-vmc
  labels:
    appname: veducate
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
Troubleshooting

For troubleshooting, you need to be aware of the four main containers that run in the vSphere CSI Controller pod, and you should investigate the logs from these when you run into issues;

  • CSI-Attacher
  • CSI-Provisioner
  • vSphere-CSI-Controller
  • vSphere-Syncer
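If you need to pull those logs, a hedged example using the default names from the v2.1.0 manifests (adjust the namespace and container names to match your install) would be:

oc logs deployment/vsphere-csi-controller -n kube-system -c vsphere-csi-controller

# or follow the provisioner container while a PVC is being created
oc logs deployment/vsphere-csi-controller -n kube-system -c csi-provisioner --follow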

Below I have uploaded some of the logs from a successful setup and creation of a persistent volume.

Resources

Regards

OpenShift

How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

Install OpenShift 4.x on vSphere 6.x/7.x

The following procedure is intended to create VMs from an OVA template, booting with static IPs, when the DHCP server cannot reserve the IP addresses.

The Problem

OCP requires that all DNS configurations be in place. VMware requires that DHCP assigns the correct IPs to the VMs. Since many real installations require coordination between different teams in an organization, we often don’t have control of the DNS, DHCP or load balancer configurations.

The CoreOS documentation explains how to create configurations using ignition files. I created a Python script to apply the network configuration using the ignition files created by the openshift-install program.

Reference Architecture

For this guide, we are going to deploy 3 master nodes (control-plane) and 2 worker nodes (compute). This guide uses RHEL CoreOS 4.3 as the virtual machine image, deploying Red Hat OCP 4.3, as per the N-1 support policy from Red Hat.

We will use a centralised Linux server (Ubuntu) that will perform the following functions;

  • Load Balancer – HAProxy
  • Web Server – Apache2
  • Terraform automation host – version 0.11.14
    • The deployment will be semi-automated using Terraform, so that we can easily build configuration files used by the CoreOS VM’s that have Static IP settings.
    • Using a later version of Terraform will cause failures.
  • Client Tools for OpenShift deployment
    • OC
    • Kubectl
    • Openshift-install

DNS will be provided by a Windows Server.

The installation will use a Bootstrap server to bring the cluster online, which will be removed at the end of the build process.

Deployment Steps

In this guide we will deploy our environment in the following order;

  • Configure DNS
  • Import Red Hat Core OS image into vCenter
  • Deploy Ubuntu Host
    • Configure Apache
    • Configure HAProxy
    • Install Client-Tools
    • Install Terraform
  • Build OpenShift Cluster configuration
  • Configuring the Terraform deployment
  • Running the Terraform deployment
DNS

OpenShift uses a “clusterName.BaseDomain” format.

For example, I want to call my OpenShift cluster “demo” and my DNS domain is simon.local, so the full format used by OpenShift is “demo.simon.local”. Cluster records such as the API endpoint and the application routes then resolve under this domain, for example api.demo.simon.local and *.apps.demo.simon.local.

Below is a table plan of the IP addresses you will use to build the environment.

The last three addresses are cluster level resources that are available on each control-plane node, accessible via the load balancer.

To configure the DNS records in Windows, you can use the Script and CSV file here

In the below screenshot, the script has created the “demo” domain folder and entered my records. It is important that you have PTR records set up for everything apart from the “etcd-X” records.

Import Red Hat CoreOS Image into vCenter

Continue reading How to deploy OpenShift 4.3 on VMware vSphere with Static IP addresses using Terraform

vRealize Operations – Monitoring OpenShift Container Platform environments

The latest release of vRealize Operations (the “manager” part of the product name has now been dropped) brings the ability to manage your Kubernetes environments from the vSphere infrastructure up.

The Kubernetes integration in vRealize Operations 8.1;

  • vSphere with Kubernetes integration:
    • Ability to discover vSphere with Kubernetes objects as part of the vCenter Server inventory.
    • New summary pages for Supervisor Cluster, Namespaces, Tanzu Kubernetes cluster, and vSphere Pods.
    • Out-of-the-box dashboards, alerts, reports, and views for vSphere with Kubernetes.
  • The VMware Management Packs that are new and those that are updated for vRealize Operations Manager 8.1 are:
    • VMware vRealize Operations Management Pack for Container Monitoring 1.4.3

Where does OpenShift Container Platform fit in?

Although the above highlighted release notes point towards vSphere with Kubernetes (aka Project Pacific), the Container Monitoring management pack has been available for a while and has received a number of updates.

This management pack can be used with any of your Kubernetes setups, bringing the following components into your infrastructure monitoring view;

  • Kubernetes;
    • Clusters
    • Nodes
    • Pods
    • Containers
    • Services

So this means you can add in your OCP environment for monitoring.

Configuring vRealize Operations to monitor your OpenShift Clusters

Grab the latest Container monitoring management pack to be installed in your vRealize Operations environment.

  1. Log in to the vRealize Operations Manager with administrator privileges.
  2. In the menu, select Administration and in the left pane select Solutions > Repository.
  3. On the Repository tab, click Add/Upgrade.
  4. Browse to locate the temporary folder and select the PAK file.
  5. Click Upload. The upload might take several minutes.
  6. Read and accept the EULA, and click Next.
  7. When the vRealize Operations Management Pack for Container Monitoring is installed, click Finish.

To link any Kubernetes cluster to your environment for monitoring, you need to install the cAdvisor DaemonSet. For OCP I used the cAdvisor YAML definition on HostPort; secondly, you need to create some credentials to authenticate to your cluster from your connection in vROps. Continue reading vRealize Operations – Monitoring OpenShift Container Platform environments
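The excerpt ends here. For orientation only, a generic cAdvisor DaemonSet exposed on a hostPort looks roughly like the sketch below. This is not the YAML definition shipped with the management pack: the image tag, namespace and hostPort value are assumptions, and on OpenShift the service account will also need access to a privileged SCC for the hostPath mounts.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: cadvisor        # assumption - create or choose a project first
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      containers:
        - name: cadvisor
          image: gcr.io/cadvisor/cadvisor:v0.36.0   # assumption - pin the image your MP version documents
          ports:
            - name: http
              containerPort: 8080
              hostPort: 31194                       # the node port vROps would be pointed at (assumption)
          volumeMounts:
            - name: rootfs
              mountPath: /rootfs
              readOnly: true
            - name: var-run
              mountPath: /var/run
              readOnly: true
            - name: sys
              mountPath: /sys
              readOnly: true
            - name: docker
              mountPath: /var/lib/docker
              readOnly: true
      volumes:
        - name: rootfs
          hostPath:
            path: /
        - name: var-run
          hostPath:
            path: /var/run
        - name: sys
          hostPath:
            path: /sys
        - name: docker
          hostPath:
            path: /var/lib/docker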