Category Archives: Kubernetes

How to migrate from Red Hat OpenShiftSDN/OVN-Kubernetes to Cilium

Recently, I’ve been seeing more and more queries about migrating to Cilium within existing Red Hat OpenShift clusters, due to Cilium’s advanced networking capabilities, robust security features, and enhanced out-of-the-box observability. This increase in interest has also been boosted by the fact that Cilium became the first CNI project to graduate within the CNCF.

In this blog post, we’ll cover the step-by-step process of migrating from the traditional OpenShiftSDN (default CNI pre-4.12) or OVN-Kubernetes (default CNI from 4.12) to Cilium, exploring the advantages and considerations along the way.

If you first need to understand more about the default CNI options in Red Hat OpenShift, then I highly recommend this blog post as pre-reading before going through this walkthrough.

Cilium Overview

For those of you who have not heard of Cilium, or perhaps know just the name and the buzz around it: in short, Cilium is a cloud-native networking solution that provides security, networking and observability at a software level.

The reason why the buzz is so huge is that Cilium is implemented using eBPF, a new way of interacting with and programming the kernel layer of the OS. This implementation opens up a whole new world of options.

I’ll leave you with these two short videos from Thomas Graf, co-founder of Isovalent, the creators of Cilium.

Does Red Hat support this migration?

Cilium has achieved the Red Hat OpenShift Container Network Interface (CNI) certification by completing the operator certification and passing end-to-end testing. Red Hat will support Cilium installed and running in a Red Hat OpenShift cluster, and collaborate as needed with the ecosystem partner to troubleshoot any issues, as per their third-party software support statements. This would be a great reason to look at Isovalent Enterprise for Cilium, rather than using Cilium OSS, to get support from both vendors.

However, when it comes to performing a CNI migration for an active existing OpenShift cluster, Red Hat provides no guidance, unless it’s migrating from OpenShiftSDN to OVN-Kubernetes.

This means migrating to a third-party CNI in an existing, running Red Hat OpenShift cluster is a grey area.

I’d recommend speaking to your Red Hat account team before performing any migration like this in your production environments. I have known large customers to take on this work, and the supportability burden that comes with it, themselves and be successful.

Follow along with this video!

If you prefer watching a video or seeing things live and following along, like I do at times, then I’ve got you covered with the below video, which walks through the content of this blog post.

Pre-requisites and OpenShift Cluster configuration

As per the above, understand this process in detail, and if you follow it, you do so at your own risk.

For this walkthrough, I’ve deployed an OpenShift 4.13 cluster with OVN-Kubernetes, with a sample application (see below). You can see these posts I’ve written for deployments of OpenShift, or follow the official documentation.

Here is a copy of my install-config.yaml file. It was generated using the openshift-install create install-config wizard. Then I ran the openshift-install create cluster command.
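
As a rough reference, here’s a minimal sketch of what an install-config.yaml for a vSphere-based cluster running OVN-Kubernetes can look like; all values below are illustrative, and the platform block needs your own vCenter details:

apiVersion: v1
baseDomain: example.com              # illustrative
metadata:
  name: ocp413                       # illustrative cluster name
networking:
  networkType: OVNKubernetes         # the CNI we will be migrating away from
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.200.0/24           # the subnet your nodes run on
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere: {}                        # your vCenter connection details go here
pullSecret: '<redacted>'
sshKey: '<redacted>'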

Grafana – unable to login “User already exists”

The Issue

When trying to log into the Grafana Web UI using an OIDC provider (in my case, Dex), the login would fail after some time with the error “User already exists”. This happened for any user given access via the OIDC provider.

The Cause

This looks to happen due to a CVE fix implemented in Grafana, which by default disabled looking up existing user accounts by email during OAuth logins.

The Fix

To resolve this issue on Grafana 10.0.x and 9.5.6, either the environment variable GF_AUTH_OAUTH_ALLOW_INSECURE_EMAIL_LOOKUP can be set, or the config key oauth_allow_insecure_email_lookup can be set under the [auth] section:

[auth]
oauth_allow_insecure_email_lookup=true
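
If you run Grafana as a container, the environment variable form may be easier to manage; below is a minimal sketch assuming a Docker-based deployment (image tag illustrative):

docker run -d -p 3000:3000 \
  -e GF_AUTH_OAUTH_ALLOW_INSECURE_EMAIL_LOOKUP=true \
  grafana/grafana:10.0.0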

Hope this helps anyone stuck out there!

Regards

Dean Lewis

Kubernetes Metric Server – cannot validate certificate because it doesn’t contain any IP SANs

The Issue

Whilst trying to install the Metrics Server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

so I could use kubectl top node to view metrics on node resource usage, I found the pods were not loading, and upon inspection found the following:

> kubectl logs -n kube-system metrics-server-6f6cdbf67d-v6sbf 

I0717 12:19:32.132722 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0717 12:19:39.159422 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.49.2:10250/metrics/resource\": x509: cannot validate certificate for 192.168.49.2 because it doesn't contain any IP SANs" node="minikube"

The Cause

The issue here was due to the installation of Cert-Manager and setting up some TLS configurations within the CNI with self-signed certificates; the Metrics Server wasn’t able to validate the self-signed certificate presented by the kubelet on each node, as it contains no IP SANs (see the scrape error against port 10250 above).

The Fix

As this is communication within the cluster, I could simply fix this by telling the Metrics Server container to trust the insecure certificates presented by the kubelet, using the below kubectl patch command to add the --kubelet-insecure-tls argument:

kubectl patch deployment metrics-server -n kube-system --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
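
Once the patch is applied, the deployment rolls out a new pod; a quick sketch of how to verify the fix:

# Wait for the patched Metrics Server pod to become ready
kubectl rollout status deployment/metrics-server -n kube-system

# Node metrics should now be returned
kubectl top node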

Regards

Dean Lewis

Quick Tip: Supercharge Kubernetes Resource Retrieval with ‘kubectl get -f’

Did you know you can use the -f argument with kubectl get? Yep, me neither.

It’s pretty handy actually, as it will provide the status for all your Kubernetes resources deployed using that file, or even a file from a hyperlink!

Below is a screenshot example using a file.

[Screenshot: kubectl get -f with a single file]

You can also specify multiple files by adding -f {file} for each file you want to check (this also works when deploying resources too!).

[Screenshot: kubectl get -f with multiple files]

And another example, using a hyperlink as the file location.

[Screenshot: kubectl get -f with a URL as the file location]
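
To summarise the variants shown above as commands (file names and URL are illustrative):

# Status of the resources defined in a single file
kubectl get -f deployment.yaml

# Multiple files: repeat -f for each one
kubectl get -f deployment.yaml -f service.yaml

# A manifest hosted at a URL
kubectl get -f https://example.com/manifests/deployment.yaml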

Hope this little tip helps someone!

Just for fun, here’s the ChatGPT write-up!

The command “kubectl get -f” is used in Kubernetes to retrieve information about Kubernetes resources defined in a file or multiple files. Let’s break down the command and its components:

  • “kubectl”: It is the command-line interface (CLI) tool for interacting with Kubernetes clusters. It allows you to manage and control Kubernetes resources.
  • “get”: It is the action or subcommand used to retrieve information about Kubernetes resources.
  • “-f”: It is a flag that specifies that the input will be provided from a file or multiple files rather than directly on the command line. It is followed by the path to the file(s) containing the Kubernetes resource definitions.

When you use “kubectl get -f <file>”, Kubernetes reads the file(s) provided and retrieves the information about the resources defined within those files. The information can include the names, statuses, and other details of the resources.

For example, if you have a file named “deployment.yaml” that defines a Kubernetes Deployment resource, you can use the command “kubectl get -f deployment.yaml” to retrieve information about that specific Deployment resource.

You can also provide multiple files by separating them with commas or specifying a directory containing multiple resource files. For instance, “kubectl get -f file1.yaml,file2.yaml” or “kubectl get -f /path/to/files” (where /path/to/files is the directory path).

By using this command, you can quickly retrieve information about Kubernetes resources defined in files without needing to manually create or modify resources using the command line.

Regards

Dean Lewis

Red Hat OpenShift – Sorry, your reply was invalid: IP expected to be in one of the machine networks

The Issue

When running the command:

openshift-install create cluster

and you provide an API IP address which is not in the CIDR range 10.0.0.0/16, you receive the below error:

INFO Defaulting to only available network: VM Network 
X Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16
? The VIP to be used for the OpenShift API.

The Cause

This is a known bug in the openshift-install tool (GitHub PR, Red Hat Article), whereby the software installer is hardcoded to only accept addresses in the 10.0.0.0/16 range.

The Fix

The current workaround is to run openshift-install create install-config, provide IP addresses in the 10.0.0.0/16 range, and then alter the install-config.yaml file manually before running openshift-install create cluster, which will read the existing install-config.yaml file and create the cluster (rather than presenting you with another wizard).
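
Putting that together, the flow looks something like the below sketch (the --dir flag and directory name are illustrative; by default openshift-install uses the current directory):

# Run the wizard, answering with placeholder VIPs in the 10.0.0.0/16 range
openshift-install create install-config --dir=ocp-cluster

# Manually edit ocp-cluster/install-config.yaml as described below, then:
openshift-install create cluster --dir=ocp-cluster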

In the wizard (below screenshot), I’ve provided IPs in the range from above, and set my base domain and cluster name as well. The final piece is to paste in my Pull Secret from the Red Hat Cloud console.

[Screenshot: openshift-install create install-config wizard]

Now if I run ls in my current directory, I’ll see the install-config.yaml file. It is recommended to save a copy of this file before you run the create cluster command, as the installer removes the file once it has consumed it; also note that it contains plain-text passwords, so store it securely.

I’ve highlighted in the below image the lines we need to alter.

[Screenshot: install-config.yaml file with the lines to alter highlighted]

For the section:

machineNetwork:
- cidr: 10.0.0.0/16

This needs to be changed to the network subnet the nodes will run on (for example, 192.168.200.0/24). And for the platform section, you need to map the right IP addresses from your DNS records:

platform:
  vsphere:
    apiVIP: 192.168.200.192     # your api.{cluster_name}.{base_domain} DNS record
    cluster: Cluster-1
    folder: /vEducate-DC/vm/OpenShift/
    datacenter: vEducate-DC
    defaultDatastore: Datastore01
    ingressVIP: 192.168.200.193     # your *.apps.{cluster_name}.{base_domain} DNS record

Now that we have a correctly configured install-config.yaml file, we can proceed with the installation of the cluster by running the openshift-install create cluster command; the process is hands-off from this point forward. The installer will output logging to the console for you, which you can adjust using the --log-level= argument at the end of the command.
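
For example, to run the installation with more verbose output (directory name illustrative):

openshift-install create cluster --dir=ocp-cluster --log-level=debug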

Regards

Dean Lewis