After moving my life over to a new MacBook and installing the latest AWS CLI tools, including the “aws-iam-authenticator” tool, I couldn’t run commands against my EKS clusters. I kept hitting the following issue:
> kubectl get pods
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
The Cause
AWS updated the aws-iam-authenticator component in version 0.5.4 to require v1beta1 in your kubeconfig file for the cluster context. More than likely you will be using v1alpha1, which generates this error.
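To confirm which release of the authenticator you have ended up with, the binary provides a version subcommand (a quick check; the exact output format varies between releases):
# Print the installed aws-iam-authenticator version
aws-iam-authenticator version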
Update your kubeconfig file as necessary, replacing “v1alpha1” with “v1beta1” for any EKS cluster contexts.
vi ~/.kube/config
# Alternatively, you could run something like the below (macOS/BSD sed syntax) to automate the change. This also creates a "config.bak" backup of the original file before the changes
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
Below you can see I used the “sed” command, checked my file using “vi”, then ran the kubectl command successfully.
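If you want to verify everything from the command line before moving on, something along these lines will do it (a sketch, assuming the default kubeconfig location):
# Confirm no v1alpha1 references remain after the edit
grep "v1alpha1" ~/.kube/config || echo "all contexts updated"
# Retry the previously failing command
kubectl get pods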
I had to remove a demo EKS Cluster where I had screwed up an install of a Service Mesh. Unfortunately, it was left in a rather terrible state to clean up, hence the need to just delete it.
When I tried the usual eksctl delete command, including with the force argument, I was hitting errors such as:
2021-12-21 23:52:22 [!] pod eviction error ("error evicting pod: istio-system/istiod-76f699dc48-tgc6m: Cannot evict pod as it would violate the pod's disruption budget.") on node ip-192-168-27-182.us-east-2.compute.internal
With a final error output of:
Error: Unauthorized
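For context, the delete I was attempting was the standard cluster delete, with and without the force argument, roughly along these lines (a sketch; exact arguments may vary by eksctl version):
eksctl delete cluster --name veducate-eks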
The Cause
Well, the error message does call out the cause: moving the existing pods to other nodes is failing because of the configured settings. Essentially, EKS will try to drain all the nodes and shut everything down nicely when it deletes the cluster; it doesn’t just kill everything and wipe it. This is because inside Kubernetes there are several finalizers that will call out actions to interact with AWS components (thanks to the integrations) and cleanly tidy things up (in theory).
To get around this, I first tried the following command, thinking that if I deleted the nodegroup without waiting for a drain, it would bypass the issue:
eksctl delete nodegroup standard --cluster veducate-eks --drain=false --disable-eviction
However, this didn’t allow me to delete the cluster; I still got the same error messages.
The Fix
So back to the error message, and then I realised it was staring me in the face!
Cannot evict pod as it would violate the pod's disruption budget
What is a Pod Disruption Budget? It’s essentially a way to ensure availability of your pods by stopping someone from accidentally killing too many of them at once.
A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.
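In my case it was Istio’s control-plane PDB that was blocking the node drain, so the way forward is to remove the offending PodDisruptionBudgets and then retry the delete. A rough sketch of that approach (the PDB names and namespaces depend on what is installed in your cluster; istiod is the name Istio uses by default):
# Find the PodDisruptionBudgets that are blocking eviction
kubectl get pdb -A
# Delete the offending PDB(s), for example Istio's control plane budget
kubectl delete pdb istiod -n istio-system
# Retry the cluster deletion
eksctl delete cluster --name veducate-eks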
This walk-through will detail the technical configurations for using vRA Code Stream to deploy AWS EKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.
Requirement
Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native AWS EKS clusters, although it can manage most Kubernetes distributions.
Therefore, when I was asked where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing additional functions to make these EKS clusters usable.
High Level Steps
Create a Code Stream Pipeline
Create an AWS EKS Cluster (see the sketch after this list)
Add the EKS cluster as an endpoint in both Code Stream and Cloud Assembly
Register the EKS cluster in Tanzu Mission Control
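As a rough idea of what the cluster-creation step could look like, assuming the pipeline drives eksctl (as used elsewhere in this post), it boils down to a call along these lines (a sketch only; the cluster name, region, nodegroup name and node count are illustrative placeholders that the pipeline would pass in as variables):
# Create an EKS cluster with a managed nodegroup (values are illustrative)
eksctl create cluster --name veducate-eks --region us-east-2 --nodegroup-name standard --nodes 2 --managed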
Pre-Requisites
vRA Cloud access
The pipeline can be changed easily for use with vRA on-prem