

EKS – Kubectl – Unable to connect to the server – Exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

The Issue

After moving my life over to a new MacBook and installing the latest AWS CLI tools, including the “aws-iam-authenticator” tool, I couldn’t run commands against my EKS clusters. I kept hitting the following issue:

> kubectl get pods

Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
The Cause

AWS updated the aws-iam-authenticator component in version 0.5.4 to require v1beta1 in your kubeconfig file for the cluster context. More than likely your kubeconfig is still using v1alpha1, which generates this error.
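For context, the part of the kubeconfig in question is the exec block under the cluster’s user entry, which will look something like this (the cluster name and arguments below are placeholders, not taken from my setup):

users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1  # this is the value that needs updating
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "my-eks-cluster"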

The Fix

Update your kubeconfig file as necessary, replacing “v1alpha1” with “v1beta1” for any EKS cluster contexts.

vi ~/.kube/config

# Alternatively you could run something like the below to automate the change. This will also create a "config.bak" backup of the original file before the changes

sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
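Another option, if your AWS CLI is up to date, is to simply regenerate the kubeconfig entry for the cluster, which writes out the correct API version for you (cluster name and region below are placeholders):

aws eks update-kubeconfig --name my-eks-cluster --region us-east-2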


In my case, I ran the “sed” command, checked the file using “vi”, and was then able to run kubectl commands against the cluster successfully.


Official GitHub Page

 

Regards

Dean Lewis


Quick Fix – AWS Console – Current user or role does not have access to Kubernetes objects on this EKS Cluster

The Issue

Once you’ve deployed an EKS cluster and try to view it in the AWS Console, you are presented with the following message:

Your current user or role does not have access to Kubernetes objects on this EKS Cluster


The Cause

This is because you need to apply some additional configuration to your cluster to allow your AWS IAM user to access the cluster’s Kubernetes objects.

The Fix

Grab your User ARN from the Identity and Access Management (IAM) page.
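If you already have the AWS CLI configured as that user, you can also pull the ARN straight from the command line:

aws sts get-caller-identity --query Arn --output text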


Download this template YAML file for configuring the necessary ClusterRole and ClusterRoleBinding and then apply it to your EKS cluster.

curl -o eks-console-full-access.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/docs/eks-console-full-access.yaml

kubectl apply -f eks-console-full-access.yaml


Now edit the following configmap:

kubectl edit configmap/aws-auth -n kube-system

Add in the following under the data tree:

mapUsers: |
  - userarn: arn:aws:iam::3xxxxxxx7:user/[email protected]
    username: admin
    groups:
      - system:masters
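If you’d rather not edit the ConfigMap by hand, eksctl can create an equivalent mapping for you; something like the below should work (the cluster name, region and ARN are placeholders):

eksctl create iamidentitymapping --cluster my-eks-cluster --region us-east-2 --arn arn:aws:iam::111122223333:user/my-user --username admin --group system:masters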


After a minute or so, once you revisit the EKS Cluster page in the AWS console, you will see all the relevant details.


Regards

Dean Lewis


Deleting AWS EKS Cluster Fails? Learn How to Fix “Cannot Evict Pod as it Violates Disruption Budget” Error

The Issue

I had to remove a demo EKS Cluster where I had screwed up an install of a Service Mesh. Unfortunately, it was left in a rather terrible state to clean up, hence the need to just delete it.

When I tried the usual eksctl delete command, including with the force argument, I was hitting errors such as:

2021-12-21 23:52:22 [!] pod eviction error ("error evicting pod: istio-system/istiod-76f699dc48-tgc6m: Cannot evict pod as it would violate the pod's disruption budget.") on node ip-192-168-27-182.us-east-2.compute.internal

With a final error output of:

Error: Unauthorized


The Cause

Well, the error message does call out the cause: moving the existing pods to other nodes is failing due to the configured settings. Essentially, EKS will try to drain all the nodes and shut everything down nicely when it deletes the cluster; it doesn’t just shut everything down and wipe it. This is because inside Kubernetes there are several finalizers that will call out to AWS components (thanks to the integrations) and cleanly tidy things up (in theory).

To get around this, I first tried the following command, thinking that if I deleted the nodegroup without waiting for a drain, it would bypass the issue:

 eksctl delete nodegroup standard --cluster veducate-eks --drain=false --disable-eviction

However, this didn’t allow me to delete the cluster; I still got the same error messages.

The Fix

So back to the error message, and then I realised it was staring me in the face!

Cannot evict pod as it would violate the pod's disruption budget

What is a Pod Disruption Budget? It’s essentially a way to ensure the availability of your pods by preventing someone from killing them accidentally.

A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.
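As a purely illustrative example (the name and labels below are hypothetical, not taken from my cluster), a PDB like this would block any voluntary eviction that left fewer than one istiod replica running:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: istiod
  namespace: istio-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: istiod

On clusters older than Kubernetes 1.21, the apiVersion would be policy/v1beta1 instead.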

To find all configured Pod Disruption Budgets:

kubectl get poddisruptionbudget -A

Then delete as necessary:

kubectl delete poddisruptionbudget {name} -n {namespace}
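If you are tearing the whole cluster down anyway, you could remove every PDB in one sweep instead (use with care on anything you intend to keep):

kubectl delete poddisruptionbudget --all --all-namespaces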


Finally, you should be able to delete your cluster.
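For completeness, the delete command itself looks something like this, using the cluster name from earlier (add --force again if you still hit errors):

eksctl delete cluster --name veducate-eks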


 

Regards

Dean Lewis


Using vRA to deploy AWS EKS Clusters and register with Tanzu Mission Control

This walk-through will detail the technical configurations for using vRA Code Stream to deploy AWS EKS Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

Requirement

Tanzu Mission Control has some fantastic capabilities, including the ability to deploy Tanzu Kubernetes Clusters to various platforms (vSphere, AWS, Azure). However, today there is no support for provisioning native AWS EKS clusters, although it can manage most Kubernetes distributions.

Therefore, when I was asked about where VMware could provide such capabilities, my mind turned to deploying the clusters using vRA Code Stream and providing additional functions to make these EKS clusters usable.

High Level Steps
  • Create a Code Stream Pipeline
    • Create an AWS EKS Cluster
    • Add the EKS cluster as an endpoint in both Code Stream and Cloud Assembly
    • Register the EKS cluster in Tanzu Mission Control
Pre-Requisites
  • vRA Cloud access
    • The pipeline can be changed easily for use with vRA on-prem
  • AWS Account that can provision EKS clusters
  • A Docker host to be used by Code Stream
  • Tanzu Mission Control account that can register new clusters
  • VMware Cloud Console Tokens for vRA Cloud and Tanzu Mission Control API access
  • The configuration files for the pipeline can be found in this GitHub repository
Creating a Code Stream Pipeline to deploy an AWS EKS Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.