Tag Archives: AWS


EKS – Kubectl – Unable to connect to the server – Exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

The Issue

After moving my life over to a new MacBook and installing the latest AWS CLI tools, including the “aws-iam-authenticator” tool, I couldn’t run commands against my EKS clusters. I kept hitting the following issue:

> kubectl get pods

Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
[Screenshot: EKS aws-iam-authenticator - unable to connect to cluster error]
The Cause

AWS updated the aws-iam-authenticator component in version 0.5.4 to require v1beta1 in your kubeconfig file for the cluster context. More than likely your kubeconfig still references v1alpha1, which generates this error.

The Fix

Update your kubeconfig file as necessary, replacing “v1alpha1” with “v1beta1” for any contexts that point at EKS clusters.

vi ~/.kube/config

# Alternatively you could run something like the below to automate the change. This will also create a "config.bak" copy of the original file before the changes

sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
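
For reference, the part of the kubeconfig that matters here is the exec block under the cluster's user entry. After the change it should look roughly like the snippet below (the cluster ARN, region and cluster name are placeholders for your own values):

users:
- name: arn:aws:eks:eu-west-2:111122223333:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # previously v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster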

[Screenshot: kubeconfig file updated from v1alpha1 to v1beta1]

Below you can see I used the “sed” command, checked my file using “vi”, then ran the kubectl command successfully.

[Screenshot: sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config]
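
Alternatively, rather than patching the file by hand, you can regenerate the kubeconfig entry for the cluster with the AWS CLI. Note that recent CLI versions write an exec block that uses "aws eks get-token" (with the v1beta1 API version) instead of aws-iam-authenticator; the cluster name and region below are placeholders:

aws eks update-kubeconfig --name my-cluster --region eu-west-2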

Official GitHub Page

 

Regards

Dean Lewis


Configuring DNS Delegation from CloudFlare to AWS Route53

This blog post covers how to delegate DNS control from Cloudflare to AWS Route53, so that you can host publicly resolvable records in Route53 for services deployed into AWS, even though your primary domain is held by another provider (Cloudflare).

In my working example, I was creating an OpenShift cluster in AWS using the IPI installation method, meaning the installer creates any necessary records in AWS Route 53 on your behalf. I couldn’t rehost my full domain in Route53, so I decided to delegate just a subdomain.

  • You will need access to your Cloudflare console and AWS console.

Open your AWS Console, go to Route53, and create a hosted zone.

[Screenshot: AWS Route 53 - Create Hosted Zone]

Configure a domain name; this will be along the lines of {subdomain}.{primarydomain}. For example, my main domain name is veducate.co.uk, and the subdomain I want AWS to manage is example.veducate.co.uk.

I’ve selected the public type, so that the records I create are publicly resolvable.

[Screenshot: AWS Route 53 - Create Hosted Zone - Configuration]
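
If you prefer the CLI, the equivalent hosted zone can be created as shown below; the domain is taken from this example and the caller reference just needs to be a unique string:

aws route53 create-hosted-zone --name example.veducate.co.uk --caller-reference "$(date +%s)"

The output includes a DelegationSet section listing the same four name servers discussed next.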

Now that my zone is created, I have four name servers which will host this zone (highlighted in the red box). Take a copy of these.

[Screenshot: AWS Route 53 - Hosted Zone - NS servers]

In your DNS provider (Cloudflare in this example), create a record of type NS (Name Server). The record name is the subdomain, and the name server is one of the four provided by the AWS Route53 hosted zone.

Repeat this for each of the four servers.

[Screenshot: Cloudflare - create NS record]

Below you can see I’ve created the records to map to each of the AWS Route53 Name Servers.

[Screenshot: Cloudflare - all four NS records created]
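
Once the NS records have propagated, the delegation can be verified from any machine with dig (domain from this example):

dig NS example.veducate.co.uk +short

This should return the four Route53 name servers configured above.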

Now, back in the AWS Console, in the Route53 service within my hosted zone, I can start to create records.

[Screenshot: AWS Route53 - Create record]

Provide the name, type and value, then create the record.

[Screenshot: AWS Route53 - Quick create record]
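
For completeness, the same record could also be created from the CLI rather than the console; the hosted zone ID, record name and IP address below are purely illustrative:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"test.example.veducate.co.uk","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.10.10.10"}]}}]}'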

Below you can see the record has been created.

[Screenshot: AWS Route53 - Records]

And finally, to test, we can see the DNS record resolving from my laptop.

[Screenshot: nslookup example]
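
The check in the screenshot is equivalent to running something like the following, where the record name is the one created above (illustrative here):

nslookup test.example.veducate.co.uk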

Regards

Dean Lewis


Quick Fix – AWS Console – Current user or role does not have access to Kubernetes objects on this EKS Cluster

The Issue

Once you’ve deployed an EKS cluster and try to view it in the AWS Console, you are presented with the following message:

Your current user or role does not have access to Kubernetes objects on this EKS Cluster

[Screenshot: AWS Console - Container Services - current user or role does not have access to Kubernetes objects on this EKS Cluster]

The Cause

This is because you need to apply some additional configuration to your cluster to allow your AWS IAM user to access its Kubernetes objects.

The Fix

Grab your User ARN from the Identity and Access Management (IAM) page.

[Screenshot: AWS Console - IAM user ARN]
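
If the AWS CLI is configured with the same identity, you can also read the ARN directly instead of going via the console:

aws sts get-caller-identity --query Arn --output text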

Download this template YAML file for configuring the necessary ClusterRole and ClusterRoleBinding and then apply it to your EKS cluster.

curl -o eks-console-full-access.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/docs/eks-console-full-access.yaml

kubectl apply -f eks-console-full-access.yaml

[Screenshot: applying eks-console-full-access.yaml]

Now edit the following configmap:

kubectl edit configmap/aws-auth -n kube-system

Add in the following under the data tree:

mapUsers: |
  - userarn: arn:aws:iam::3xxxxxxx7:user/[email protected]
    username: admin
    groups:
      - system:masters

[Screenshot: editing the aws-auth ConfigMap]
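
As an alternative to editing the ConfigMap by hand, eksctl can add the same mapping for you; the cluster name, region and user ARN below are placeholders:

eksctl create iamidentitymapping --cluster my-cluster --region eu-west-2 --arn arn:aws:iam::111122223333:user/my-user --username admin --group system:masters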

After a minute or so, once you revisit the EKS Cluster page in the AWS console, you will see all the relevant details.

[Screenshot: AWS Console - Container Services - EKS cluster view]

Regards

Dean Lewis


Deploying Nvidia GPU enabled Tanzu Kubernetes Clusters

In this blog post I’m going to detail how to deploy and configure an Nvidia GPU enabled Tanzu Kubernetes Grid cluster in AWS. The method will be similar for Azure; for vSphere there are a number of additional steps to prepare the system. I’m going to essentially follow the official documentation, then run some of the Nvidia tests. As always, it’s good to have a visual reference for these kinds of deployments.

Pre-Reqs
  • Nvidia currently only supports Ubuntu-based images for TKG deployments
  • For this blog I’ve already deployed my TKG Management cluster in AWS
Deploy a GPU enabled workload cluster

It’s simple: just deploy a workload cluster that uses a GPU-enabled instance type for the compute plane (worker) nodes.

You can create a new cluster YAML file from scratch, or clone one of your existing files, located in:

~/.config/tanzu/tkg/clusterconfigs

Below are the main values you will need to change. As mentioned above, you need a GPU-enabled instance, and the OS needs to be Ubuntu. The OS version defaults to 20.04 if not set.

CONTROL_PLANE_MACHINE_TYPE: t3.large
NODE_MACHINE_TYPE: g4dn.xlarge
OS_ARCH: amd64
OS_NAME: ubuntu
OS_VERSION: "20.04"

You configure the rest of the file as you would for any workload cluster deployment.

Continue reading Deploying Nvidia GPU enabled Tanzu Kubernetes Clusters


Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’

The Issue

When deploying Tanzu Kubernetes Grid to AWS, the deployment was failing with the following output:

unable to set up management cluster, : unable to wait for cluster and get the cluster kubeconfig: error waiting for cluster to be provisioned (this may take a few minutes): cluster creation failed, reason:'InstanceProvisionFailed @ Machine/tkg-aws-mgmt-control-plane-dqb4v', message:'1 of 2 completed'
The Cause

When we reviewed the CAPA (Cluster API Provider AWS) logs, we found the following errors logged:

Continue reading Deploying Tanzu Kubernetes Grid to AWS fails with ‘InstanceProvisionFailed’