Tag Archives: Kubernetes


Interview with Daniel Bryant, Ambassador Labs – Kubernetes, PaaS, Err what’s next?

I’m really excited to get this interview out of the door. I missed Daniel’s session at KubeCon, “From Kubernetes to PaaS to … Err, What’s Next?”. The room was packed and I wasn’t able to get a seat, so instead I watched it on the KubeCon live stream, sat on the beanbags in the hallway.

The session was fantastic, but I couldn’t ask any questions afterwards. So I dropped Daniel a message on Twitter, and he agreed to chat and be recorded for an interview.

Originally, we set aside 25 minutes for the interview, but had so much fun we ended up at around 47 minutes. Rather than cut everything back down to the 25-minute mark, I decided to split the interview into two halves, so you can listen during your coffee breaks.

We break down Daniel’s KubeCon session in more depth, but importantly for me, we give it a platform/infrastructure operations spin, as this is my background in IT, while I build my knowledge in the Cloud Native world and learn new technology and software.

I hope you enjoy it as much as I did recording it! (YouTube Playlist).

Part 1

Part 2

Regards

Dean Lewis


VMware Cloud on AWS – Managed Tanzu Kubernetes Grid with Tanzu Mission Control

In my previous blog post, I detailed a full end-to-end guide to deploying and configuring the managed Tanzu Kubernetes Service offering that is part of VMware Cloud on AWS (VMC), finishing with some example application deployments and configurations.

In this blog post, I am moving on to show you how to integrate this environment with Tanzu Mission Control, which will provide fleet management for your Kubernetes instances. I’ve written several blog posts on TMC previously, which you can find below:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

Management with Tanzu Mission Control

The first step is to connect the Supervisor cluster running in VMC to our Tanzu Mission Control environment.

Connecting the Supervisor Cluster to TMC

Within the TMC console, go to:

  • Administration
  • Management Clusters
  • Register Management Cluster
    • Select “vSphere with Tanzu”

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster

On the Register Management Cluster page:

  • Set the friendly name for the cluster in TMC
  • Select the default cluster group for managed workload clusters to be added into
  • Set any description and labels as necessary

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Name and Assign

  • Proxy settings for a Supervisor Cluster running in VMC are not supported, so ignore Step 2.

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Proxy Configuration

  • Copy the registration URL.

Managed Tanzu Kubernetes Service - VMC - TMC - Register Management Cluster - Register

  • Log into your vSphere with Tanzu Supervisor cluster.
  • Find the namespace that identifies your cluster and is used for TMC configurations, using “kubectl get ns” (see the example after this list)
    • It will start with “svc-tmc-xx”
    • Copy this namespace name
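
For reference, a quick way to surface that namespace from the command line (a rough sketch; the suffix shown below is just an illustration and will differ in your environment):

 kubectl get ns | grep svc-tmc
 # example output (suffix will differ):
 # svc-tmc-c8   Active   12d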

Managed Tanzu Kubernetes Service - VMC - TMC - Supervisor Cluster - Kubectl get namespace

Continue reading VMware Cloud on AWS – Managed Tanzu Kubernetes Grid with Tanzu Mission Control


Tanzu Kubernetes Grid – Manual Certificate Renewal

The Issue
Note: VMware has released a full in-depth KB Article that I'd advise you review along with this blog post. If you have any queries or concerns with the processes detailed, always open a support ticket!
- How to rotate certificates in a Tanzu Kubernetes Grid cluster (86251)

One day my Kubernetes cluster just stopped responding. I could no longer connect to the Kubernetes API.

I rebooted all the nodes (as it was a demo environment), but to no avail; still nothing. So I had to go off digging.
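
If you hit the same symptoms, a quick way to dig in is to check whether the Kubernetes API server certificate has expired and to watch the kubelet logs (a rough sketch, assuming SSH access to a control-plane node; the address placeholder and exact paths may differ in your environment):

 # check the API server certificate expiry date from a machine that can reach the control plane
 echo | openssl s_client -connect <control-plane-ip>:6443 2>/dev/null | openssl x509 -noout -enddate

 # on the control-plane node itself, tail the kubelet logs
 sudo journalctl -u kubelet -f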

The Cause

I SSH’d into one of my control-plane nodes and started to tail the kubelet logs.

Continue reading Tanzu Kubernetes Grid – Manual Certificate Renewal


Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub

What is the vSphere Kubernetes Driver Operator (VDO)?

This Kubernetes Operator has been designed and created as part of the VMware and IBM Joint Innovation Labs program, with the aim of simplifying the deployment and lifecycle of VMware storage and networking Kubernetes driver plugins on any Kubernetes platform, including Red Hat OpenShift. We also talked about this at VMworld 2021 in a joint session with IBM and Red Hat.

This vSphere Kubernetes Driver Operator (VDO) exposes custom resources to configure the CPI and CSI drivers, and through a Go-based CLI tool it introduces validation and error checking as well, making the drivers simple to deploy and configure.

The Kubernetes Operator currently covers the existing vSphere CPI, CSI and CNI drivers, which are separately maintained projects found on GitHub.

This operator will remain CNI agnostic, so CNI management will not be included; Antrea, for example, already has its own operator.

Below is the high-level architecture; you can read a more detailed deep dive here.

vSphere Kubernetes Drivers Operator - Architecture Topology

Installation Methods

You have two main installation methods, which will also affect the pre-requisites below.

If you are using Red Hat OpenShift, you can install the Operator via Operator Hub, as this is a certified Red Hat Operator. You can configure the CPI and CSI driver installations via the UI as well.

  • Supported for OpenShift 4.9 currently.

Alternatively, you can install manually and use the vdoctl CLI tool; this method would also be your route if using a vanilla Kubernetes installation.

This blog post will cover the UI method using Operator Hub.
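
If you want to sanity-check things from the CLI alongside the Operator Hub UI, something like the following can confirm the operator is available in the catalogue and, once installed, is running (a rough sketch; the package name filter and install namespace are assumptions, so adjust for your environment):

 oc get packagemanifests -n openshift-marketplace | grep -i vsphere   # is the operator listed in the catalogue?
 oc get csv -A | grep -i vdo                                          # is it installed and in a Succeeded phase?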

Pre-requisites

Continue reading Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub


Deleting AWS EKS Cluster Fails? Learn How to Fix “Cannot Evict Pod as it Violates Disruption Budget” Error

The Issue

I had to remove a demo EKS Cluster where I had screwed up an install of a Service Mesh. Unfortunately, it was left in a rather terrible state to clean up, hence the need to just delete it.

When I tried the usual eksctl delete command, including with the force argument, I was hitting errors such as:

2021-12-21 23:52:22 [!] pod eviction error ("error evicting pod: istio-system/istiod-76f699dc48-tgc6m: Cannot evict pod as it would violate the pod's disruption budget.") on node ip-192-168-27-182.us-east-2.compute.internal

With a final error output of:

Error: Unauthorized

eksctl delete cluster - Cannot evict pod as it would violate the pod's disruption budget - Error Unauthorized

The Cause

Well, the error message does call out the cause: moving the existing pods to other nodes is failing due to the configured settings. Essentially, EKS will try to drain all the nodes and shut everything down nicely when it deletes the cluster; it doesn’t just shut everything down and wipe it. This is because inside Kubernetes there are several finalizers that will call out actions to interact with AWS components (thanks to the integrations) and nicely clean things up (in theory).

To get around this, I first tried the following command, thinking that if I deleted the nodegroup without waiting for a drain, this would bypass the issue:

 eksctl delete nodegroup standard --cluster veducate-eks --drain=false --disable-eviction

This didn’t allow me to delete the cluster, however; I still got the same error messages.

The Fix

So back to the error message, and then I realised it was staring me in the face!

Cannot evict pod as it would violate the pod's disruption budget

What is a Pod Disruption Budget? It’s essentially a way to ensure the availability of your pods by protecting them from being killed accidentally.

A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.
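
To make that concrete, a Pod Disruption Budget is just a small Kubernetes object. Below is a minimal illustrative example only; the name and labels here are assumptions, as the real istiod PDB is created by the Istio installation itself:

 apiVersion: policy/v1
 kind: PodDisruptionBudget
 metadata:
   name: istiod
   namespace: istio-system
 spec:
   minAvailable: 1          # always keep at least one istiod pod running
   selector:
     matchLabels:
       app: istiod

With a single istiod replica and minAvailable: 1, no voluntary eviction is ever allowed, which is exactly why the node drain during cluster deletion fails.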

To find all configured Pod Disruption Budgets:

kubectl get poddisruptionbudget -A

Then delete as necessary:

kubectl delete poddisruptionbudget {name} -n {namespace}
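
In my case that meant removing the Istio PDB; the name below is an assumption based on the earlier eviction error, so check the output of the get command first:

 kubectl delete poddisruptionbudget istiod -n istio-system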

eks - kubectl get poddisruptionbudgets -A - kubectl delete poddisruptionbudgets

Finally, you should be able to delete your cluster.

eksctl delete cluster - successful

 

Regards

Dean Lewis