VMware Tanzu Header

Tanzu Mission Control – Delete a provisioned cluster

In this blog post we are going to cover how to delete a Tanzu Kubernetes Grid cluster that has been provisioned by Tanzu Mission Control.

Below are the other blog posts in the series.

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

We are going to use the cluster I created in my last blog post.

Below are the EC2 instances that make up my TMC-provisioned cluster. Here I have filtered the view using the tag “tmc.cloud.vmware.com/cluster” plus the cluster name.

Tanzu Mission Control - AWS Consoles - Instances - Filtered tmc.cloud.vmware.com
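If you want to run the same check from the AWS CLI instead of the console, the below is a minimal sketch; the cluster name is a placeholder and I am assuming the tag value is simply the cluster name.

# List the EC2 instances carrying the TMC cluster tag
aws ec2 describe-instances \
  --filters "Name=tag:tmc.cloud.vmware.com/cluster,Values=<cluster-name>" \
  --query "Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}" \
  --output table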

Deleting a Provisioned cluster in the TMC UI

In the TMC UI, go to the Clusters view, click the three dots next to the cluster you want to remove, and select Delete.

Tanzu Mission Control - Clusters - Delete cluster

Alternatively, within the cluster object view, click Actions, then Delete.

Tanzu Mission Control - Cluster Object - Delete cluster

Both options bring up the confirmation dialog box shown below, where you select one of the following options:

  • Delete and remove agent (recommended)
    • Remove from TMC and delete agent extensions
  • Manually delete agent extensions
    • A secondary option, for when a cluster delete fails and the agent extensions must be removed manually

Enter the name of the cluster you want to delete to confirm the deletion.
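If you prefer the command line, the tmc CLI can also delete a provisioned cluster. The below is only a sketch from memory and the flag names are assumptions, so confirm them with tmc cluster delete --help before running anything.

# Delete a provisioned cluster via the tmc CLI (flags are assumptions; verify with --help)
tmc cluster delete <cluster-name> -m <management-cluster-name> -p <provisioner-name>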

Tanzu Mission Control - Cluster Object - Delete cluster - Confirm

Continue reading Tanzu Mission Control – Delete a provisioned cluster

VMware Tanzu Header

Tanzu Mission Control – Deploying TKG Clusters to AWS

This blog post will cover a technical walk-through on using Tanzu Mission Control to deploy Tanzu Kubernetes clusters to AWS.

The other blog posts in this series are:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

Using the AWS Hosted Management Cluster

In this example, we will use the default provided AWS Hosted Management cluster.

Alternatively, you can use the Tanzu CLI to provision a TKG Management cluster into AWS and attach this to Tanzu Mission Control.

Currently it is not supported to have a Management Cluster manage workload clusters across platforms.

  • For example, a Management Cluster in AWS cannot manage workload clusters in Azure.

To get started:

  1. Go to Administration
  2. Click the Management Clusters Tab
  3. Click on the “aws-hosted” cluster object name

TMC - Administration - Management Clusters

Create a provisioner

The default tab when selecting the “aws-hosted” management cluster object is the provisioner tab.

  • Click create provisioner

TMC - aws-hosted - provisioners - create provisioner

  • Provide a name for the provisioner
  • Click confirm

TMC - aws-hosted - provisioners - create provisioner - provide name

You will be taken back to the provisioners view, where your new provisioner object has been created. Selecting the object with the radio button allows you to delete it; no other action is available.

TMC - aws-hosted - provisioners - provisioner created

Create the AWS account

  1. Click on the Accounts tab
  2. Click the “Create Account Credential” button

TMC - aws-hosted - accounts - create account credential

Continue reading Tanzu Mission Control – Deploying TKG Clusters to AWS

github logo

Download Releases from Github using Curl and Wget

The issue

I was trying to download a software release from GitHub using curl and hit an issue: the downloaded file was far too small, unusable, and clearly not the file I expected.

curl -O https://github.com/kastenhq/external-tools/releases/download/3.0.12/k10tools_3.0.12_linux_amd64


  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   635  100   635    0     0   2387      0 --:--:-- --:--:-- --:--:--  2387

I tested the link in a browser and found that it redirects elsewhere; curl was saving the short redirect response rather than following it to the actual file.

The Fix

I just needed to specify a few extra arguments.

curl -LJO https://github.com/kastenhq/external-tools/releases/download/3.0.12/k10tools_3.0.12_linux_amd64

# Explanation of the arguments

-L, --location             Follow redirects
-J, --remote-header-name   Use the header-provided filename (Content-Disposition)
-O, --remote-name          Write output to a file named as the remote file

If you wish to use wget instead:

wget --content-disposition https://github.com/kastenhq/external-tools/releases/download/3.0.12/k10tools_3.0.12_linux_amd64

# Explanation of the argument 
--content-disposition        honor the Content-Disposition header when choosing local file names (EXPERIMENTAL)
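A quick sanity check that the fix actually pulled down the binary rather than another redirect stub:

# The file should now be a real binary of a sensible size, not a few hundred bytes
file k10tools_3.0.12_linux_amd64
ls -lh k10tools_3.0.12_linux_amd64

# Make it executable before running it
chmod +x k10tools_3.0.12_linux_amd64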

Regards

Docker Header

Using Docker to update and commit to a container image

I was helping a customer build some customized automation tasks using vRealize Automation Codestream. These tasks required a container image with certain tools installed. Usually we can include a CI task that downloads the tools into the container image on the fly; however, my customer’s environment is offline, so I needed to provide them with a container image that has everything installed by default.

Before we dive into the process of running a container and committing the changes, note that where possible it is recommended to create a Dockerfile that builds your image with the required commands, such as the below:

# Start from the official Node.js 12 image on Alpine Linux
FROM node:12-alpine
# Install the build tools needed to compile native dependencies
RUN apk add --no-cache python g++ make
# Set the working directory and copy the application source into the image
WORKDIR /app
COPY . .
# Install production dependencies and set the container start command
RUN yarn install --production
CMD ["node", "src/index.js"]

Committing changes to a container image in this way can cause the image to become bloated, but sometimes there’s a need to just do it this way.

Prerequisites
Pull the image you want to update
docker pull {image location/name}

docker pull image

Check your images and get the ID
docker images

For the next command, we will need the Image ID.

docker images

Run your image as an active container
docker run -it {Image_ID} /bin/bash

This will then drop you into the tty of the running container.

docker run -it image_id bash

Modify your container
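As a rough preview of where this is heading, the below is a minimal sketch of modifying the container and committing it as a new image; the packages, registry and tag names are hypothetical examples rather than the exact commands used later in this walkthrough.

# Inside the running container: install the extra tooling (the package manager depends on your base image)
apk add --no-cache curl jq
exit

# Back on the host: grab the ID of the container you just exited
docker ps -a

# Commit the modified container as a new image and push it to your registry
docker commit <Container_ID> <your-registry>/codestream-tools:v1
docker push <your-registry>/codestream-tools:v1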

Continue reading Using Docker to update and commit to a container image

kubestr header

Kubestr – Open-Source Kubernetes Storage benchmarking tool

Kubernetes is a platform that hardly needs an introduction in most settings these days. However, it can still be a complex beast to get to grips with, and configuring your infrastructure components correctly is key to providing a successful Kubernetes environment for your applications.

One of these such areas is storage.

With your Kubernetes platform, you need to ensure a correct storage configuration and benchmark your storage performance, just as you would with any other platform, and then test container storage features such as snapshots of persistent volumes.

The configuration for each vendor when integrating with Kubernetes will be different, but the outcomes should be the same.

What is Kubestr?

Enter Kubestr, an open-source tool from Kasten by Veeam, designed to help you ensure your storage is configured correctly, benchmark its performance, and test features such as snapshots.

Getting started with Kubestr

Simply download Kubestr for the platform you wish to run the tool from. I’ll be running it from my Mac OS X machine, which has connectivity to my Kubernetes platform (AWS EKS; I used this blog to create it).

I extracted the zip file and have the Kubestr command line tool available in the output folder.

kubestr download and extract

Running the tool for the first time will run some tests and output a number of useful pieces of information on how we can use the tool, which I will start to break down as we continue.

For Kubestr to run, it will use the active context in your kubectl configuration file.
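A minimal sketch of that first run, assuming the extracted binary is sitting in your current directory and your kubeconfig already points at the target cluster:

# Confirm which cluster kubestr will talk to
kubectl config current-context

# Run the initial checks (Kubernetes version, RBAC, aggregated API layer, provisioners)
./kubestr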

kubestr first run

Green box – here we have our initial checks:

  • Kubernetes version
  • RBAC check
  • Kubernetes Aggregated layer check

Then we have the details of the available storage provisioners installed on our cluster. You can see that I have two installed.

Continue reading Kubestr – Open-Source Kubernetes Storage benchmarking tool
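Beyond those initial checks, the same binary can benchmark a storage class and exercise CSI snapshots. The below is a rough sketch; the class names are placeholders and the exact flags are worth confirming with kubestr --help.

# Run an fio benchmark against a storage class (kubestr creates and cleans up a test PVC)
./kubestr fio -s <storage-class-name>

# Validate CSI snapshot support for a storage class / volume snapshot class pair
./kubestr csicheck -s <storage-class-name> -v <volume-snapshot-class-name>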