Tag Archives: Install


Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub

What is the vSphere Kubernetes Driver Operator (VDO)?

This Kubernetes Operator has been designed and created as part of the VMware and IBM Joint Innovation Labs program, with the aim of simplifying the deployment and lifecycle of the VMware storage and networking Kubernetes driver plugins on any Kubernetes platform, including Red Hat OpenShift. We also talked about this at VMworld 2021 in a joint session with IBM and Red Hat.

This vSphere Kubernetes Driver Operator (VDO) exposes custom resources to configure the CPI and CSI drivers and, using a Go-based CLI tool, introduces validation and error checking as well, making the drivers simple to deploy and configure.

The Kubernetes Operator currently covers the existing CPI and CSI drivers, which are separately maintained projects found on GitHub.

This operator will remain CNI agnostic, so CNI management will not be included; Antrea, for example, already has its own operator.

Below is the high-level architecture; you can read a more detailed deep dive here.

vSphere Kubernetes Drivers Operator - Architecture Topology

Installation Methods

There are two main installation methods, and your choice also affects the pre-requisites below.

If you are using Red Hat OpenShift, you can install the Operator via Operator Hub, as it is a certified Red Hat Operator. You can also configure the CPI and CSI driver installations via the UI.

  • Supported for OpenShift 4.9 currently.

Alternatively, you can install manually and use the vdoctl CLI tool; this is also the route to take if you are using a vanilla Kubernetes installation.
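With either method, once the operator is installed you can quickly confirm it is running by checking its pods with kubectl. A minimal sketch is below; the vmware-system-vdo namespace is an assumption based on the project defaults, so adjust it to wherever you installed the operator.

# Check the operator pods and its custom resource definitions (namespace is an assumption)
kubectl get pods -n vmware-system-vdo
kubectl get crds | grep vdo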

This blog post will cover the UI method using Operator Hub.

Pre-requisites

Continue reading Using the new vSphere Kubernetes Driver Operator with Red Hat OpenShift via Operator Hub


Creating and hosting a Helm Chart package to install Pac-Man on Kubernetes

If you have been following any of my blogs relating to Kubernetes, I am sure you will have seen the use of my demo application, Pac-Man, designed to replicate a small production application with a front-end UI service, a DB back-end service, and a load-balancing service.

If not, you can find it here:

In this blog post, I am going to cover how I create a Helm Chart package to install the application on a Kubernetes cluster, and then host it on GitHub so that it can be re-used as necessary between different clusters.

This was on my to-do list for quite a while, as I wanted to explore Helm in more detail and understand how the charts work. What better way to do this than create my own?

What is Helm and why use it?

Helm is a tool that simplifies the installation and lifecycle of Kubernetes applications. As an example, it is a little bit like Homebrew or YUM for Linux.

Helm uses a package format called charts; these charts are a collection of files that describe a related set of Kubernetes resources. Charts can range from the simple (deploying a single Pod or Deployment) to the complex (deploying a full application made up of Deployments, StatefulSets, PVCs, Ingress resources, and so on).

Over the years, Helm has become one of the de facto client tools for simplifying the deployment of applications to a Kubernetes environment. Take Kasten, for example: to deploy their K10 software, their guide gives you only the Helm commands to do so.
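As a rough illustration of that pattern, the K10 install in Kasten's documentation boils down to a couple of Helm commands along these lines (the release name and namespace shown are the common defaults, but confirm the exact values against their docs):

helm repo add kasten https://charts.kasten.io/
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace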

You can install Helm using the script below; for other methods, please see the official documentation.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

chmod 700 get_helm.sh

./get_helm.sh
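Once the script has finished, a quick check confirms the client is installed and on your path:

helm version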
Creating a template Helm Chart

The Helm client makes it easy to get started from scratch: you can create a template chart by running the following command, which creates a folder with the name you specify, containing a number of example files you can use.

helm create {name}

# For this blog post I ran the following

helm create pacman-kubernetes

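For reference, the folder that helm create produces typically looks like this (Helm 3; the exact contents can vary slightly between Helm versions):

pacman-kubernetes/
├── Chart.yaml          # chart metadata - name, version, appVersion
├── values.yaml         # default configuration values for the templates
├── .helmignore         # patterns excluded when packaging the chart
├── charts/             # dependencies (sub-charts) live here
└── templates/          # templated Kubernetes manifests
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── hpa.yaml
    ├── serviceaccount.yaml
    ├── _helpers.tpl    # named template helpers shared across the manifests
    ├── NOTES.txt       # usage notes printed after install
    └── tests/          # simple connection test pod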

In more detail, this structure offers the following: Continue reading Creating and hosting a Helm Chart package to install Pac-Man on Kubernetes


First Look – Setup Tanzu Build Services and rebuilding Pac-Man

This blog post will detail how to set up Tanzu Build Service in a test environment, and then create a container image from a Dockerfile, fixing several vulnerabilities compared to the current container image.

What is Tanzu Build Service?

Tanzu Build Service uses the open-source Cloud Native Buildpacks project to turn application source code into container images.

Build Service executes reproducible builds that align with modern container standards, and additionally keeps image resources up-to-date. It does so by leveraging Kubernetes infrastructure with kpack, a Cloud Native Buildpacks Platform, to orchestrate the image lifecycle. 

Build Service helps you develop and automate containerized software workflows securely and at scale.

You can read more about the Tanzu Build Service concepts here.
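To make that concrete, once Build Service is installed an image build is declared with the kp CLI, broadly along the lines of the sketch below. The image name, registry tag and Git repository are illustrative assumptions, and it assumes a registry secret and default cluster builder are already configured; it is not the exact command from this walkthrough.

# Declare an image build from application source (illustrative values)
kp image create pacman \
  --tag harbor.example.com/library/pacman \
  --git https://github.com/example/pacman-demo.git \
  --git-revision main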

Pre-Reqs

Have an image registry that is accessible to both your local client and your Kubernetes cluster.

  • I used Dockerhub for my lab environment.

Install the Carvel tools.

  • kapp is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • ytt is a templating tool that understands YAML structure.
  • kbld is needed to map relocated images to k8s config.
  • imgpkg is a tool that relocates container images and pulls the release configuration files.
brew tap vmware-tanzu/carvel

brew install ytt kbld kapp imgpkg kwt vendir

Install the kp cli tool.

# Download from the Tanzu Network pages

chmod +x kp-linux-0.4.0
sudo mv kp-linux-0.4.0 /usr/local/bin/kp

# Install using Brew

brew tap vmware-tanzu/kpack-cli
brew install kp

# Download from GitHub Releases Page
curl -LJO https://github.com/vmware-tanzu/kpack-cli/releases/download/v0.4.2/kp-linux-0.4.2
chmod +x kp-linux-0.4.2
sudo mv kp-linux-0.4.2 /usr/local/bin/kp
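Whichever install method you pick, you can verify the CLI is available before moving on:

kp version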
Installing Tanzu Build Services

Log in to your registry that will host the Build Service containers and be used by your Kubernetes cluster. Continue reading First Look – Setup Tanzu Build Services and rebuilding Pac-Man


Kasten K10 – Air gap installation using Harbor Image Registry

In this blog post, I will cover the steps for an air-gapped installation of Kasten K10, for situations where your Kubernetes cluster doesn’t have internet access available to pull down the container images directly from their online locations.

Pre-requisites
  • Image Registry that is accessible by your Kubernetes cluster
  • Client that has access to download the container images and push them to the Image Registry
    • In this example, I am using my local machine which has docker installed.
  • Helm downloaded
    • Run the following to get the helm files locally for the install.
helm repo add kasten https://charts.kasten.io/ && \
    helm repo update && \
    helm fetch kasten/k10 --version=<k10-version>

Example for Kasten K10 4.5.0

helm repo update && \ 
    helm fetch kasten/k10 --version=4.5.0

This will download a file, for example "k10-4.5.0.tgz".
Log into your Image Registry

First, you need to ensure that your Docker client (or similar) has authenticated to the image registry which your air-gapped Kubernetes cluster can access.

When using Harbor and Docker, I typically use this method with a robot account for programmatic access.
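As a minimal sketch, a robot account login looks like the below; the registry hostname and robot account name are placeholders for your own values (note the single quotes, as Harbor robot account names contain a $):

docker login harbor.example.com --username 'robot$kasten'
# you are then prompted for the robot account's token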

However, when running the Kasten tooling which we’ll discuss next, I kept hitting an error. Continue reading Kasten K10 – Air gap installation using Harbor Image Registry


Data Management for VMware Tanzu – Getting Started

This blog post will cover deploying the infrastructure and components for Data Management for VMware Tanzu.

My second blog post will cover using this infrastructure for Self-Service Database-as-a-Service.

What is Data Management for VMware Tanzu?

Data Management for VMware Tanzu (DMS) is a newly released solution from VMware (July 2021), providing a data-as-a-service toolkit for on-demand provisioning and automated management of MySQL and PostgreSQL databases on vSphere platforms.

DMS is accessible via both a graphical UI and a REST API, to meet the consumption needs of both administrators and developers.

DMS provides the ability to create and manage data services through a centralized platform in a self-service fashion, with the following features:

  • Simplified management for admins: DMS acts as a database fleet management tool, presenting a view of the organization’s database instances running on multi-cloud infrastructure.
  • Database users have the ability to consume self-service capabilities to create new database instances, or to operate on existing instances safely and securely, without requiring infrastructure or database expertise.
  • DMS also provides full automation for provisioning data service instances, backups, security patches, and periodic updates of the data service engine.

Data Management for Tanzu Provider Home Page

Data Management for Tanzu Provider Create Database Page

Understanding the components

DMS is made up of the following architectural components:

  • Provider – this is the core appliance you will deploy, which offers the central UI and API for all users to interact with the data services and functions. It acts as the control plane for the other components.
  • Agent – these appliances are deployed to extend the control plane into the various vSphere environments, providing a point of presence for provisioning and management operations of the services deployed.
  • Service – these are Photon OS appliances which host the deployed instance of the data service (database). They communicate with the Agent that deployed them via a private API. DMS currently supports the deployment of MySQL and PostgreSQL.
  • Template Repo – a set of Data Management for VMware Tanzu database templates published on Tanzu Network. The Provider polls Tanzu Network periodically for new templates; there is also a method to handle air-gapped environments.

S3 storage is required for several items, such as a location to store the templates, database configurations, and database backups.
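If you are providing that via an S3-compatible object store, pre-creating the buckets is a one-liner per bucket with the AWS CLI; the bucket names and endpoint below are purely illustrative, not names DMS mandates.

aws s3 mb s3://dms-templates --endpoint-url https://s3.example.com
aws s3 mb s3://dms-backups --endpoint-url https://s3.example.com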

Full deployment models for the components can be found here.

Data Management for Tanzu Architecture

Understanding Organisations and User Access

DMS implements the concept of Organisations to provide a logical grouping of users. There are two types:

  • Provider Org – A type of organization to which one or more Provider Administrator users belong.
    • Only one Provider Org can exist in a single DMS installation.
    • This is automatically created during the deployment of the Provider Appliance.
    • The Provider Org name is the company name specified at deployment.
  • Agent Org – A type of organization with one or more Organization Administrator or Organization User members.
    • These orgs are created via the DMS UI/API once the Provider appliance has been deployed and can be created at any time.

DMS pre-defines these three user roles:

  • Provider Administrator
    • This is the single Provider Role in the installation
    • Among other tasks, users in this role can import additional Provider Administrator users, create organizations, and create and import organization users
  • Organization Administrator
  • Organization User

The Provider Administrator user will assign a role to each DMS user that they create or import in an organization.

A user that is assigned the Organization Administrator role can manage all services in the organization to which they belong. A user assigned the Organization User role can manage only the services that they provision.

More detailed information on the User roles and responsibilities can be found here.

Getting Started

First and foremost, I’ll point you towards the official documentation to use as a reference alongside this blog post.

Prerequisites

There are always several things to get sorted before you dive right in! The official requirements are detailed here; I’m going to call out some of the more finicky pieces you need to be aware of. Continue reading Data Management for VMware Tanzu – Getting Started