Category Archives: Kubernetes

MongoDB Container data loss issue – A Journey

Over the past month or so I noticed an issue with my Pac-Man Kubernetes application, which I use for demonstrations as a basic app front end that writes to a database back end, all running in Kubernetes.

  • When I restored my instances using Kasten, my Pac-Man high scores were missing.
  • This issue started after I made changes to my deployment files to configure authentication to MongoDB using environment variables.

This blog post is a detailed walk-through of the steps I took to troubleshoot the issue, and then rectify it!

Summary if you don’t want to read the post

If you are not looking to read through this blog post, here is the summary:

  • Because I changed MongoDB images, I needed to configure a new mount point location to match the new image’s MongoDB configuration.
  • The new MongoDB image runs as non-root, so I had to use an init container to set the permissions on the PV first (see the sketch below).
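As a rough illustration, an init container along the lines below can fix ownership of the persistent volume before the non-root MongoDB container starts. The image, mount path (/bitnami/mongodb) and UID/GID (1001) are assumptions based on a typical non-root MongoDB image such as bitnami/mongodb, not the exact values from my manifests; adjust them to whatever image and data path your deployment uses.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      initContainers:
        - name: volume-permissions
          image: busybox:1.35
          # chown the PV mount so the non-root (UID 1001) MongoDB user can write to it
          command: ["sh", "-c", "chown -R 1001:1001 /bitnami/mongodb"]
          volumeMounts:
            - name: mongo-db
              mountPath: /bitnami/mongodb
      containers:
        - name: mongo
          image: bitnami/mongodb:4.4
          volumeMounts:
            - name: mongo-db
              mountPath: /bitnami/mongodb   # must match the dbPath the image is configured to use
      volumes:
        - name: mongo-db
          persistentVolumeClaim:
            claimName: mongo-storage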
Overview of the application

The application is made up of the following components:

  • Namespace
  • Deployment
    • MongoDB Pod
      • DB Authentication configured
      • Attached to a PVC
    • Pac-Man Pod
      • Nodejs web front end that connects back to the MongoDB Pod by looking for the Pod DNS address internally.
  • RBAC Configuration for Pod Security and Service Account
  • Secret which holds the MongoDB usernames and passwords used to configure authentication (see the sketch after this list)
  • Service
    • Type: LoadBalancer
      • Used to balance traffic to the Pac-Man Pods
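To make the authentication piece concrete, here is a minimal sketch of how the Secret and the MongoDB container environment variables could be wired together. The Secret name, keys, values and variable names are illustrative assumptions (the official mongo image reads MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD; other images use different variable names), not necessarily what the Pac-Man manifests use.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-users-secret
type: Opaque
stringData:
  database-admin-name: admin          # illustrative values only
  database-admin-password: changeme
---
# Fragment of the MongoDB container spec that consumes the Secret
containers:
  - name: mongo
    image: mongo:4.4
    env:
      - name: MONGO_INITDB_ROOT_USERNAME
        valueFrom:
          secretKeyRef:
            name: mongodb-users-secret
            key: database-admin-name
      - name: MONGO_INITDB_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongodb-users-secret
            key: database-admin-password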

Pac-Man Kubernetes Diagram

Confirming the behaviour

The behaviour I was seeing when my application was deployed:

  • Pac-Man web page – I could save a high score, and it would show in the high scores list
    • This showed the connectivity to the database was working, as the app would hang if it could not write to the database.
  • I would protect my application using Kasten. When I deleted the namespace and restored everything, my application would be running, but there were no high scores to show.
  • This behaviour appeared when deploying branch versions v0.5.0 and v0.5.1 from my GitHub repository.
  • Deploying the branch v0.2.0 would not produce the same behaviour.
    • This configuration did not have any database authentication set up, meaning MongoDB was open to anyone who could connect, with no username or password required.
Testing the Behaviour

Continue reading MongoDB Container data loss issue – A Journey

Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift

In this blog post I’m going to cover setting up the Multi-cluster support for Kasten when you’ve installed the software to multiple Kubernetes clusters.

One of the K10 clusters you have deployed will become the primary. You will use this cluster and its dashboard interface to access the multi-cluster UI.

  • The primary cluster defines policies and other configuration centrally. Centrally defined policies and configuration can then be distributed to designated clusters to be enacted.

Additional clusters are then added in and are called Secondaries.

  • The secondary clusters receive policies and other configuration from the primary cluster. Once policies are distributed to a secondary, the local K10 installation enacts the policy. This ensures that the policy will continue to be enforced, even if disconnected from the primary.

Pre-Requisites:

  • Authentication
    • Token Authentication must be used
  • Network
    • Secondary K10’s ingress must be accessible by the primary
    • Secondary API Server must be accessible by the primary
  • Run the tool on a bastion host that has connectivity using kubectl to all of the clusters you want to bring together.
Download the K10MultiCluster tool
  • Set the tool as executable
  • Move the tool to your /usr/local/bin/ folder
curl -LJO https://github.com/kastenhq/external-tools/releases/download/4.0.9/k10multicluster_4.0.9_linux_amd64

chmod +x k10multicluster_4.0.9_linux_amd64

sudo mv k10multicluster_4.0.9_linux_amd64 /usr/local/bin/k10multicluster

Next, let’s list out the available clusters we can connect to from our node:

kubectl config get-contexts

Setup Primary Cluster

Now we are ready to set up our primary cluster by running the following command:

Continue reading Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift
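The command in question takes roughly the following form (a sketch based on the K10 4.0.x documentation; the context, name and ingress values are placeholders you would replace with your own):

k10multicluster setup-primary \
  --context=tanzu-cluster-context \
  --name=tanzu-primary \
  --ingress=https://k10.primary.example.com/k10/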

Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

The Issue

A while ago I was chatting to Michael Cade, and we pondered the question “How do we ensure Kasten is protecting a newly deployed application in our Kubernetes environment”.

We chatted about how one of the best ways to make your Kasten protection policy flexible is by using metadata labels.

We came up with the simple idea: “What if something forces a known label on the metadata of any applications deployed by our developers in the future?”

This blog post covers this use case using Tanzu Mission Control with custom policies.

The Solution

One of the products we can use to enforce labels on a Kubernetes resource is Open Policy Agent (OPA) Gatekeeper, an external admission controller that allows you to create policies governing the admission of resource creation, changes, and updates based on defined criteria.

  • OPA policies are expressed in a high-level declarative language called Rego. (Pronounced “ray-go”.)

Tanzu Mission Control, the fleet management SaaS tool for managing your Kubernetes platforms, provides you the ability to create policies of various types to manage the operation and security posture of your Kubernetes clusters and other organizational objects. These custom policies are implemented using OPA Gatekeeper.
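To give a flavour of what Gatekeeper enforces under the hood, below is a minimal sketch of a ConstraintTemplate that rejects resources missing a required label. This is the well-known k8srequiredlabels example from the Gatekeeper documentation, shown purely for illustration; Tanzu Mission Control generates its own templates and constraints when you create a custom policy.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the parameters passed to each Constraint
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }

A matching Constraint resource would then list the required label keys (for example, a backup label that the Kasten policy selects on) and the resource kinds it applies to.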

Implementing The Solution

For this “art of the possible” solution blog post, we are going to keep it really simple and implement a policy which covers the following:

Continue reading Tanzu Mission Control – Using custom policies to ensure Kasten protects a deployed application

Deploying OpenShift clusters (IPI) using vRA Code Stream

This walk-through will detail the technical configurations for using vRA Code Stream to deploy Red Hat OpenShift Clusters, register them as Kubernetes endpoints in vRA Cloud Assembly and Code Stream, and finally register the newly created cluster in Tanzu Mission Control.

The deployment uses the Installer Provisioned Infrastructure (IPI) method for deploying OpenShift to vSphere, which means the installation tool “openshift-install” provisions the virtual machines and configures them for you, with the cluster using internal load balancing for its API interfaces.

This post mirrors my original blog post on using vRA to deploy AWS EKS clusters.

Pre-reqs
  • Red Hat Cloud Account
    • With the ability to download and use a Pull Secret for creating OpenShift Clusters
  • vRA access to create Code Stream Pipelines and associated objects inside the pipeline when it runs.
    • Get CSP API access token for vRA Cloud or on-premises edition.
  • Tanzu Mission Control access with ability to attach new clusters
    • Get a CSP API access token for TMC
  • vRA Code Stream configured with an available Docker Host that can connect to the network you will deploy the OpenShift clusters to.
    • This Docker container is used for the pipeline
    • You can find the Dockerfile here, and alter per your needs, including which versions of OpenShift you want to deploy.
  • SSH Key for a bastion host access to your OpenShift nodes.
  • vCenter account with appropriate permissions to deploy OpenShift
  • DNS records created for OpenShift Cluster
    • api.{cluster_id}.{base_domain}
    • *.apps.{cluster_id}.{base_domain}
  • Files to create the pipeline are stored in either of these locations:
High Level Steps of this Pipeline
  • Create an OpenShift Cluster
    • Build an install-config.yaml file to be used by the openshift-install command line tool (a minimal example follows this list)
    • Create the cluster based on a number of user-provided inputs and vRA variables
  • Register OpenShift Cluster with vRA
    • Create a service account on the cluster
    • Collect details of the cluster
    • Register cluster as Kubernetes endpoint for Cloud Assembly and Code Stream using the vRA API
  • Register OpenShift Cluster with Tanzu Mission Control
    • Using the API
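For context, the install-config.yaml that the pipeline builds for a vSphere IPI install looks roughly like the sketch below. All values are placeholders for illustration (cluster name, domain, vCenter details and VIPs are assumptions, not my actual environment); the api.{cluster_id}.{base_domain} and *.apps.{cluster_id}.{base_domain} DNS records from the pre-requisites should resolve to the apiVIP and ingressVIP respectively.

apiVersion: v1
baseDomain: example.com             # placeholder base_domain
metadata:
  name: ocp1                        # placeholder cluster_id
platform:
  vsphere:
    vCenter: vcenter.example.com
    username: openshift-svc@vsphere.local
    password: 'REPLACE_ME'
    datacenter: Datacenter
    defaultDatastore: Datastore1
    cluster: Cluster1
    network: VM-Network
    apiVIP: 192.168.10.5            # api.ocp1.example.com points here
    ingressVIP: 192.168.10.6        # *.apps.ocp1.example.com points here
pullSecret: '{"auths": {...}}'      # Pull Secret from your Red Hat Cloud account
sshKey: 'ssh-ed25519 AAAA...'       # public key for bastion access to the nodes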
Creating a Code Stream Pipeline to deploy an OpenShift Cluster and register the endpoints with vRA and Tanzu Mission Control
Create the variables to be used

First, we will create several variables in Code Stream; you could change the pipeline tasks to use inputs instead if you wanted.

Continue reading Deploying OpenShift clusters (IPI) using vRA Code Stream