If I now look at the pods in my “kasten-io” namespace, I can see they are being recreated, as the deployment artifacts have been updated with the new configuration, including the container images.
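If you want to follow the rollout from the CLI as well, a simple watch of the namespace is enough; this is a generic kubectl command, with kasten-io being the default install namespace.
# Watch the Kasten pods roll over to the upgraded version
kubectl get pods -n kasten-io --watch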
And finally looking back at my Kasten Dashboard for the cluster information, I can see I am now at the latest version.
In this blog post we will cover the following topics:
- Data Protection Overview
- Create an AWS Data Protection Credential
- Enable Data Protection on a Cluster
- Running a backup manually or via an automatic schedule
- Restoring your data
The follow-up blog posts are:
- Tanzu Mission Control
- - Getting Started with TMC
- - - What is Tanzu Mission Control?
- - - Creating a Cluster Group
- - - Attaching a cluster to Tanzu Mission Control
- - - Viewing your Cluster Objects
- - - Where can I demo/test/trial this myself?
- - Cluster Inspections
- - - What Inspections are available
- - - Performing Inspections
- - - Viewing Inspections
- - Workspaces and Policies
- - - Creating a workspace
- - - Creating a managed Namespace
- - - Policy Driven Cluster Management
- - - Creating Policies
TMC Data Protection Overview
Tanzu Mission Control implements data protection through the inclusion of Project Velero; this tool is not enabled by default. This blog post will take you through the setup.
Data is stored externally to an AWS location, with volume backups remaining as part of the cluster where you’ve connected TMC.
Currently there is no ability to back up and restore data between Kubernetes clusters managed by TMC.
Create an AWS Data Protection Credential
First, we need to create an AWS data protection credential so that TMC can configure Velero within your cluster to save the data externally to AWS.
If you are looking for supported options for protecting data to other locations, I recommend you either look at deploying Project Velero manually outside of TMC (losing access to the data protection features in the UI) or look at another enterprise service such as Kasten.io.
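For reference, a manual install with the Velero CLI looks roughly like the sketch below; the bucket, region, plugin version, and credentials file are placeholders for your own environment, not values TMC provides.
# Example only: install Velero directly against an AWS S3 bucket
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket my-backup-bucket \
  --backup-location-config region=eu-west-2 \
  --snapshot-location-config region=eu-west-2 \
  --secret-file ./credentials-velero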
On the Administration screen, click Accounts, and Create Account Credential.
Select > AWS data protection credential
Set your account name for easy identification, click to generate the template, and save the file to your machine.
The next steps will require configuration in the AWS console to create resources using CloudFormation so that Project Velero can export data to AWS. Here is the official VMware documentation on this configuration.
In the AWS Console, go to the CloudFormation service
Click to create a new stack
Click “Template is ready” as we will provide our template file from earlier.
Click to upload a template file
Select the file from your machine
Click next
Provide a stack name and click next
Ignore all the items on this page and click next
Review your configuration and click finish.
Once you’ve reviewed and clicked create/finish, you will be taken into the Stack itself.
You can click the Events tab and the refresh button to see the progress.
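If you prefer the AWS CLI to clicking through the console, the same stack can be created from the downloaded template with something like the commands below; the stack name and file name are placeholders, and the IAM capability flag is needed because the template creates IAM resources.
# Create the stack from the template generated by TMC
aws cloudformation create-stack \
  --stack-name tmc-data-protection \
  --template-body file://tmc-data-protection-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM

# Follow the stack events without refreshing the console
aws cloudformation describe-stack-events --stack-name tmc-data-protection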
Just a quick blog on how to get the Data Protection feature of Tanzu Mission Control working on Red Hat OpenShift. By default you will find that once the data protection feature is enabled, the pods for the Restic component of Velero error.
Enable the Data Protection Feature on your OpenShift cluster
You will see the UI change to show it’s enabling the feature.
You will see the Velero namespace created in your cluster.
However, the “Data Protection is being enabled” message in the TMC UI will continue to show until you intervene. If you list the pods in the Velero namespace, you will see they are in an error state.
This is because OpenShift enforces stricter security context constraints for containers out of the box than a vanilla Kubernetes environment.
The steps to resolve this are the same as for a native install of the open-source Project Velero to your cluster.
First we need to add the velero service account to the privileged SCC.
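On OpenShift this is a single oc command; it assumes TMC has created the components in the velero namespace with a service account named velero.
# Allow the Velero/Restic pods to run under the privileged SCC
oc adm policy add-scc-to-user privileged -z velero -n velero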
After this, if we run the command to get all pods under the Velero namespace again, we’ll see that they are replaced with the new configuration and running.
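For completeness, the pod listing referred to here is simply:
oc get pods -n velero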
Going back to our TMC Console, we’ll see the Data Protection feature is now enabled.
In this blog post we will cover how to configure Red Hat OpenShift to forward logs from the ClusterLogging instance to an external 3rd party system, in this case, VMware vRealize Log Insight Cloud.
Architecture
OpenShift Cluster Logging will have to be configured for accessing the logs and forwarding them to 3rd party logging tools. You can deploy the full suite:
- Visualization: Kibana
- Collection: FluentD
- Log Store: Elasticsearch
- Curation: Curator
However, to ship the logs to an external system, you will only need to configure the FluentD service.
To forward the logs from the internal trusted services, we will use the new Log Forwarding API, which is GA in OpenShift 4.6 and later (it was a tech preview in earlier releases, and the configuration YAMLs are slightly different, so read the relevant documentation version).
This setup will provide us with the architecture below. We will deploy the trusted namespace “openshift-logging” and use the Operator to provide a Log Forwarding API configuration that sends the logs to a 3rd party service.
For vRealize Log Insight Cloud, we will run a standalone FluentD instance inside of the cluster to forward to the cloud service.
The log types are one of the following:
- application: Container logs generated by user applications running in the cluster, except infrastructure container applications.
- infrastructure: Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
- audit: Logs generated by the node audit system (auditd) and the audit logs from the Kubernetes API server and the OpenShift API server.
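To give a feel for the Log Forwarding API, here is a minimal ClusterLogForwarder sketch that forwards all three log types to a FluentD forward endpoint; the output name and the service URL (pointing at the standalone FluentD instance we deploy next) are assumptions for illustration, not the exact values from the repository.
# Minimal ClusterLogForwarder example for the OpenShift 4.6 Log Forwarding API
cat <<EOF | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: vrli-fluentd
      type: fluentdForward
      url: 'tcp://fluentd.vrli-fluentd.svc:24224'
  pipelines:
    - name: all-logs-to-vrli
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - vrli-fluentd
EOF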
Prerequisites
- VMware vRealize Log Insight Cloud instance set up with Administrator access
- Red Hat OpenShift cluster deployed, with outbound connectivity for containers
- Download this GitHub repository for the configuration files
Deploy the standalone FluentD instance to forward logs to vRealize Log Insight Cloud
As per the above diagram, we’ll create a namespace and deploy a FluentD service inside the cluster; this will handle the logs forwarded from the OpenShift Logging instance and send them to the Log Insight Cloud instance.
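As a rough sketch, that boils down to creating the namespace and applying the manifests from the repository; the namespace and file names below are illustrative, so use whatever the downloaded configuration files actually specify.
# Create a namespace for the standalone FluentD forwarder
oc new-project vrli-fluentd
# Apply the FluentD configuration and deployment from the downloaded repository
oc apply -f fluentd-configmap.yaml -n vrli-fluentd
oc apply -f fluentd-deployment.yaml -n vrli-fluentd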
Creating a vRealize Log Insight Cloud API Key
First, we will create an API key for sending data to our cloud instance.
Expand Configuration on the left-hand navigation pane
Select “API Keys”
Click the “New API Key” button
Give your API key a suitable name and click Create.
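The FluentD instance needs this API key to authenticate to the cloud service; one way to get it into the cluster is as a Kubernetes secret that the deployment can reference. The secret and key names below are illustrative and should match whatever the configuration files expect.
# Store the vRealize Log Insight Cloud API key as a secret for FluentD to consume
oc create secret generic vrli-api-key \
  --from-literal=api_key='<paste-your-api-key-here>' \
  -n vrli-fluentd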
After building a brand new OpenShift 4.6.9 cluster, I noticed one of the pods was not running correctly:
oc get pods -n openshift-monitoring
.....
NAME READY STATUS RESTARTS AGE
cluster-monitoring-operator-f85f7bcb5-84jw5 1/2 CreateContainerConfigError 0 112m
Upon inspection of the pod:
oc describe pod cluster-monitoring-operator-XXX -n openshift-monitoring
I could see the following error message:
Error: container has runAsNonRoot and image has non-numeric user (nobody), cannot verify user is non-root
The Cause
There is a Red Hat article about this, but it is gated. The reason is that the cluster-monitoring-operator pod is wrongly assigned the nonroot SCC.
The Fix
Currently there is no permanent fix provided by Red Hat, but you can track this bug.
However, the workaround is simply to delete the pod. It will be recreated and load with the correct permissions.
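In practice that is just:
# Find the failing pod and delete it; its ReplicaSet recreates it with the correct permissions
oc get pods -n openshift-monitoring | grep cluster-monitoring-operator
oc delete pod cluster-monitoring-operator-f85f7bcb5-84jw5 -n openshift-monitoring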