
Veeam Backup for Microsoft Azure – Getting Started: Setting up the Infrastructure

In this blog post, we will cover the following topics:

- What is Veeam Backup for Azure
- Getting Started
- - Architecture
- - Deploying from Azure Marketplace
- - Logging on for the first time
- - Connecting to your Microsoft Azure Subscriptions and Storage Accounts
- - Configuring a repository account
- Deploying worker VMs
- Monitoring
- Protecting your Veeam Backup for Azure Appliance
- Download Logs

The follow-up blog posts are:

- Configuring your first Backup Policy
- - How a backup policy works 
- - Creating a Backup Policy 
- - Viewing and Running a Backup Policy
- Restoring a backup
- - Viewing protected data 
- - File Level Recovery 
- - Virtual Machine Disk Restore 
- - Full VM Restore
- Integrating with Veeam Backup and Replication
- - Adding your Azure Repository to Veeam Backup and Replication 
- - Viewing your protected data 
- - What can you do with your data? 
- - - Restore/Recover/Protect

What is Veeam Backup for Azure?

If we look at the Microsoft document “Shared responsibility in the cloud”, we can see the very open statement:

  • Regardless of the type of deployment, the following responsibilities are always retained by you:
    • Data
    • Endpoints
    • Account
    • Access management

So, if you are always responsible for your data, that means you are responsible for protecting it, from both a security and a backup point of view.

Veeam Backup for Azure is a turnkey solution, available within the Azure Marketplace itself, that can quickly and securely protect your data, removing the need to spend hours designing a solution and configuring the software.

Architecture

[Image: Veeam Backup for Azure architecture]

There are three main components:

  • Controller Server

A Linux VM deployed into Azure, which runs the Veeam Backup for Azure software.

  • Backup Repositories

Azure blob storage accounts where your Azure VM backups will be saved. The following storage account types are currently supported:

[Image: Veeam Backup for Azure supported storage account types (image source)]
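
If you need to create a storage account to act as a repository, a minimal sketch using the Azure CLI might look like the following; the account name, resource group, and region are placeholders, and the kind/SKU should be checked against the supported account types above.

# Hedged sketch: create a general-purpose v2 storage account for use as a
# backup repository. All names and the region below are placeholders.
az storage account create \
  --name veeamrepodemo01 \
  --resource-group backup-rg \
  --location uksouth \
  --kind StorageV2 \
  --sku Standard_LRS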

  • Workers

These are Azure VMs, deployed automatically or manually by the Veeam Backup for Azure server, which are used for backing up and restoring the data. The number of workers can be scaled up and down as needed.

The Azure region that worker VMs are deployed to depends on the storage account they are linked to.

Each worker can process a single VM at a time; if a worker is idle for 10 minutes or more, it is decommissioned (when set up to auto-scale). Worker VMs run the following services: a Worker service, which is responsible for fetching data from Azure, and a File-Level Recovery service, used for mounting data from a backup to the worker VM to initiate file-level recovery.

[Image: Veeam Backup for Azure backup process (image source)]

Deploy Veeam Backup for Azure from the Azure Marketplace

There are two options for accessing the solution, which is driven via a web portal:

  • Directly via the public IP address
    • I recommend setting up firewall rules if you do this (see the sketch after this list)
  • Via a private IP address, using a VPN or Azure ExpressRoute
    • If you need a VPN solution, check out VeeamPN.
    • This removes the need to publicly expose the solution.
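
As a sketch of such a firewall rule, the following Azure CLI command allows HTTPS to the appliance only from a known address range; the NSG name, resource group, and source range are placeholders, and it assumes the portal is served over 443.

# Hypothetical example: restrict inbound portal access to a known range.
az network nsg rule create \
  --resource-group backup-rg \
  --nsg-name veeam-azure-nsg \
  --name AllowPortalFromOffice \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 443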

[Image: Veeam Backup for Azure Marketplace deployment complete]

Logging into the Veeam Backup for Azure Console

On your first login, you’ll provide the username and password configured during the deployment from the Marketplace.

In my example, I will be using the publicly assigned IP address to log into the Portal UI. Upon first logon you will need to accept the EULA.

[Images: first portal logon; accepting the EULA]

The interface is heavily wizard driven, which makes it simple to use and consume as a solution. If you’ve used Veeam Availability Orchestrator in the past, you’ll recognise similarities with the interface.

Logging into the solution for the first time, you’ll see this getting started screen, which makes it easy to understand how to operationalise the solution and start protecting your data.

[Image: the Getting Started screen on first logon]

Connecting to your Microsoft Azure Subscriptions and Storage Accounts

From the getting started page, we’ll click the first task to connect our Veeam Backup for Azure solution to our Microsoft Azure platform, which takes us to the screen shown below.

[Image: adding a Microsoft Azure account from the Getting Started page]


Veeam Backup for Azure – Service Endpoint from virtual network to Microsoft.Storage doesn’t exist

The Issue

After deploying a new Veeam Backup for Azure setup via the Microsoft Azure Marketplace, I was working through the configuration and deploying my worker instances, which are used to perform the backup of the virtual machines.

I hit the following error:

The Service Endpoint from virtual network {VNET} and subnet {Name} to Microsoft.Storage doesn't exist.

[Image: the Service Endpoint error shown during worker configuration]

The Cause

This was caused by my use of an existing storage account, located in a separate resource group, which I had created manually, meaning the prerequisites were not met.

The Fix

This is a quick and easy fix: log into your Azure Portal and browse to the storage account where you are deploying the Worker Instances.

  • Click on “Firewalls and Virtual Networks”
  • Select “Selected Networks”
    • This is recommended from a security perspective
  • Click “Add existing virtual network”
  • Input the details of the virtual network to be used by the Worker Instances
  • Click the “Enable” button
    • This will enable the Service Endpoint on your selected network

[Image: configuring the Service Endpoint on the storage account]

Once the Service Endpoint is enabled, you will see a status message in the highlighted green text box, and the status will change to enabled.

  • Click the “Add” button
  • And remember to click save on the “Firewalls and virtual networks” pane.
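
If you prefer to script the change, a rough Azure CLI equivalent of the portal steps above might look like this; the resource groups, network, and account names are all placeholders.

# 1. Enable the Microsoft.Storage Service Endpoint on the worker subnet.
az network vnet subnet update \
  --resource-group worker-rg \
  --vnet-name worker-vnet \
  --name worker-subnet \
  --service-endpoints Microsoft.Storage

# 2. Allow that subnet through the storage account firewall. The subnet is
#    passed as a full resource ID since it lives in a different resource group.
az storage account network-rule add \
  --resource-group storage-rg \
  --account-name veeamrepodemo01 \
  --subnet "/subscriptions/<sub-id>/resourceGroups/worker-rg/providers/Microsoft.Network/virtualNetworks/worker-vnet/subnets/worker-subnet"

# 3. Switch the account to "Selected networks" (do this last, so access
#    is not cut off before the rule above is in place).
az storage account update \
  --resource-group storage-rg \
  --name veeamrepodemo01 \
  --default-action Deny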

[Image: the Service Endpoint enabled on the selected network]

Going back to your Veeam Backup for Azure portal, you can click “Check Again” on the Worker Configuration Status, and you should see this is successful.

[Image: the worker configuration check passing]

 

Regards


Kubernetes command line: tips and tricks

In this blog post, I have collected a number of tips, tricks and snippets I’ve learned along the way whilst learning Kubernetes.

- Configure tab completion
- Selecting all namespaces in commands
- Restarting nodes
- Setting default storage class
- Resource usage
- Delete pods that are stuck terminating
- Using the watch command
- Troubleshooting
- - Run an interactive pod for debugging issues
- - - Alpine & BusyBox
- - Check etcd is running on master nodes
- - Get deployed pod image
- - Get Kubelet Service Logs
- - Get events from all namespaces, sorted by creation time
- Other Resources
- - Visual guide on troubleshooting Kubernetes deployments
- - Tool: Stern for tailing multiple Kubernetes objects logs
- - Useful Aliases to create for managing Kubernetes

I would also highly recommend the awesome Kubectl Cheat Sheet as one of your go-to references.

Configure Tab completion
source <(kubectl completion bash)
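
To make completion persist across sessions, you can append the same line to your shell profile:

echo 'source <(kubectl completion bash)' >> ~/.bashrc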
Selecting all namespaces in commands

Rather than using “--all-namespaces”, you can use “-A”:

kubectl get pods --all-namespaces

kubectl get pods -A
Restarting Nodes

SSH to the problematic node and run:

/etc/init.d/kubelet restart
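
Note that on systemd-based distributions the kubelet usually runs as a systemd unit, so the equivalent there is:

sudo systemctl restart kubelet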

Source

Setting default storage class

Remove default storage class setting

kubectl patch storageclass {SC_NAME} -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Configure storage class as default

kubectl patch storageclass {SC_NAME} -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Source
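
To confirm the change, list the storage classes; the current default is marked with “(default)” next to its name:

kubectl get storageclass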

Resource Usage

Requires metrics-server to be installed and running (github)

Pods:

#Check what pods are using the most memory in the cluster:
kubectl top pod --all-namespaces  | sort -rnk4 | head -40
 
#Check what pods are using the most CPU in the cluster:
kubectl top pod --all-namespaces  | sort -rnk3 | head -80

Nodes:

#Check which nodes are using the most memory in the cluster
#(note: nodes are not namespaced, so kubectl top nodes takes no --all-namespaces flag):
kubectl top nodes | sort -rnk4 | head -40

#Check which nodes are using the most CPU in the cluster:
kubectl top nodes | sort -rnk3 | head -80

Verify the Kubelet is exposing node metrics:

kubectl get --raw /api/v1/nodes/{Node_Name}/proxy/stats/summary

To get metrics-server working, I had to add the following two kubelet arguments to the deployment:

kubectl edit deployment metrics-server -n kube-system
#############
name: metrics-server
spec:
  containers:
  - args:
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls
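
After saving the edit, you can confirm the new pods have rolled out and that metrics are being returned:

kubectl rollout status deployment/metrics-server -n kube-system
kubectl top nodes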

[Image: the metrics-server deployment with the kubelet args added]

Delete pods that are stuck terminating
kubectl get pods --all-namespaces | grep Terminating | \
while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done
Using the Watch command

Really simple one, but when deploying things, sometimes you don’t get the feedback you need from the system. However, by putting the Linux watch command in front of your Kubernetes commands, you can:

watch -n 2 kubectl get pods -n {namespace}

In the above example, this command will refresh your page every 2 seconds and list out the available pods and status.
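
kubectl also has a built-in --watch flag, which streams changes as they happen rather than redrawing the output on an interval:

kubectl get pods -n {namespace} --watch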

Troubleshooting
Run an interactive pod for debugging

This will create a pod from one of the below images, which will be removed when you exit the session.

Alpine:

kubectl run -i --rm -t alpine-$USER --image=alpine --restart=Never -- /bin/sh

Press enter

BusyBox

kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh

Press enter

Source

[Image: running an interactive Alpine pod for troubleshooting]

Check etcd is running on master nodes

Check that the etcd pods have been created by the Kubelet:

sudo crictl pods --name=etcd-member

or 

sudo crictl ps -A

Check etcd logs on master nodes

sudo crictl logs $(sudo crictl ps --pod=$(sudo crictl pods --name=etcd-member --quiet) --quiet)

Source

Get deployed pod image
kubectl get pod {name} -n {namespace} -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}"

Example: 

root@k8s-master# kubectl get pods nginx -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}"

nginx map[running:map[startedAt:2020-06-10T15:44:40Z]] nginx:latest
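
If you only need the image names rather than the running state, a shorter variant reads them from the pod spec instead of the container statuses:

kubectl get pod nginx -o jsonpath='{.spec.containers[*].image}'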

Get Kubelet Service logs

SSH to your node and run the following

journalctl -f -u kubelet.service
Get events from all namespaces, sorted by creation time
kubectl get events -A  --sort-by='.metadata.creationTimestamp'
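When troubleshooting, it can also help to filter this down to warnings only:

kubectl get events -A --field-selector type=Warning --sort-by='.metadata.creationTimestamp'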
Other Resources

A visual guide on troubleshooting Kubernetes deployments

Tool: Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is colour coded for quicker debugging.

This can be more useful than the kubectl logs command, for which you need to know your individual pod’s name.

Tail logs of all pods of the deployment/service
 CMD: stern -n {Namespace} {deployment}
 
Same as above but starting with logs in the last minute
 CMD: stern -n {Namespace} {deployment} -s 1m

Useful aliases to create for managing Kubernetes (can be used without ZSH)

Regards


VMware Tanzu Mission Control – Getting started with your first cluster

In this blog post, we will cover the following topics:

- What is Tanzu Mission Control?
- So, this isn't just for VMware environments?
- Getting Started Tanzu Mission Control
- - TMC Resource Hierarchy
- - Creating a Cluster Group
- - Attaching a cluster to Tanzu Mission Control
- - Viewing your Cluster Objects
- - - Overview
- - - Nodes
- - - Namespaces
- - - Workloads
- Where can I demo/test/trial this myself?

The follow-up blog posts are:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

What is Tanzu Mission Control?

Tanzu Mission Control is a cloud offering which gives you a single point of control, monitoring, and management for your Kubernetes deployments, regardless of distribution and location (e.g. Tanzu Kubernetes Grid, OpenShift Container Platform, Azure Kubernetes Service, to name but a few).

Key Capabilities:

  • Manage the Kubernetes cluster lifecycle, from deployment through day-2 operations
  • Attach Clusters for centralized operations and management
  • Centralized policy management
    • Apply access, network and container registry policies consistently across your Kubernetes clusters and namespaces
  • Global visibility for diagnosing and troubleshooting issues with your Kubernetes clusters
  • Inspection runbooks to validate the configuration of your clusters
    • Current offerings are:
      • Conformance: validates the binaries running in your cluster to ensure they are properly configured and running.
      • CIS benchmark: evaluates your cluster against the CIS Benchmark for Kubernetes published by the Center for Internet Security.
      • Lite: a node conformance test to validate that your nodes meet the Kubernetes requirements.

So, this isn’t just for VMware environments?

Nope, this is a cloud and Kubernetes neutral offering. You can attach CNCF conformant Kubernetes clusters to Tanzu Mission Control no matter where they are running: on vSphere, in any public clouds, or through other Kubernetes vendors.

Getting Started Tanzu Mission Control

TMC Resource Hierarchy

In the Tanzu Mission Control resource hierarchy, there are three levels at which you can specify policies.

  • Organization
  • Object groups (Cluster groups and Workspaces)
  • Kubernetes objects (Clusters and Namespaces)

You can set policies directly on a given object, but each object can also inherit policies from its parent objects. So, pretty much what you’ve been used to in the past with policies and hierarchies.

Creating a Cluster Group

A Cluster Group is a logical object to bring together multiple Kubernetes clusters. You can set user access policies to be able to view/edit/control cluster group objects and their child objects (clusters).

Cluster groups provide an infrastructure view, and all clusters must be attached to a group.

To create a Cluster Group:

  • Select the Cluster Group from the navigation
  • Click New Cluster Group
  • Supply a name; the description and labels are optional and can be edited after creation

[Image: creating a Cluster Group]

[Image: the New Cluster Group pane]


VMware Tanzu Mission Control – Workspaces and Policies

In this blog post, we will cover the following topics:

- Tanzu Mission Control 
- - Workspaces 
- - - Creating a workspace
- - - Creating a managed Namespace
- - - Viewing a managed Namespace
- - Policy Driven Cluster Management
- - - Creating an Image Registry Policy
- - - Creating a Network Policy

The follow-up blog posts are:

Tanzu Mission Control 
- Getting Started Tanzu Mission Control 
- Cluster Inspections 
- Workspaces and Policies  
- Data Protection 
- Deploying TKG clusters to AWS 
- Upgrading a provisioned cluster 
- Delete a provisioned cluster 
- TKG Management support and provisioning new clusters
- TMC REST API - Postman Collection
- Using custom policies to ensure Kasten protects a deployed application

Workspaces

Workspaces provide an application view, where you logically group Kubernetes Namespaces together, regardless of the cluster to which they are attached.

This is in contrast to Cluster Groups, which are focused on the infrastructure view.

These Workspaces can be created to align with your projects or applications. From a hierarchy point of view, you would then authorize your users to these Workspaces, so that they can monitor and manage the Namespaces related to their function.

Creating a Workspace

Click the Workspace navigation view on the left-hand side, and then New Workspace.

[Image: the New Workspace view]

Specify your Workspace name and provide the optional description and labels; these can also be added after creation if needed.

[Image: creating a new Workspace]

Now you have a Workspace, but it’s no good without any associated Namespaces, so let’s continue.

Creating a managed Namespace

All Namespaces attached to a Workspace will be managed Namespaces under TMC.

You can create a managed Namespace in one of four places:

  • Within the Workspace Navigation view
  • Inside the Workspace Object itself
  • On the Namespace Navigation view
  • On the Cluster Object > Navigation Tab
