Tag Archives: Configure

vRealize Operations Header

How to Add vSphere Tags to vRealize Operations Alert Emails using a Custom Payload

Wondering how to add a virtual machine’s vSphere Tags to the emails sent out for alerts? I recently came across this Reddit post, so I decided to try out the Custom Payload feature in vRealize (Aria) Operations and share the steps I took to set this up.

Here’s how to configure a Payload Template and Notification to include the vSphere Tag:

Creating the custom payload template to include the vSphere Tag

To get started, within your vRealize Operations interface (SaaS or on-premises), go to:

  • Configure > Alerts
  • Click on the Payload Templates icon
  • Click Add to create a new template

vROPS - Custom Payload - Alerts - Payload Templates

  • Give your custom payload template a name and a description, and set which outbound method it’s tied to. For my example, it will be email.
  • Click Next
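
To give a sense of the end result before you build it, below is a rough, illustrative sketch of an email payload with the tag included. The {{...}} field names are placeholders of my own, not actual vROps parameter names; the fields available in the template editor (including the vSphere Tag property you want to surface) will differ.

Subject: {{Alert Definition}} - {{Criticality}} - {{Object Name}}

Alert:        {{Alert Definition}}
Triggered on: {{Object Name}}
vSphere Tag:  {{vSphere Tags}}     <-- placeholder for the tag property
Description:  {{Alert Description}}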

Continue reading How to Add vSphere Tags to vRealize Operations Alert Emails using a Custom Payload

vRealize Automation Header

vRealize Automation – Active Directory Integration – Configure LDAPS

In this blog post, I am going to cover the setup of the Active Directory integration with vRealize Automation using LDAPS.

Cloud Assembly supports integration with Active Directory servers, providing out-of-the-box creation of computer accounts in a specified Organizational Unit (OU) within Active Directory before a virtual machine is provisioned.

Note: to join the Guest OS itself to AD, you can use CloudConfig properties or a vSphere CustomizationSpec.

The official VMware documentation doesn’t really call out the LDAPS configuration, only LDAP, so after helping a customer configure this, I thought I’d quickly write something up.

To get started, log in to vRealize Automation and select Cloud Assembly.

  • Select the Infrastructure Tab
  • Select Integrations under the Connections header
  • Click the Add Integration button
  • Select Active Directory
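
Before filling in the integration details, it’s worth confirming that the domain controller is reachable over LDAPS (ldaps:// on port 636) from wherever vRealize Automation will connect. A quick check from a Linux host, using placeholder hostnames, bind account, and OU:

# Check that the DC presents a certificate on the LDAPS port
openssl s_client -connect dc01.example.com:636 -showcerts < /dev/null

# Optional: test an authenticated search over LDAPS (prompts for the bind password)
ldapsearch -H ldaps://dc01.example.com:636 -D "svc_vra@example.com" -W \
  -b "OU=Servers,DC=example,DC=com" "(objectClass=organizationalUnit)" dn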

vRealize Automation - Configure LDAPS - Cloud Assembly - Integrations - Active Directory

Continue reading vRealize Automation – Active Directory Integration – Configure LDAPS

Tanzu Blog Logo Header

Data Management for VMware Tanzu – Self-Service DBaaS

In my first blog post, I covered the prerequisites and how to deploy the components for the Data Management for VMware Tanzu platform.

In this blog post, we will cover using this new infrastructure for Self-Service Database-as-a-Service deployment and configuration.

Configure Database Templates and Instance Plans

The last actions are to publish the database templates and configure Instance Plans (if you have set up your Organisation to use instance plans).

  • Log in to your Provider Appliance as the Provider administrator account
  • Click Templates from the left-hand navigation pane
  • Select your Template, which has been synced from the Tanzu Network (or from your repo in an air-gapped configuration)
  • Publish the Template

Data Management for Tanzu - Publish Template

  • Click Instance Plans from the left-hand navigation pane
  • Click “Add New Plan”
  • Configure as necessary

Data Management for Tanzu Instance Plans

Configure additional Org Users

Continue reading Data Management for VMware Tanzu – Self-Service DBaaS

Tanzu Blog Logo Header

Data Management for VMware Tanzu – Getting Started

This blog post will cover deploying the infrastructure and components for Data Management for VMware Tanzu.

My second blog post will cover using this infrastructure for Self-Service Database-as-a-Service.

What is Data Management for VMware Tanzu?

Data Management for VMware Tanzu (DMS) is a newly released solution from VMware (July 2021), providing a data-as-a-service toolkit for on-demand provisioning and automated management of MySQL and PostgreSQL databases on vSphere platforms.

DMS is accessible through both a graphical UI and a REST API, to meet the consumption needs of administrators and developers alike.

DMS provides the ability to create and manage data services through a centralized platform in a self-service fashion, with the following features:

  • Simplified management for admins: DMS acts as a database fleet management tool, presenting a view of the organization’s database instances running on multi-cloud infrastructure.
  • Self-service capabilities for database users to create new database instances, or to operate on existing instances, safely and securely without requiring infrastructure or database expertise.
  • Full automation for provisioning data service instances, backups, security patches, and periodic updates of the data service engine.

Data Management for Tanzu Provider Home Page

Data Management for Tanzu Provider Create Database Page

Understanding the components

DMS is made up of the following architectural components:

  • Provider – the core appliance you will deploy, which offers the central UI and API for all users to interact with the data services and functions. It acts as the control plane for the other components.
  • Agent – these appliances are deployed to extend the control plane into the various vSphere environments, providing a point of presence for provisioning and management operations of the Services deployed.
  • Service – these are Photon OS appliances that host the deployed instance of the data service (database). They communicate with the Agent that deployed them via a private API. DMS currently supports deploying MySQL and PostgreSQL.
  • Template Repo – a set of Data Management for VMware Tanzu database templates published on the Tanzu Network. The Provider polls the Tanzu Network periodically for new templates; there is also a method to handle air-gapped environments.

S3 storage is required for several items, such as storing the templates, database configurations, and database backups.
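
For example, if AWS S3 is your object store, creating a bucket for DMS to use is a one-liner with the AWS CLI (the bucket name and region below are placeholders):

# Bucket for DMS artifacts such as templates, database configurations and backups
aws s3 mb s3://dms-tanzu-artifacts --region eu-west-2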

Full deployment models for the components can be found here.

Data Management for Tanzu Architecture

Understanding Organisations and User Access

DMS implements the concept of Organisations to provide a logical grouping of users. There are two types:

  • Provider Org – A type of organization to which one or more Provider Administrator users belong.
    • Only one Provider Org can exist in a single DMS installation.
    • This is automatically created during the deployment of the Provider Appliance.
    • The Provider Org name is the company name specified at deployment.
  • Agent Org – A type of organization with one or more Organization Administrator or Organization User members.
    • These orgs are created via the DMS UI/API, at any time, once the Provider Appliance has been deployed.

DMS pre-defines these three user roles:

  • Provider Administrator
    • This is the single Provider Role in the installation
    • Among other tasks, users in this role can import additional Provider Administrator users, create organizations, and create and import organization users
  • Organization Administrator
  • Organization User

The Provider Administrator user will assign a role to each DMS user that they create or import in an organization.

A user that is assigned the Organization Administrator role can manage all services in the organization to which they belong. A user assigned the Organization User role can manage only the services that they provision.

More detailed information on the User roles and responsibilities can be found here.

Getting Started

First and foremost, I’ll point you towards the official documentation as a reference to review alongside this blog post.

Prerequisites

There are always several things to get sorted before you dive right in! The official requirements are detailed here; I’m going to call out some of the more finicky pieces you need to be aware of.

Continue reading Data Management for VMware Tanzu – Getting Started

Kasten K10 Header

Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift

In this blog post I’m going to cover setting up multi-cluster support for Kasten when you’ve installed the software on multiple Kubernetes clusters.

One of the K10 clusters you have deployed will become the primary. You will use this cluster and its dashboard interface to access the multi-cluster UI.

  • The primary cluster defines policies and other configuration centrally. Centrally defined policies and configuration can then be distributed to designated clusters to be enacted.

Additional clusters are then added and are known as secondaries.

  • The secondary clusters receive policies and other configuration from the primary cluster. Once policies are distributed to a secondary, the local K10 installation enacts the policy. This ensures that the policy will continue to be enforced, even if disconnected from the primary.

Prerequisites:

  • Authentication
    • Token Authentication must be used
  • Network
    • Secondary K10’s ingress must be accessible by the primary
    • Secondary API Server must be accessible by the primary
  • Run the tool on a bastion host that has connectivity using kubectl to all of the clusters you want to bring together.
  • Download the K10MultiCluster tool
  • Set the tool as executable
  • Move the tool to your /usr/local/bin/ folder
curl -LJO https://github.com/kastenhq/external-tools/releases/download/4.0.9/k10multicluster_4.0.9_linux_amd64

chmod +x k10multicluster_4.0.9_linux_amd64

sudo mv k10multicluster_4.0.9_linux_amd64 /usr/local/bin/k10multicluster

Download K10Multicluster

Next, let’s list the available clusters we can connect to from our node:

kubectl config get-contexts

kubectl config get-contexts

Setup Primary Cluster

Now we are ready to set up our primary cluster by running the following command:

Continue reading Configuring Kasten Multi-Cluster Manager across Tanzu and OpenShift
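
For reference, registering the primary with the tool looks something like the sketch below. Treat it as a rough outline rather than the exact command from the full post; the context and display name are placeholders taken from the kubectl config get-contexts output, and the flags may differ between k10multicluster versions, so check the tool’s help output.

k10multicluster setup-primary \
  --context=tanzu-cluster-01 \
  --name=tanzu-primary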