Category Archives: General


vSphere with Tanzu – Can I disable DRS?

Can I disable DRS?

No.

Why can’t I disable DRS when Workload Management is enabled?

DRS is a mandatory feature for Workload Management; the WCP service relies on objects such as Resource Pools to operate.

  • Update – 29th October

The vSphere with Tanzu Documentation has now been updated with this statement.

Caution: Do not disable vSphere DRS after you configure the Supervisor Cluster. Having DRS enabled at all times is a mandatory prerequisite for running workloads on the Supervisor Cluster. Disabling DRS leads to breaking your Tanzu Kubernetes clusters.

What happens if I attempt to disable DRS?

If you disable DRS in a cluster where Workload Management is enabled, you will be presented with the following message.

The key part of the message below is “the cluster will enter an unrecoverable state.”

The system will let you proceed past this message and disable DRS. DON’T DO IT!

wcp - disable drs message

What if I need to stop VMs from being vMotioned in my cluster?

Keep DRS enabled, and set the DRS automation level to Manual or Partially Automated.

wcp - drs mode

I really need to disable DRS, what do I do?

Ring VMware Support and discuss with them your need and the situation you find yourself in.

How do I stop my admins accidentally disabling DRS?

This KB article may help. It is also worth setting appropriate RBAC permissions for anyone accessing your vCenter, rather than giving them full administrator rights that allow them to change settings they shouldn't.

If you are unsure about any of this, contact VMware Support.

Do you have a fantastic meme to end this blog post with?

Yes.

just because you can doesn't mean you should

Regards

Dean Lewis


How to Escape Strings in Terraform with a Dollar Sign ($)

The Issue

When using Terraform to perform an action where the input contains a $ character, you can end up with an error such as the one below.

│ Error: Invalid character
│ 
│  on main.tf line 104, in resource "vra_blueprint" "this":
│ 104:      network: '${resource.Cloud_Network_1.id}'
│ 
│ This character is not used within the language.

This happened to me when I was using the Terraform vRA Provider to create Cloud Templates (blueprints) in my vRA environment. The vRA cloud templates use a syntax such as ${input.something}, which clashes with the syntax used by Terraform to identify inputs.

The Cause

Terraform implements an interpolation syntax. Interpolations are wrapped in ${}, such as ${var.foo}.

The interpolation syntax is powerful and allows you to reference variables, attributes of resources, call functions, etc.

The Fix

You can escape interpolation with double dollar signs: $${foo} will be rendered as a literal ${foo}.

Terraform Interpolation Syntax example
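To make that concrete, here is a minimal sketch of the escape inside a vra_blueprint resource. The blueprint content, resource names and the project_id variable are illustrative assumptions, not the exact template from the error above.

resource "vra_blueprint" "this" {
  name       = "escape-example"
  project_id = var.project_id

  # The doubled dollar sign stops Terraform from interpolating the expression,
  # so vRA receives the literal string ${resource.Cloud_Network_1.id}.
  content = <<-EOT
    formatVersion: 1
    resources:
      Cloud_Network_1:
        type: Cloud.Network
        properties:
          networkType: existing
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          image: ubuntu
          flavor: small
          networks:
            - network: '$${resource.Cloud_Network_1.id}'
  EOT
}

When Terraform renders the heredoc, the Cloud Template is created with ${resource.Cloud_Network_1.id} intact, ready for vRA to evaluate at deployment time.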

Regards

Dean Lewis



Passing JSON into vRA Code Stream CI Task – MalformedJsonException

The Issue

Whilst working with a vRA Code Stream CI task, I needed to build a YAML file in my container, which I would use to provide the values to the CLI tool I was running. Within this YAML file, there is a section of JSON input (yep I know, it's a Red Hat thing!!!).

I wanted to pass in this JSON section as a vRA variable, as it contains my authentication details to the Red Hat Cloud Website.

So my vRA variable would be as below:

{"auths":{"cloud.openshift.com":{"auth":"token-key","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"token-key","email":"[email protected]"},"registry.redhat.io":{"auth":"token-key","email":"[email protected]"}}}

So my CI Task looked something like this:

cat << EOF > install-config.yaml
apiVersion: v1
baseDomain: simon.local
compute: 
- hyperthreading: Enabled 
  name: worker
  replicas: 1
  platform:
    vsphere: 
      cpus: 4
      coresPerSocket: 1
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
pullSecret: '${var.pullSecret}'
EOF

When running the pipeline, I kept hitting an issue where the task would fail with a message similar to the one below.

com.google.gson.stream.MalformedJsonException: Unterminated array at line 1 column 895 path $[39]

The Cause

This, I believe, is because the tasks are passed to the Docker host running the container via the Docker API in JSON format. The payload then contains my outer wrapping of YAML and, within that, more JSON, so the system gets confused by the various bits of JSON.

The Fix

To get around this issue, I encoded my JSON data in Base64 and saved the Base64 string to the vRA variable. Then, in my CI task, I added an additional line before creating the file, which creates an environment variable holding the decoded value of the Base64 provided by the vRA variable.
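To produce that Base64 string in the first place, the JSON just needs to be encoded before it is pasted into the vRA variable. A minimal sketch, assuming the pull secret JSON has been saved locally as pull-secret.json (a hypothetical filename):

# Encode the JSON as a single Base64 line (GNU coreutils; -w0 disables line wrapping)
base64 -w0 pull-secret.json

# On macOS, the BSD base64 has no -w flag, so strip any newlines instead
base64 -i pull-secret.json | tr -d '\n'

The resulting single-line string is what goes into the vRA variable.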

Below is my new CI Task code.

export pullSecret=$(echo ${var.pullSecret} | base64 -d)

cat << EOF > install-config.yaml
apiVersion: v1
baseDomain: simon.local
compute: 
- hyperthreading: Enabled 
  name: worker
  replicas: 1
  platform:
    vsphere: 
      cpus: 4
      coresPerSocket: 1
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
pullSecret: '$pullSecret'
EOF


Regards


vRA Code Stream – Preserving files and artifacts created in a CI Task

Whilst creating a pipeline and using CI tasks to run some CLI tools, I needed to save the files output by the container used for the CI task so I could use them once the pipeline had completed.

Code Stream has a feature for CI tasks called "Preserve Artifacts" to enable this, whereby files in your working directory are saved to the "/sharedPath" folder on the Docker host where your container runs.

Below I’m going to show you how to use this feature.

  • First, on your pipeline, configure a Working Directory

Code Stream - Preserve Artifacts - Pipeline - Workspace - Working Directory


Postman – Logging in results in losing my offline work

The Issue

When working with Postman offline or not signed in, and then choosing to sign in, you lose access to the Collections and Environments you have worked on previously.

The Cause

In later versions, Postman introduced the Scratch Pad. This is an offline area where your data is saved.

When you create a new account in the app, you should be presented with an option to move your data from your Scratch Pad.

If you already have an account to log into, you do not seem to get this option.

The Fix

  • Within the Postman application > Click the Settings Cog > Select "Scratch Pad"

Postman - Scratch Pad

Now you should be able to see your offline data. If you can, manually export your data, then change back to your workspace and import the data.

Postman - export collection

If you are still unable to find your data, I recommend you follow this article from the Postman support site on "how to recover my data". I did not personally have much success with this method.

Regards