Quick Tip: How to Reset NSX ALB Controller for a Fresh Configuration

In lab environments, NSX ALB controllers are frequently redeployed to test and retest a setup. Redeploying a controller normally takes only a few minutes, but in a slow environment it can take 20-25 minutes. This handy tip can save you some quality time.

To reset a controller node to the default settings, log in to the node over SSH and run the following command.
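
The exact one-liner is in the full post; as a sketch of the general shape, it is a single reset command executed from the controller's shell. The SSH address and script path below are assumptions from memory, not verified values, so confirm them against the full post for your NSX ALB version:

```bash
# SSH to the controller node (address is hypothetical).
ssh admin@alb-controller-01.lab.local

# Flush the configuration back to defaults in one shot.
# NOTE: the script path is an assumption -- use the exact command
# from the full post for your controller version.
sudo /opt/avi/scripts/flush_all_dbs.sh
```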

Read More

TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging the Avi Kubernetes Operator (AKO), which delivers L4 and L7 load balancing to the Kubernetes API endpoint and to the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.
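
In practice, this means a plain Kubernetes object is all a cluster user touches; AKO notices it and programs the corresponding virtual service on the controller. A minimal sketch (names and namespace are illustrative):

```bash
# A plain LoadBalancer Service; AKO watches for these and programs
# a matching L4 virtual service on the NSX ALB controller.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-app        # illustrative name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
EOF

# The EXTERNAL-IP is allocated by NSX ALB from its configured VIP network.
kubectl get svc demo-app -n default
```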

The Global Server Load Balancing (GSLB) function of NSX ALB enables load balancing for globally distributed applications/workloads, typically spread across data centers and public clouds. GSLB offers efficient traffic distribution across widely scattered application servers, enabling an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.
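
Under the hood, GSLB is DNS-based: clients resolve a global FQDN against the NSX ALB DNS virtual service, which answers with the VIP of a healthy site according to the configured algorithm. A quick illustration, with a hypothetical FQDN and addresses:

```bash
# Resolve a GSLB FQDN against the Avi DNS virtual service
# (10.10.10.5 is a hypothetical DNS VS address).
dig +short app.gslb.example.com @10.10.10.5

# Active-Active: answers rotate across the site VIPs.
# Active-Standby: the standby site's VIP is returned only after the
# primary site's pool members are marked down by health monitoring.
```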

With the growing footprint of containerized workloads in data centers, organizations are deploying these workloads across multi-cluster/multi-site environments, creating a need for a way to load-balance applications globally.

To meet this requirement, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters.
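
To give a flavor of how AMKO is driven, federation is expressed through its CRDs; a GlobalDeploymentPolicy selects which clusters and which labeled objects participate in GSLB. A rough sketch (the apiVersion and field names vary across AMKO releases, so treat this as illustrative only):

```bash
kubectl apply -f - <<'EOF'
apiVersion: amko.vmware.com/v1alpha2   # version differs across AMKO releases
kind: GlobalDeploymentPolicy
metadata:
  name: global-gdp
  namespace: avi-system
spec:
  matchRules:
    appSelector:
      label:
        app: gslb              # only objects carrying this label are federated
  matchClusters:
  - cluster: cluster1-admin    # kubeconfig context names are illustrative
  - cluster: cluster2-admin
EOF
```

Read More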

Container Service Extension 4.0 on VCD 10.x – Part 4: Tenant Operations

In the previous post in this series, I discussed the CSE configuration options that a service provider can use to provide Kubernetes-as-a-service to their tenants. In this post, I’ll go over how tenants can use the Container Service Extension plugin for Kubernetes cluster deployment in a self-service manner.

If you haven’t read the previous posts in this series, you can do so by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

3: Container Service Extension Configuration by Service Provider

Log in to the tenant's org to deploy a Kubernetes cluster. The user should be assigned the "Kubernetes Cluster Author" role. To launch the cluster creation wizard, navigate to Home > More > Kubernetes Container Clusters and click the New button.

Select the Kubernetes runtime for the cluster. CSE 4.0 only supports the Tanzu Kubernetes Grid runtime.

Choose the Kubernetes version and give the Kubernetes cluster a name.
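
The remaining wizard steps are covered in the full post. Once the cluster is up, the tenant can download its kubeconfig from the same Kubernetes Container Clusters view and work with it directly; a small sketch (the file name is illustrative):

```bash
# Point kubectl at the kubeconfig downloaded from the CSE plugin UI
# (file name is illustrative).
export KUBECONFIG=~/Downloads/kubeconfig-demo-cluster.txt

# Verify the new TKG cluster is reachable and healthy.
kubectl get nodes -o wide
kubectl cluster-info
```

Read More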

Container Service Extension 4.0 on VCD 10.x – Part 3: Service Provider Configuration

The first two posts in this series covered CSE architecture and NSX ALB deployment/configuration. This post focuses on the steps taken by a service provider to set up a CSE deployment.

You can read the previous posts in this series by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

At this time, it is assumed that the Service Provider has completed the following configurations in VCD:

  • vCenter is registered in VCD.
  • NSX-T is registered in VCD.
  • A Geneve-backed network pool is created in VCD.
  • Provider VDC has been created. 

The service provider workflow for CSE deployment includes the following tasks:

  1. Import the Tier-0 gateway/VRF created for CSE in NSX-T.
  2. Create an organization in VCD. This is a Service Provider-managed organization that hosts the Container Service Extension server (and any other extensions in the future) and is known as a Service/Solutions organization; a quick API sketch for this step follows.
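
The same org creation can be done through the VCD CloudAPI instead of the provider UI; a minimal sketch (the URL, token variable, and org names are placeholders):

```bash
# Create the Service/Solutions org via the VCD CloudAPI (values are placeholders).
curl -sk -X POST "https://vcd.example.com/cloudapi/1.0.0/orgs" \
  -H "Authorization: Bearer ${VCD_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "cse-solutions-org",
        "displayName": "CSE Solutions Org",
        "description": "Provider-managed org hosting the CSE server",
        "isEnabled": true
      }'
```
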
Read More

Container Service Extension 4.0 on VCD 10.x – Part 2: NSX Advanced Load Balancer Configuration

In part 1 of this blog series, I discussed Container Service Extension 4.0 platform architecture and a high-level overview of a production-grade deployment. This blog post is focused on configuring NSX Advanced Load Balancer and integrating it with VCD. 

I will not go through every step of the deployment and configuration, as I have already written an article on this topic in the past. Instead, I will discuss the configuration steps I took to deploy the topology shown below.

Let me quickly go over the NSX-T networking setup before getting into the NSX ALB configuration.

I have deployed a new edge cluster on a dedicated vSphere cluster for traffic separation. This edge cluster resides in my compute/workload domain. The NSX-T manager managing the edges is deployed in my management domain. 

On the left side of the architecture, you can see one Tier-0 gateway, with VRFs carved out for NSX ALB and CSE networking.
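
For reference, a VRF-backed Tier-0 like the ones in the diagram can be created through the NSX-T Policy API by pointing vrf_config at the parent Tier-0. A sketch under the assumption of NSX-T 3.x Policy API semantics (manager address, credentials, and gateway names are placeholders):

```bash
# Create a VRF gateway under an existing parent Tier-0 (all values are placeholders).
curl -sk -X PATCH "https://nsxt-mgr.example.com/policy/api/v1/infra/tier-0s/cse-vrf" \
  -u 'admin:changeme' \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "cse-vrf",
        "vrf_config": {
          "tier0_path": "/infra/tier-0s/parent-t0"
        }
      }'
```

Read More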

Container Service Extension 4.0 on VCD 10.x – Part 1: Introduction & Architecture

Introduction

VMware Cloud Director Container Service Extension (CSE) is an extension to VMware Cloud Director that enables cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE helps tenants quickly deploy Tanzu Kubernetes Grid clusters in their virtual data centers, in just a few clicks, directly from the tenant portal. By using Container Service Extension, customers can also use Tanzu products and services such as Tanzu Mission Control to manage their clusters.

Container Service Extension (CSE) has come a long way, and with each release the product keeps getting better. Folks who have worked with older versions of CSE know how painful the setup process was and how many manual steps it involved. With CSE 4.0, the provider workflow is simplified, and the installation can be done in less than 30 minutes. Kudos to the CSE engineering team.

CSE 4.0 Benefits

I want to list a few benefits that CSE 4.0 offers before getting into the architecture.

Read More

Quick Tip- Cleanup Failed Tasks from SDDC Manager Dashboard in VCF

Tasks in VCF might fail because one or more subtasks within the primary task have failed. Some of these tasks are not retriable and remain in a lingering state in the SDDC Manager dashboard.

The command provided in this blog post will help you clear such tasks from the dashboard.

Step 1: Fetch the failed task ID from the SDDC Manager interface.

Click on the failed task and notice the URL change in the browser; the task ID is displayed in the URL itself.

Make a note of the task ID.

Alternatively, you can run the below API call directly from the SDDC Manager VM.
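
The endpoint below is from memory and may differ between VCF builds, so verify it against the full post; the idea is that the internal task service answers on localhost, with no API token required:

```bash
# Run on the SDDC Manager VM itself; the internal task service
# answers on localhost (endpoint path is an assumption -- verify it).
curl -s http://localhost/tasks/registrations

# Look for entries with "status" : "FAILED" and note their "id" field.
```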

The output of this API call returns a list of the tasks. You can filter the failed tasks and get the task ID.

Step 2: Delete the failed task

Execute the below API call to delete the failed task from the SDDC Manager dashboard.
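
Again a sketch, with the same caveat (endpoint path is from memory; the task ID is hypothetical):

```bash
# Delete the failed task by its ID (the UUID below is hypothetical).
curl -s -X DELETE http://localhost/tasks/registrations/a1b2c3d4-e5f6-7890-abcd-ef1234567890
```

Read More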

Tip and Tricks for VCF Lab Deployment

In this post, I’ll go over a few tips/tricks that you may use throughout your VCF lab deployment to get the most out of this fantastic tool.

Tip 1: Bring down lab resource utilization

Most of us, I believe, run VCF as a nested lab, and because VCF requires a lot of computing power, this is one area we struggle with. Given the limited resources available, a full-fledged deployment is not always practical. NSX-T nodes, in my experience, are the most problematic component: VCF deploys several NSX-T nodes, and each one requires a lot of resources.

You can limit the number of NSX-T nodes in both the management and workload domains by following the instructions below:

Step 1: SSH into SDDC Manager as the vcf user and switch to the root user by running the command: su - root

Step 2: Modify application-prod.properties
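
The property edits themselves are in the full post; the general pattern is to edit the domain manager's properties file and restart its service so the change is picked up. The file path and service name below are assumptions, not verified values:

```bash
# On SDDC Manager, as root (see Step 1):
# Open the domain manager properties file (path is an assumption --
# confirm it in the full post for your VCF version).
vi /etc/vmware/vcf/domainmanager/application-prod.properties

# Add/adjust the NSX-T node-count property described in the full post,
# then restart the service so the new value is read (service name assumed).
systemctl restart domainmanager
```

Read More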

Quick Tip: Cleanup Unused Image Bundles in VCF

I recently downloaded the image bundles for vRealize components while working in my newly deployed VCF 4.4 environment, not realizing that SDDC Manager does not orchestrate the deployment of any vRealize Suite component except vRealize Suite Lifecycle Manager. While looking for a way to clean out the unneeded image bundles, I came across a useful out-of-the-box SDDC Manager feature.

The process outlined in this post will assist you in clearing out any partially downloaded image bundles or unnecessary bundles that SDDC Manager is not currently using. 

Step 1: SSH into SDDC Manager as the vcf user and switch to the root user by running the command: su - root

Step 2: Grab the unwanted image bundle ID from the UI

Step 3: Run the following command to clean up the unwanted bundle

where bundle_id is the ID of the unwanted bundle

Example:
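
A sketch of what the invocation looks like, assuming the bundle cleanup script that ships with the LCM application on SDDC Manager (script path is from memory; the bundle ID is hypothetical):

```bash
# Clean up an unwanted or partially downloaded bundle by its ID
# (script path is from memory; the bundle ID below is hypothetical).
python /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py 7c9a4f0e-1234-4d5e-9abc-0123456789ab
```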

Read More

Quick Tip: Deploy VCF Management Domain with Single NSX-T Node

This article will show you how to set up a VCF management domain with just one NSX-T Manager. This tip is useful for lowering the management domain footprint when resources are constrained, such as in a lab environment.

The below steps outline the process of deploying an SDDC with one NSX-T node.

Step 1: Fill in all the parameters in the VCF configuration workbook spreadsheet.

Step 2: Transfer the spreadsheet to the Cloud Builder VM using WinSCP or a similar utility.

Step 3: Use the following command to convert the spreadsheet to JSON format
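
The conversion is done with the SOS utility on the Cloud Builder VM; a sketch assuming the VCF 4.x SOS JSON generator flags (verify them on your build):

```bash
# On the Cloud Builder VM, convert the configuration workbook to JSON
# (flag names assume the VCF 4.x SOS json generator -- verify on your build).
cd /opt/vmware/sddc-support
./sos --jsongenerator --jsongenerator-input /tmp/VCF-4.4.xlsx --jsongenerator-design vcf-ems
```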

Where VCF-4.4.xlsx is the name of my spreadsheet; change the file name to reflect your environment.

Read More