Global Load Balancing using NSX ALB in VMC

Overview

Global Server Load Balancing (GSLB) is a method of load balancing applications or workloads that are distributed globally (typically across multiple data centers and public clouds). GSLB enables the efficient distribution of traffic across geographically dispersed application servers.

In a production environment, the corporate name server delegates one or more subdomains to Avi Load Balancer GSLB, which then owns these domains and responds to DNS queries from clients. DNS-based load balancing is implemented by creating a DNS Virtual Service. 
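
To see what that looks like from a client's perspective, you can point a resolver at the DNS Virtual Service and query an FQDN in the delegated subdomain. Below is a minimal sketch using the dnspython library; the subdomain and the DNS Virtual Service IP are hypothetical placeholders, not values from an actual deployment.

    # Sketch: resolve an application FQDN directly against the Avi DNS Virtual Service.
    # Requires dnspython 2.x (pip install dnspython). The subdomain and the
    # DNS VS IP below are hypothetical lab placeholders.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.100.50"]  # IP of the Avi DNS Virtual Service

    # GSLB answers each query with the VIP of a healthy site, so repeated
    # queries show how traffic is steered across locations.
    for i in range(4):
        answer = resolver.resolve("web.gslb.lab.local", "A")
        print(f"Query {i + 1}: {[rr.address for rr in answer]}")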

How Does GSLB Work?

Let’s understand how GSLB works using an example.

Two SDDCs are running in VMC, and each SDDC has local load balancing configured for a set of web servers. The two Virtual Services (SDDC01-Web-VS & SDDC02-Web-VS) each have a couple of web servers as pool members, and the VIP of each Virtual Service is translated to a public IP via NAT.

Let’s assume the four web servers running across the two SDDCs are serving the same web application, and you want global load balancing along with local load balancing. Read the rest
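
As a preview of where the full post goes, here is a rough sketch of how a GSLB service fronting both site VIPs might be created through the Avi Controller REST API. The controller address, credentials, FQDN, and public VIPs are hypothetical lab values, and the payload fields can differ between Avi/NSX ALB releases, so treat it as an illustration rather than a copy-paste configuration.

    # Sketch: create a GSLB service that load balances across the two SDDC sites.
    # Controller address, credentials, FQDN and VIPs are hypothetical lab values;
    # field names may vary across Avi/NSX ALB releases.
    import requests

    CONTROLLER = "https://avi-controller.lab.local"
    AUTH = ("admin", "VMware1!")

    gslb_service = {
        "name": "Web-GSLB-Service",
        "domain_names": ["web.gslb.lab.local"],
        "groups": [
            {
                "name": "SDDC-Sites",
                "priority": 10,
                "algorithm": "GSLB_ALGORITHM_ROUND_ROBIN",
                "members": [
                    # Public IPs that NAT to the SDDC01-Web-VS and SDDC02-Web-VS VIPs
                    {"ip": {"type": "V4", "addr": "203.0.113.10"}, "enabled": True},
                    {"ip": {"type": "V4", "addr": "198.51.100.10"}, "enabled": True},
                ],
            }
        ],
    }

    resp = requests.post(
        f"{CONTROLLER}/api/gslbservice",
        json=gslb_service,
        auth=AUTH,
        headers={"X-Avi-Version": "20.1.1"},  # match your controller version
        verify=False,  # lab only: controller uses a self-signed certificate
    )
    resp.raise_for_status()
    print(resp.json().get("uuid"))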

Simplify Your Avi Load Balancer Deployment in VMC on AWS using EasyAvi

VMC on AWS is an easy way to consume a VMware SDDC on the go. Spinning up infrastructure has never been so easy.

NSX-T is one of the critical pieces of the SDDC and equips customers to use core networking features such as

  • Routing/Switching (North-South & East-West)
  • Firewall (Gateway & Distributed)
  • VPN (Policy & Route Based)
  • Load Balancer (Edge Based)

Applications are becoming more complex by the day, and high availability and load balancing are a must for these complex applications.

Although the NSX-T Edge-based load balancer is pretty good, it doesn’t offer next-generation load balancer features. Competitors such as F5 and NetScaler were already providing advanced load balancing features with their products. VMware stepped into the next-gen load balancer arena via the acquisition of Avi Networks, which was doing great work in this field. Avi Networks has since been rebranded as NSX Advanced Load Balancer.

Avi Load Balancer (NSX ALB) integration with VMC on AWS is now fully supported. Read the rest

Load Balancing With Avi Load Balancer in VMC on AWS-Part 2

In the first post of this series, I discussed how Avi Controller & Service Engines are deployed in an SDDC running in VMC on AWS. 

In this post, I will walk through the steps of configuring a load balancer for web servers.

Lab Setup

The diagram is a pictorial representation of my lab setup.

Let’s jump into the lab and start configuring the load balancer. 

I have deployed a couple of web servers running on CentOS 7.

These are plain HTTP servers with a sample web page. 
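
The sample page simply reports the server's hostname, which makes it easy to see which pool member answered a given request. Something like the small stand-in below (run on each CentOS VM) is all that is needed for testing; it is a sketch of my test setup, not a production web server.

    # Sketch: tiny HTTP test server that returns its own hostname, which makes
    # it easy to verify which pool member served a load-balanced request.
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EchoHostHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = f"<h1>Served by {socket.gethostname()}</h1>".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Listen on all interfaces; use port 80 if running as root.
        HTTPServer(("0.0.0.0", 8080), EchoHostHandler).serve_forever()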

Load Balancer Configuration

Create Session Persistence Profile

A persistence profile controls the settings that dictate how long a client stays connected to a particular server from a pool of load-balanced servers. Enabling a persistence profile ensures that the client reconnects to the same server every time, or at least for a desired duration.

Cookie-based persistence is the most commonly used mechanism when dealing with web applications. Read the rest
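
To give an idea of what gets created in this step, here is a rough sketch of defining a cookie-based persistence profile through the Avi Controller REST API. The controller address, credentials, cookie name, and timeout are hypothetical lab values, and field names can differ between Avi/NSX ALB releases.

    # Sketch: create an HTTP-cookie persistence profile on the Avi Controller.
    # Controller address, credentials, cookie name and timeout are hypothetical
    # lab values; field names may vary across Avi/NSX ALB releases.
    import requests

    CONTROLLER = "https://avi-controller.lab.local"
    AUTH = ("admin", "VMware1!")

    profile = {
        "name": "Web-Cookie-Persistence",
        "persistence_type": "PERSISTENCE_TYPE_HTTP_COOKIE",
        "http_cookie_persistence_profile": {
            "cookie_name": "SDDC-Web-Persistence",
            "timeout": 15,  # minutes the client keeps landing on the same server
        },
    }

    resp = requests.post(
        f"{CONTROLLER}/api/applicationpersistenceprofile",
        json=profile,
        auth=AUTH,
        headers={"X-Avi-Version": "20.1.1"},  # match your controller version
        verify=False,  # lab only: controller uses a self-signed certificate
    )
    resp.raise_for_status()
    print(resp.json().get("uuid"))

The resulting profile can then be attached to the pool created later in the post so that client requests stick to the same web server.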

Load Balancing With Avi Load Balancer in VMC on AWS-Part 1

Load balancers are an integral part of any data center, and most enterprise applications are clustered for high availability and load distribution. The choice of load balancer becomes very critical when applications are distributed across data centers/clouds.

This blog series focuses on demonstrating how we can leverage Avi Load Balancer for local and global load balancing of applications in VMC on AWS.

If you are new to Avi Load Balancer, then I encourage you to learn about this product first. Here is the link to the official documentation for Avi Load Balancer.

Also, I have written a few articles around this topic, and you can read them from the links below:

1: Avi Load Balancer Architecture

2: Avi Controller Deployment & Configuration

3: Load Balancing Sample Application

The first two parts of this blog series are focused on the deployment & configuration of Avi LB in a single SDDC for local load balancing. Read the rest

vSphere with Tanzu Leveraging NSX ALB-Part-2: Deploy Supervisor Cluster

In the last post of this series, I discussed Avi Controller deployment and configuration. In this post, I will demonstrate the supervisor cluster deployment, which lays the foundation for the TKG clusters.

Since I have explained the process of workload management in this post, I am not going to go through each step again. However, I want to discuss the load balancer options.

  • When configuring the Load Balancer, select the type as Avi and enter the IP address of the Avi Controller VM followed by port 443.
  • Specify the credentials that you configured at the time of the Avi Controller deployment.
  • Grab the certificate from the Avi Controller and paste it here.

Note: To grab the certificate thumbprint of the Avi Controller, navigate to Templates > Security > SSL/TLS Certificates and click on the arrow button in front of the self-signed certificate that you created for the controller VM.

Copy the contents of the certificate by clicking on the copy to clipboard button. Read the rest
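
If you prefer the command line over the UI, the certificate can also be pulled straight off the controller's HTTPS endpoint in PEM form. A small sketch is below; the controller FQDN is a placeholder for my lab value.

    # Sketch: fetch the Avi Controller's certificate in PEM format so it can be
    # pasted into the Workload Management wizard. The FQDN is a lab placeholder.
    import ssl

    controller = "avi-controller.lab.local"
    pem_cert = ssl.get_server_certificate((controller, 443))
    print(pem_cert)  # copy this PEM block into the Load Balancer configuration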

vSphere with Tanzu Leveraging NSX ALB-Part-1: Avi Controller Deployment & Configuration

With the release of vSphere 7.0 U2, VMware introduced support for Avi Load Balancer (NSX Advanced Load Balancer) with vSphere with Tanzu, and thus fully supported load balancing is now enabled for Kubernetes. Before vSphere 7.0 U2, HA Proxy was the only supported load balancer when vSphere with Tanzu needed to be deployed on vSphere Distributed Switch (vDS) based networking.

HA Proxy was not suited to production workloads due to its limitations. NSX ALB is a next-generation load balancing solution, and its integration with vSphere with Tanzu enables customers to run production workloads in the Kubernetes cluster.

When vSphere with Tanzu is enabled using NSX ALB, the Controller VM has access to the Supervisor Cluster, Tanzu Kubernetes Grid clusters, and the applications/services deployed in the TKG Cluster. 

The diagram below shows the high-level topology of NSX ALB & vSphere with Tanzu.

In this post, I will cover the steps of deploying & configuring NSX ALB for vSphere with Tanzu. Read the rest

Deploying Tanzu on VDS Networking-Part 3: Create Namespace & Deploy TKG Cluster

In the last post of this series, I demonstrated the deployment of the supervisor cluster, which acts as a base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Let’s get started.

Create Namespace

A namespace enables a vSphere administrator to control the resources that are available for a developer to provision TKG clusters. Using namespaces, vSphere administrators stop developers from consuming more resources than what is assigned to them.

To create a namespace, navigate to Menu > Workload Management and click on Create Namespace.

  • Select the cluster where the namespace will be created and provide a name for the namespace.  
Read the rest

Deploying Tanzu on VDS Networking-Part 2: Configure Workload Management

In the first post of this series, I talked about vSphere with Tanzu architecture and explained the deployment & configuration of the HA Proxy appliance, which acts as a load balancer for the TKG guest clusters and the Supervisor Cluster.

In this post, I will walk through the steps of enabling workload management, which deploys & configures the supervisor cluster.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Note: Before enabling Workload Management, make sure you have a vSphere with Tanzu license available. If you don’t have the license, you can register for this product to get a 60-day evaluation license.

Prerequisites for Enabling Workload Management

  • A vSphere cluster created, with both DRS and HA enabled on the cluster.
  • A vSphere Distributed Switch created, with all ESXi hosts added to the VDS.
Read the rest

Deploying Tanzu on VDS Networking-Part 1: HA Proxy Configuration

VMware Tanzu is a portfolio of products and solutions that allows customers to build, run, and manage containerized (Kubernetes-controlled) applications. An earlier release of Tanzu was offered with VCF, leveraging NSX-T networking. Later, VMware decoupled Tanzu from VCF; the resulting solution is called vSphere with Tanzu and can be deployed directly on a vSphere VDS without NSX-T.

vSphere with Tanzu on VDS has the following limitations: 

  • No support for vSphere Pods if NSX-T is not used.
  • No support for Harbor Image Registry.

When Tanzu (Workload Management) is enabled on vSphere, a supervisor cluster consisting of 3 nodes is deployed for high availability, so you need some form of load balancing to distribute traffic coming to the supervisor cluster. VMware uses a customized version of HA Proxy as a software load balancer for the supervisor nodes.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Networking Requirements for HA Proxy

Before I jump into the lab and walk through the deployment/configuration steps, I want to discuss the networking prerequisites for HA Proxy first. Read the rest

Nested ESXi Gotchas with VCF

Nested ESXi is a great way to quickly spin up a test/demo environment and fiddle around with things in the lab. I have been doing so for quite a while now. VCF is very dear to my heart, and because VCF needs a hell of a lot of resources, I always test new versions/features in my nested lab.

Nested ESXi doesn’t always behave nicely and sometimes gives you a hard time, as I encountered recently in one of my VCF deployments.

What was the problem and how did it start?

The problem was with the ESXi UUID, due to which the vSAN configuration was failing. I will talk more about this later in this post.

To save time, I created a nested ESXi template following this article. I deployed a few ESXi hosts and everything was working fine. One day I tweaked my template to inject some advanced parameters and booted the template VM. This generated a new UUID entry for ESXi in /etc/vmware/esx.conf. Read the rest
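
Before going into the details, a quick way to see which system UUID each nested host ended up with (and to spot any duplicates carried over from a template) is to query the hosts over SSH. The sketch below assumes SSH is enabled on the nested hosts; the host names and credentials are hypothetical lab values.

    # Sketch: compare ESXi system UUIDs across nested hosts to spot duplicates.
    # Assumes SSH is enabled on the hosts; host names and credentials are
    # hypothetical lab values. Requires paramiko (pip install paramiko).
    from collections import Counter
    import paramiko

    HOSTS = ["esxi01.lab.local", "esxi02.lab.local", "esxi03.lab.local"]
    USER, PASSWORD = "root", "VMware1!"

    uuids = {}
    for host in HOSTS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=USER, password=PASSWORD)
        _, stdout, _ = client.exec_command("esxcli system uuid get")
        uuids[host] = stdout.read().decode().strip()
        client.close()

    for uuid, count in Counter(uuids.values()).items():
        if count > 1:
            hosts_with_dupe = [h for h, u in uuids.items() if u == uuid]
            print(f"Duplicate UUID {uuid} on: {', '.join(hosts_with_dupe)}")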