Deploying Tanzu on VDS Networking - Part 1: HA Proxy Configuration

VMware Tanzu is a portfolio of products and solutions that lets customers build, run, and manage containerized (Kubernetes-orchestrated) applications. An earlier release of Tanzu was offered along with VCF and leveraged NSX-T networking. VMware later decoupled Tanzu from VCF; that offering is called vSphere with Tanzu and can be deployed directly on a vSphere VDS without NSX-T.

vSphere with Tanzu on VDS has the following limitations: 

  • No support for vSphere Pods when NSX-T is not used. 
  • No support for the Harbor Image Registry.

When Tanzu (Workload Management) is enabled on vSphere, a Supervisor Cluster consisting of three nodes is deployed for high availability, so you need some form of load balancing to distribute traffic coming to the Supervisor Cluster. VMware provides a customized version of HA Proxy as a software load balancer for the supervisor nodes.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Networking Requirements for HA Proxy

Before I jump into the lab and walk through the deployment and configuration steps, I want to discuss the networking prerequisites for HA Proxy.

You need 3 routable subnets in your environment for the following purposes:

  • One subnet will be used for the Management network. This is where vCenter, ESXi, the Supervisor Cluster, and the load balancer will live.
  • The second subnet will be used for the Workload network. This is where your TKG guest clusters will live.
  • The third subnet will be used for the Frontend network, which serves the virtual IP (VIP) range for the Kubernetes clusters.

Note: For lab/POC environments, the workload and frontend subnets can be merged into one, but make sure the VIP range and the IP range for the TKG guest clusters do not overlap (a quick way to check this is sketched below). This is explained in great detail here.
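To illustrate that overlap check, here is a minimal sketch using only Python's standard ipaddress module. The two ranges below are hypothetical examples, not values from my lab; substitute your own.

```python
import ipaddress

# Hypothetical example ranges -- substitute the values from your own lab.
vip_range      = ipaddress.ip_network("192.168.100.64/27")   # HA Proxy load balancer VIPs
workload_range = ipaddress.ip_network("192.168.100.96/27")   # TKG guest cluster nodes

if vip_range.overlaps(workload_range):
    print("Ranges overlap -- fix this before enabling Workload Management")
else:
    print("Ranges do not overlap -- safe to merge workload and frontend subnets")
```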

The diagram below, taken from the official VMware blog, explains the HA Proxy network architecture.

Here is what my lab topology looks like.

I am using the following VLANs/subnets in my lab:

HA Proxy Deployment & Configuration

HA Proxy is distributed in OVA format, and the latest version can be downloaded from the VMware Git repository.

OVA deployment is pretty simple, like any other VMware appliance, so I am not covering every step. I do want to call out a few important ones, though.

On the Configuration page, choose whether HA Proxy will be deployed with two network legs or three:

  • If you are using the same subnet for the TKG workload network and the HA Proxy VIPs, select the “Default“ configuration.
  • If you have separate subnets for the workload network and the frontend network, choose the “Frontend Network“ configuration.

On the Select Networks page, map each network to the correct port group. If HA Proxy is deployed in Default mode, the Frontend and Workload networks can map to the same port group.

Under the network config section, the IP addresses for the Management, Workload, and Frontend networks are specified in CIDR format.
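As a quick illustration of what “CIDR format” means here: a single string carries both the interface IP and the subnet prefix. The address below is just an example, not one of my lab values.

```python
import ipaddress

# Example only: a management interface entered as "172.16.10.20/24"
mgmt = ipaddress.ip_interface("172.16.10.20/24")

print(mgmt.ip)       # 172.16.10.20   -> the appliance's own IP
print(mgmt.network)  # 172.16.10.0/24 -> the subnet it lives in
print(mgmt.netmask)  # 255.255.255.0
```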

The load balancer IP range setting is crucial. Although you might have a full /24 network available, you assign only a subset of IPs for the load balancer VIPs. Make sure the range you specify here does not contain the HA Proxy frontend static IP that was configured on the previous page.

Let’s understand this with an example. 

To calculate the number of usable IPs in a given CIDR block, I use this tool.

If I specify 172.18.90.100/28 as the IP range in CIDR format, the usable range I get is 172.18.90.96-172.18.90.111, and the frontend static IP of HA Proxy is outside this range.

However, if I change the CIDR to 172.18.90.100/25 or 172.18.90.50/28, my static IP falls inside the range of IPs that HA Proxy can use for load balancing, and that will cause trouble later.
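If you would rather script the check than use an online calculator, here is a minimal sketch with Python's standard ipaddress module. The frontend static IP shown (172.18.90.50) is an assumed placeholder; use the address you actually assigned on the previous page.

```python
import ipaddress

# strict=False lets us pass 172.18.90.100/28 and get the containing network back
lb_range = ipaddress.ip_network("172.18.90.100/28", strict=False)
print(lb_range, "->", lb_range[0], "-", lb_range[-1])   # 172.18.90.96/28 -> .96 - .111

# Placeholder -- replace with your real HA Proxy frontend static IP
frontend_ip = ipaddress.ip_address("172.18.90.50")

print(frontend_ip in lb_range)                                                # False -> safe
print(frontend_ip in ipaddress.ip_network("172.18.90.100/25", strict=False))  # True  -> conflict
print(frontend_ip in ipaddress.ip_network("172.18.90.50/28",  strict=False))  # True  -> conflict
```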

And that’s it for this post. In part 2 of this series, I will cover the steps to enable Workload Management.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
