vSphere with Tanzu Leveraging NSX ALB-Part-1: Avi Controller Deployment & Configuration

With the release of vSphere 7.0 U2, VMware introduced support for the Avi Load Balancer (NSX Advanced Load Balancer) in vSphere with Tanzu, bringing fully supported load balancing to Kubernetes. Prior to vSphere 7.0 U2, HA Proxy was the only supported load balancer when vSphere with Tanzu was deployed on vSphere Distributed Switch (vDS) based networking.

HA Proxy was not supported for production workloads because of its limitations. NSX ALB is a next-generation load-balancing solution, and its integration with vSphere with Tanzu enables customers to run production workloads in their Kubernetes clusters.

When vSphere with Tanzu is enabled leveraging NSX ALB, the NSX ALB Controller VM has access to the Supervisor Cluster, the Tanzu Kubernetes Grid (TKG) clusters, and the applications/services deployed on top of the TKG clusters.

The diagram below shows the high-level topology of NSX ALB and vSphere with Tanzu.

In this post, I will cover the steps for deploying and configuring NSX ALB for vSphere with Tanzu.

Deploying Tanzu on VDS Networking-Part 3: Create Namespace & Deploy TKG Cluster

In the last post of this series, I demonstrated the deployment of the Supervisor Cluster, which acts as the base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Let’s get started.

Create Namespace

A namespace enables a vSphere administrator to control the resources that are available to developers for provisioning TKG clusters. Using namespaces, vSphere administrators prevent developers from consuming more resources than they have been assigned.

To create a namespace, navigate to Menu > Workload Management and click Create Namespace.

  • Select the cluster where the namespace will be created and provide a name for the namespace.  
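
For reference, here is a rough sketch of what the developer-side workflow looks like once the namespace exists: log in to the Supervisor Cluster with the vSphere plugin for kubectl and apply a TanzuKubernetesCluster manifest. The namespace, cluster name, VM class, and storage class below are placeholder values, not values from this post; use whatever is assigned to your namespace.

```
# Log in to the Supervisor Cluster using the vSphere plugin for kubectl
kubectl vsphere login --server=<supervisor-cluster-vip> \
  --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

# Switch to the namespace created above
kubectl config use-context demo-namespace

# Apply a minimal TanzuKubernetesCluster spec (class/storageClass must match
# what is assigned to the namespace in your environment)
cat <<'EOF' | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: demo-namespace
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
EOF
```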

Deploying Tanzu on VDS Networking-Part 2: Configure Workload Management

In the first post of this series, I talked about the vSphere with Tanzu architecture and explained the deployment and configuration of the HA Proxy appliance, which acts as a load balancer for the TKG guest clusters and the Supervisor Cluster.

In this post, I will walk through the steps of enabling Workload Management, which deploys and configures the Supervisor Cluster.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Note: Before enabling Workload Management, make sure you have a vSphere with Tanzu license available. If you don’t have a license, you can register for the product to get a 60-day evaluation license.

Prerequisites for Enabling Workload Management

  • A vSphere cluster is created, with both DRS and HA enabled on the cluster.
  • A vSphere Distributed Switch is created, and all ESXi hosts are added to the VDS.

Deploying Tanzu on VDS Networking-Part 1: HA Proxy Configuration

VMware Tanzu is a portfolio of products and solutions that allows customers to build, run, and manage containerized (Kubernetes-controlled) applications. An earlier release of Tanzu was offered along with VCF, leveraging NSX-T networking. VMware later decoupled Tanzu from VCF; the resulting solution is called vSphere with Tanzu and can be deployed directly on a vSphere VDS without NSX-T.

vSphere with Tanzu on VDS has the following limitations: 

  • No support for vSphere Pods when NSX-T is not used.
  • No support for Harbor Image Registry.

When Tanzu (Workload Management) is enabled on vSphere, a Supervisor Cluster consisting of three nodes is deployed for high availability, so you need some form of load balancing to distribute traffic coming to the Supervisor Cluster. VMware uses a customized version of HA Proxy as a software load balancer for the supervisor nodes.
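
The HA Proxy appliance ships as an OVA and can be deployed through the vSphere Client or from the command line. As a hedged sketch only (the OVA filename and option values are placeholders, not specifics from this post), a govc-based import looks roughly like this:

```
# Dump the OVF properties (management/frontend/workload networking, root
# password, etc.) into an options file and edit it before deploying
govc import.spec haproxy.ova > haproxy-options.json

# Deploy the appliance using the edited options file
govc import.ova -options haproxy-options.json -name haproxy haproxy.ova
```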

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Networking Requirements for HA Proxy

Before I jump into the lab and walk through the deployment and configuration steps, I want to discuss the networking prerequisites for HA Proxy first.

Nested ESXi Gotchas with VCF

Nested ESXi is a great way to quickly spin up a test/demo environment and fiddle around with things in the lab. I have been doing so for quite a while now. VCF is very dear to my heart, and because VCF needs a hell of a lot of resources, I always test new versions/features in my nested lab.

Nested ESXi doesn’t always behave nicely and sometimes gives you a hard time, and I encountered this recently in one of my VCF deployments.

What was the problem and how did it start?

The problem was with the ESXi UUID, due to which the vSAN configuration was failing. I will talk more about this later in the post.

To save time, I created a nested ESXi template following this article, deployed a few ESXi hosts, and everything was working fine. One day I tweaked my template to inject some advanced parameters and booted the template VM. This generated a new UUID entry for ESXi in /etc/vmware/esx.conf.
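
If you run into similar vSAN weirdness with cloned nested hosts, a quick check is to compare the UUID each host booted with, since hosts cloned from the same template can come up with identical UUIDs. The commands below are a hedged, lab-only sketch (including a commonly used workaround of deleting the stale entry so a fresh UUID is generated on reboot), not necessarily the exact fix described later in the full post:

```
# Show the UUID this host is currently using
esxcli system uuid get
grep "/system/uuid" /etc/vmware/esx.conf

# Lab-only workaround: back up esx.conf, drop the duplicated UUID entry,
# and reboot so the host regenerates a unique one
cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
sed -i '/\/system\/uuid/d' /etc/vmware/esx.conf
reboot
```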

NSX-T Routing With OSPF

Introduction

NSX-T 3.1.1 introduced support for the OSPFv2 routing protocol on Tier-0 gateways. This was one of the most awaited features for quite some time, and it removes one of the major hindrances that was stopping customers from migrating to NSX-T.

There are lots of customers who are still running NSX-V in their environment with OSPF as the routing protocol in their infrastructure. Now that NSX-T supports OSPF, these customers can do a greenfield deployment of NSX-T and switch workloads from NSX-V to NSX-T using an L2 bridge, without many changes to their physical network.

Since this feature is pretty new, it will be interesting to see how soon customers adopt this in their environment. 

Disclaimer: This post is inspired by an original blog post written by Peter Milchov.

Before jumping into the lab, let’s revisit some important facts associated with OSPF support.

  • NSX-T 3.1.1 supports OSPFv2 only.
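
To give a flavor of the configuration itself, here is a rough sketch of what enabling OSPF on a Tier-0 gateway can look like through the Policy API. The URI and payload fields are my assumptions for 3.1.1 and should be verified against the NSX-T API reference; the manager FQDN, gateway ID, and locale-services ID are placeholders:

```
# Assumed Policy API path for the Tier-0 OSPF configuration; verify the
# path and fields against the NSX-T 3.1.1 API reference before using
curl -k -u admin -X PATCH \
  'https://nsx-manager.lab.local/policy/api/v1/infra/tier-0s/t0-gw-01/locale-services/default/ospf' \
  -H 'Content-Type: application/json' \
  -d '{
        "enabled": true,
        "ecmp": true,
        "graceful_restart_mode": "HELPER_ONLY"
      }'
```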

What’s New in HCX 4.0

VMware HCX 4.0 is all set to be released today. HCX 4.0 is a major release that introduces new functionality and enhancements.

In this post, I will explain these feature additions. The new features can be broadly categorized as follows:

  • Migration Enhancements
  • Network Extension Enhancements
  • Interconnect Enhancements

Let’s discuss these one by one.

Migration Enhancements

1: Migration Event Details: When a VM migration is in progress, the HCX migration platform captures high-level status for the major phases like transfer, continuous replication, and switchover, and the status reported in the UI is always shown as “Transfer in Progress,” “Switchover in Progress,” etc.

The actual details about “what state the migration is in right now” and “how long it has been in that state” are hidden from the end user. Also, the platform does not report the exact reason why a migration fails or why it is taking a long time to move to the next state.

Layer 2 Bridging With NSX-T

In this post, I will be talking about the Layer 2 Bridging functionality of NSX-T and discussing the use cases and design considerations to keep in mind when planning to implement this feature. Let’s get started.

NSX-T supports L2 bridging between overlay logical segments and VLAN-backed networks. When workloads connected to an NSX-T overlay segment require L2 connectivity to VLAN-connected workloads or need to reach a physical device (such as a physical gateway, load balancer, or firewall), an NSX-T Layer 2 bridge can be leveraged.

Use Cases of Layer 2 Bridging

A few of the use cases that come to mind are:

1: Migrating workloads connected to VLAN port groups to NSX-T overlay segments: Customers who are looking to migrate workloads from a legacy vSphere infrastructure to the SDDC can leverage Layer 2 Bridging to move their workloads seamlessly.

Some of the challenges associated with such migrations are re-IPing workloads and migrating firewall rules, NAT rules, etc.

What’s New in VMware Cloud Foundation 4.2

VMware Cloud Foundation 4.2 will be out soon, and like every other release, 4.2 is coming with exciting new features. In this post, I will explain a few of them. So let’s get started.

1: Static IP Pool for NSX-T TEPs: This is probably one of the most awaited features of VMware Cloud Foundation. VCF 4.2 allows you to leverage static IP pools for NSX-T Host Overlay (TEP) networks as an alternative to DHCP, so you no longer need to maintain an additional infrastructure component (a DHCP server). Both the management domain and VI workload domains can now make use of static IPs.

In the VCF configuration workbook, you will now see an additional section where you can specify the IP range for the host TEPs.

2: Release Versions UI: A new tab (Available Versions) has been added to the SDDC Manager UI, which shows the Bill of Materials, new features, and end-of-general-support dates for each available VCF release.

Configure VCD in HCX via API

In my last post, I documented the HCX installation workflow for VCD-based clouds. In this post, I will show how to do the same via API. 

Once HCX Cloud Manager has been deployed and booted up, you can use the APIs below to integrate VCD with HCX.
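
The numbered steps below outline the calls. Before making them, you authenticate to HCX Cloud Manager and capture a session token; the sketch below shows that common pattern only. The hostname and credentials are placeholders, and the session endpoint and header name are my assumptions, so check them (and the certificate-import and VCD-configuration endpoints) against the HCX API reference:

```
# Authenticate to HCX Cloud Manager and capture the session token
# (assumed /hybridity/api/sessions endpoint and x-hm-authorization header)
TOKEN=$(curl -sk -D - -o /dev/null \
  -X POST 'https://hcx-cloud.lab.local/hybridity/api/sessions' \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"<password>"}' \
  | awk 'tolower($1)=="x-hm-authorization:" {print $2}' | tr -d '\r')

# Subsequent calls (certificate import, VCD configuration) reuse the token:
#   curl -sk -X POST 'https://hcx-cloud.lab.local/<endpoint>' \
#        -H "x-hm-authorization: $TOKEN" -H 'Content-Type: application/json' \
#        -d @payload.json
```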

1: Import VCD Certificate

Response Output

2: Configure VCD
