NSX ALB Integration with VCD-Part 5: Load Balancing in Action

Welcome to the last post of this series. If you have been following along, you are by now familiar with how NSX ALB integrates with VCD to provide “Load Balancing as a Service” (LBaaS).

In this post, I will demonstrate how tenants can leverage NSX ALB to create load balancer constructs (Virtual Services, Pools, etc.).

If you haven’t read the previous posts in this series, I recommend you do so using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

3: Implementing Dedicated Service Engine Groups Design

4: Implementing Shared Service Engine Groups Design

Tenant vStellar has deployed a couple of servers connected to a routed network “Prod-GW”, with IP addresses 192.168.40.5 and 192.168.40.6 respectively.

Both servers run an HTTP web server and are accessible via their local IPs.

The tenant wants to load balance these web servers by leveraging NSX ALB.
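Before creating any load balancer constructs, it is worth confirming that both backends answer on their local IPs. Below is a minimal Python sketch, assuming the web servers listen on port 80 and the machine running the check has routed access to the Prod-GW network:

```python
# Quick sanity check of the two backends before fronting them with a VIP.
# Assumes port 80 and reachability to the 192.168.40.0/24 network.
import urllib.request

BACKENDS = ["192.168.40.5", "192.168.40.6"]

for ip in BACKENDS:
    try:
        with urllib.request.urlopen(f"http://{ip}/", timeout=5) as resp:
            print(f"{ip} -> HTTP {resp.status}")
    except OSError as exc:
        print(f"{ip} -> unreachable: {exc}")
```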

NSX ALB Integration with VCD-Part 4: Shared Service Engine Groups

Welcome to the 4th part of the NSX Advanced Load Balancer Integration with VMware Cloud Director series. The first post in this series covered Service Engine design topologies, while the second covered the process of enabling “Load Balancing as a Service” in VCD. The deployment of the Dedicated Service Engine design was demonstrated in the third post.

This post covers the implementation of the Shared Service Engine Groups design.

If you haven’t read the previous posts in this series, I recommend you do so using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

3: Implementing Dedicated Service Engine Groups Design

In the Shared Service Engine Group design, tenants’ Edge Gateways leverage a common Service Engine Group for load balancer and virtual service placement. Since VCD tenants can have overlapping org networks in their respective orgs, data traffic segregation is achieved by implementing VRFs in NSX ALB.
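To see this segregation from the NSX ALB side, you can list the VRF contexts on the controller. A rough Python sketch follows; the controller address, credentials, and API version header are placeholders, and basic authentication against the Avi REST API is assumed:

```python
# List the VRF contexts that NSX ALB maintains for tenant traffic segregation.
# Controller URL, credentials, and API version below are placeholders.
import requests

ALB = "https://alb-controller.example.com"
AUTH = ("admin", "password")

resp = requests.get(f"{ALB}/api/vrfcontext",
                    auth=AUTH,
                    headers={"X-Avi-Version": "21.1.2"},
                    verify=False)
resp.raise_for_status()
for vrf in resp.json().get("results", []):
    # Each tenant Edge Gateway maps to its own VRF context on the shared Service Engines.
    print(vrf["name"], vrf.get("cloud_ref", ""))
```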

NSX ALB Integration with VCD-Part 3: Dedicated Service Engine Groups

In the first post of this series, I discussed the supported designs for NSX ALB integration with VMware Cloud Director. Part 2 of this series described how to enable “Load Balancing as a Service” in VCD.

If you missed any of the previous posts in this series, I recommend that you read them using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

This blog post is focused on implementing the Dedicated Service Engine Groups design.

The diagram below shows a high-level overview of the Dedicated SEG design in VCD.

In this design, the Service Engine management network (eth0) is attached to a tier-1 gateway that is dedicated to NSX ALB management and provisioned by the service provider. When a tenant creates a Virtual Service, a logical segment corresponding to the VIP network is automatically created and attached to the tenant’s tier-1 gateway.
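One way to observe this behavior is to query the NSX-T Policy API for segments attached to the tenant’s tier-1 gateway after a Virtual Service is created. A small sketch, where the manager address, credentials, and tier-1 path are hypothetical:

```python
# List segments connected to a tenant tier-1 gateway, e.g. the auto-created VIP segment.
# The NSX manager URL, credentials, and tier-1 path below are placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")
TENANT_T1 = "/infra/tier-1s/vstellar-edge-gw"   # hypothetical tenant tier-1 path

segments = requests.get(f"{NSX}/policy/api/v1/infra/segments",
                        auth=AUTH, verify=False).json().get("results", [])
for seg in segments:
    if seg.get("connectivity_path") == TENANT_T1:
        print(seg["display_name"], seg.get("subnets"))
```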

NSX ALB Integration with VCD-Part 2: NSX ALB & Infra Configuration

In the first post of this series, I discussed the design patterns that are supported for NSX ALB integration with VCD.

In this post, I will walk through the NSX ALB and infrastructure configuration steps that need to be completed before implementing the supported designs.

Step 1: Configure NSX-T Constructs

1a: Deploy a couple of new Edge nodes to host the Tier-0 gateway that you will create for NSX ALB consumption.

Associate the newly deployed edge nodes with the existing Edge Cluster.

1b: Create a Tier-0 gateway and configure BGP. Also, ensure that Tier-1 connected segments are allowed to be redistributed via BGP.

1c: Create a Tier-1 gateway and associate it with the Tier-0 gateway that you created in the previous step.

Ensure that the tier-1 gateway is configured to redistribute connected routes to the tier-0 gateway. 

1d: Create a DHCP-enabled logical segment for Service Engine management and connect the segment to the tier-1 gateway that you created in the previous step.
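Steps 1c and 1d can also be driven through the NSX-T Policy API. The sketch below is a rough illustration only: the manager address, credentials, object IDs, IP ranges, transport-zone path, and DHCP profile path are all placeholders, and a DHCP server profile is assumed to already exist:

```python
# Minimal sketch of steps 1c/1d via the NSX-T Policy API (all names/paths are placeholders).
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")

def patch(path, body):
    r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()

# 1c: Tier-1 gateway linked to the ALB Tier-0, advertising connected segments upstream.
patch("/infra/tier-1s/alb-mgmt-t1", {
    "tier0_path": "/infra/tier-0s/alb-t0",
    "route_advertisement_types": ["TIER1_CONNECTED"],
})

# 1d: DHCP-enabled segment for Service Engine management, attached to that Tier-1.
patch("/infra/segments/alb-se-mgmt", {
    "connectivity_path": "/infra/tier-1s/alb-mgmt-t1",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/overlay-tz",          # placeholder TZ path
    "dhcp_config_path": "/infra/dhcp-server-configs/alb-dhcp",     # assumed pre-created profile
    "subnets": [{
        "gateway_address": "172.16.10.1/24",
        "dhcp_ranges": ["172.16.10.100-172.16.10.200"],
    }],
})
```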

NSX ALB Integration with VCD-Part 1: Design Patterns

Overview

NSX Advanced Load Balancer provides multi-cloud load balancing, web application firewall, application analytics, and container ingress services from the data center to the cloud. It is an intent-based, 100% software load balancer that delivers fast, scalable, and secure application delivery, with elasticity and intelligence across any infrastructure.

With the release of VCD 10.2, NSX Advanced Load Balancer integration is available to tenants. The service provider configures NSX ALB and exposes load balancing functionality to tenants so that they can deploy load balancers in a self-service fashion.

The latest release of VCD (10.3.1) supports NSX ALB versions up to 21.1.2. Please check the VMware product interop matrix before planning your deployment.

In this blog post, I will cover the NSX ALB design patterns for VCD and the steps to integrate ALB with VCD.

Tanzu Kubernetes Grid multi-cloud Integration with VCD – Greenfield Installation

With the release of Container Service Extension 3.0.3, service providers can integrate Tanzu Kubernetes Grid multi-cloud (TKGm) with VCD to offer Kubernetes as a Service to their tenants. TKGm integration, in addition to the existing support for native K8s and vSphere with Tanzu (TKGS), has truly transformed VCD into a developer-ready cloud.

With Tanzu Basic (TKGm & TKGS) on VCD, tenants have a choice of deploying K8s in three different ways:

  • TKGS: K8s deployment on vSphere 7, which requires the vSphere Pod Service.
  • TKGm: Multi-tenant K8s deployments that do not need the vSphere Pod Service.
  • Native K8s: Community-supported Kubernetes on VCD with CSE.

By offering multi-tenant managed Kubernetes services with Tanzu Basic and VCD, cloud providers can attract developer workloads to their cloud, starting with test/dev environments. Once developers have gained confidence in the K8s solution, application owners can leverage VCD-powered clouds to quickly deploy test/dev K8s clusters on-premises, accelerate their cloud-native app development, and transition to production environments.

Native Kubernetes in VCD using Container Service Extension 3.0

Introduction

VMware Container Service Extension (CSE) is an extension to Cloud Director that enables VCD cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE integration with VCD has allowed CSPs to provide a true developer-ready cloud offering to VCD tenants. Tenants can deploy a Kubernetes cluster in just a few clicks directly from the VCD portal.

Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Once a K8s cluster is available, developers can use their native Kubernetes tooling to interact with it.

To know more about the architecture and interaction of CSE components, please see my previous blog on this topic.

Container Service Extension 3.x went GA earlier this year and brought several new features and enhancements. One of them is support for Tanzu Kubernetes Grid multi-cloud (TKGm) for K8s deployments, unlocking the full potential of consistent upstream Kubernetes in VCD-powered clouds.

Upgrade Tanzu Kubernetes Grid from v1.3.x to 1.4.x

Tanzu Kubernetes Grid 1.4 is all set to be released today. This release brings a lot of new features, including (but not limited to):

  • Kubernetes versions 1.21.2, 1.20.8, and 1.19.12 are supported with the 1.4 release.
  • Support for NSX Advanced Load Balancer versions 20.1.3 & 20.1.6. 
  • New vSphere configuration variables “VSPHERE_REGION” and “VSPHERE_ZONE” are added to enable CSI storage for workload clusters in vSphere environments with multiple data centers or clusters.
  • Supports L7 ingress (using NSX ALB) for workload clusters.
  • AKO deployment is fully automated. There is no need to install it via Helm or custom YAML, as both AKO and the AKO Operator are provided as core packages.

It’s a good time to upgrade TKG in the lab, and in this blog post I will walk through the steps of upgrading TKGm from v1.3 to v1.4.

Upgrade Procedure

Step 1: Download and install the new versions of the Tanzu CLI and kubectl on the bootstrapper machine.
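Once the new CLI binaries are in place, the overall upgrade typically proceeds by upgrading the management cluster first and then each workload cluster. A rough sketch of that flow, wrapped in Python for illustration; the workload cluster name is a placeholder:

```python
# Illustrative TKG 1.3 -> 1.4 upgrade flow (the cluster name below is a placeholder).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["tanzu", "version"])                       # should now report the 1.4.x CLI
run(["kubectl", "version", "--client"])
run(["tanzu", "management-cluster", "upgrade", "--yes"])
for cluster in ["workload-cluster-1"]:          # placeholder workload cluster name
    run(["tanzu", "cluster", "upgrade", cluster, "--yes"])
```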

Tanzu Kubernetes Grid Ingress With NSX Advanced Load Balancer

NSX ALB delivers scalable, enterprise-class container ingress for containerized workloads running in Kubernetes clusters. The biggest advantage of using NSX ALB in a Kubernetes environment is that it is agnostic to the underlying Kubernetes cluster implementation. The NSX ALB controller integrates with the Kubernetes ecosystem via REST API and can therefore be used as an ingress and L4-L7 load balancing solution for a wide variety of Kubernetes implementations, including VMware Tanzu Kubernetes Grid.

NSX ALB provides ingress and load balancing functionality for TKG through AKO, a Kubernetes operator that runs as a pod in the Tanzu Kubernetes clusters. AKO translates the required Kubernetes objects into Avi objects and automates the implementation of ingresses/routes/services on the Service Engines (SEs) via the NSX ALB Controller.

The diagram below shows a high-level architecture of AKO interaction with NSX ALB.

AKO interacts with the Controller and Service Engines via API to automate the provisioning of Virtual Services, VIPs, and related objects.
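For example, when a Service of type LoadBalancer is created in the cluster, AKO picks it up and provisions the corresponding virtual service and VIP on NSX ALB. A minimal sketch using the Kubernetes Python client; the namespace, service name, and pod selector labels are placeholders:

```python
# Create a Service of type LoadBalancer that AKO will translate into an NSX ALB
# virtual service / VIP. Namespace, name, and selector below are placeholders.
from kubernetes import client, config

config.load_kube_config()                       # uses the current kubeconfig context

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb", namespace="default"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",                    # AKO provisions the VIP for this service
        selector={"app": "web"},                # placeholder pod label
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```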

Quick Tip: Disable vSAN Precheck During Workload Domain Upgrade in VCF

Before an upgrade bundle can be applied to a workload domain (Mgmt or VI), SDDC Manager triggers a precheck on the domain to identify and alert on any underlying issues, so that they can be remediated before the upgrade bundle is applied. In lab environments, one of the common precheck failures relates to vSAN HCL compatibility.

This typically happens because the lab is running VCF on unsupported hardware that is not present in the vSAN HCL.

During upgrade precheck on the workload domain, you will see the vSAN HCL status as Red, and SDDC Manager won’t let you upgrade the domain until the issue is fixed. 

You can force SDDC Manager to ignore the vSAN precheck by adding or modifying the entries shown below in the applications-prod.properties file, located in the directory “/opt/vmware/vcf/lcm/lcm-app/conf”.

Change the vSAN health check related entries from true to false.
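The full post lists the exact property names; since they are not shown in this excerpt, the helper below only illustrates the idea of flipping vSAN-related boolean flags in that file, and the substring match it uses is hypothetical:

```python
# Illustrative helper: flip vSAN-related "=true" properties to "=false" in the LCM
# properties file. The "vsan" substring match is a stand-in for the actual keys.
from pathlib import Path

props = Path("/opt/vmware/vcf/lcm/lcm-app/conf/applications-prod.properties")
updated = []
for line in props.read_text().splitlines():
    key = line.split("=", 1)[0].strip().lower()
    if "vsan" in key and line.rstrip().lower().endswith("=true"):
        line = line.split("=", 1)[0] + "=false"
    updated.append(line)
props.write_text("\n".join(updated) + "\n")
# The LCM service typically needs a restart for the change to take effect.
```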