Quick Tip: How to Reset NSX ALB Controller for a Fresh Configuration

In lab environments, NSX ALB controllers are frequently redeployed to test and retest setups. Redeploying an NSX ALB controller normally takes only a few minutes, but in a slow environment it can take 20-25 minutes. This handy tip can save you some quality time.

To reset a controller node to the default settings, log in to the node over SSH and run a single command.


TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging the Avi Kubernetes Operator (AKO), which delivers L4 and L7 load balancing to the Kubernetes API endpoint and to the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.
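As a hypothetical illustration (the Service name and labels below are made up), a plain Kubernetes Service of type LoadBalancer is all AKO needs in order to program a corresponding L4 virtual service on NSX ALB:

```yaml
# Hypothetical manifest: AKO watches Services of type LoadBalancer
# and creates a matching virtual service and pool on NSX ALB.
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # made-up name for illustration
  namespace: default
spec:
  type: LoadBalancer       # AKO allocates the VIP from the configured network/IPAM
  selector:
    app: web               # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
```

Once applied with kubectl, the allocated VIP appears in the Service's EXTERNAL-IP column, and the corresponding virtual service shows up in the NSX ALB UI.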

The Global Server Load Balancing (GSLB) function of NSX ALB enables load balancing for globally distributed applications and workloads (typically spread across different data centers and public clouds). GSLB offers efficient traffic distribution across widely scattered application servers, enabling an organization to run multiple sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in data centers, organizations are deploying them across multi-cluster, multi-site environments, which creates the need for a way to load-balance applications globally.

To meet this need, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters.

Container Service Extension 4.0 on VCD 10.x – Part 2: NSX Advanced Load Balancer Configuration

In part 1 of this blog series, I discussed the Container Service Extension 4.0 platform architecture and gave a high-level overview of a production-grade deployment. This post focuses on configuring NSX Advanced Load Balancer and integrating it with VCD.

I will not go through every step of the deployment and configuration, as I have already written an article on the same topic in the past. Instead, I will discuss the configuration steps I took to deploy the topology shown below.

Let me quickly go over the NSX-T networking setup before getting into the NSX ALB configuration.

I have deployed a new edge cluster on a dedicated vSphere cluster for traffic separation. This edge cluster resides in my compute/workload domain. The NSX-T manager managing the edges is deployed in my management domain. 

On the left side of the architecture, you can see I have one Tier-0 gateway and VRFs carved out for NSX ALB and CSE networking.

Replacing NSX ALB Certificates with Signed Certificates

In this post, I will walk through the steps of replacing the NSX ALB self-signed certificates with a CA-signed certificate. For this demonstration, I am using Active Directory Certificate Services in my lab: a Windows Server 2019 machine with the AD-integrated Certificate Services role configured.

Follow the procedure below to replace the NSX ALB certificates.

Step 1: Generate Certificate Signing Request (CSR)

A CSR includes information such as the domain name, organization name, locality, and country. The request also contains the public key that will be embedded in the issued certificate; the corresponding private key is generated alongside the CSR and never leaves the system. A CSR can be generated directly from the NSX ALB portal or with the OpenSSL utility.

To generate a CSR via the NSX ALB portal, go to Templates > Security > SSL/TLS Certificates, click the Create button, and select Controller Certificate from the drop-down menu.
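If you prefer the OpenSSL route, a minimal sketch follows. The key size, file names, and subject fields below are assumptions; adjust them to match your environment, and note that certificates for NSX ALB typically also need Subject Alternative Names, which recent OpenSSL versions can add with -addext "subjectAltName=DNS:your-fqdn".

```shell
# Generate a 2048-bit RSA private key and a matching CSR in one step.
# File names and subject values are placeholders for illustration.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout nsxalb-controller.key \
  -out nsxalb-controller.csr \
  -subj "/C=US/ST=CA/L=PaloAlto/O=Lab/CN=alb.lab.local"

# Review the request before submitting it to the CA.
openssl req -in nsxalb-controller.csr -noout -subject
```

The resulting .csr file is what you submit to the CA; keep the .key file safe, as it must be supplied together with the signed certificate when importing into NSX ALB.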

NSX ALB Integration with VCD-Part 2: NSX ALB & Infra Configuration

In the first post of this series, I discussed the design patterns that are supported for NSX ALB integration with VCD.

In this post, I will share the steps of the NSX ALB & Infra configuration before implementing the supported designs. 

Step 1: Configure NSX-T Constructs

1a: Deploy a couple of new Edge nodes to host the Tier-0 gateway that you will create for NSX ALB consumption.

Associate the newly deployed edge nodes with the existing Edge Cluster.

1b: Create a Tier-0 gateway and configure BGP. Also, ensure that Tier-1-connected segments are redistributed via BGP.

1c: Create a Tier-1 gateway and associate it with the Tier-0 gateway that you created in the previous step.

Ensure that the tier-1 gateway is configured to redistribute connected routes to the tier-0 gateway. 

1d: Create a DHCP-enabled logical segment for Service Engine management and connect it to the Tier-1 gateway that you created in the previous step.

NSX ALB Integration with VCD-Part 1: Design Patterns

Overview

NSX Advanced Load Balancer provides multi-cloud load balancing, web application firewall, application analytics, and container ingress services from the data center to the cloud. It is an intent-based software load balancer that provides scalable application delivery across any infrastructure. NSX ALB provides 100% software load balancing to ensure a fast, scalable, and secure application experience. It delivers elasticity and intelligence across any environment.

With the release of VCD 10.2, NSX Advanced Load Balancer integration is available for use by the tenants. The service provider configures NSX ALB and exposes load-balancing functionality to the tenants so that tenants can deploy load balancers in a self-service fashion. 

The latest release of VCD (10.3.1) supports NSX ALB versions up to 21.1.2. Please check the VMware product interop matrix before planning your deployment.

In this blog post, I will talk about NSX ALB design patterns for VCD and the steps for integrating ALB with VCD.

NSX ALB Upgrade Breaking AKO Integration

Recently I upgraded NSX ALB from 20.1.4 to 20.1.5 in my lab and observed odd behavior whenever I attempted to deploy or delete a Kubernetes workload of type LoadBalancer.

The Issue

On deploying a new Kubernetes application, AKO was unable to create a load balancer for it. In the NSX ALB UI, I could see that a pool had been created and a VIP assigned, but no virtual service (VS) was present. I also verified that the ‘ako-essential’ role had the necessary permission “PERMISSION_VIRTUALSERVICE” to create a new VS.

On attempting to delete a Kubernetes application, the application got deleted on the TKG side, but it left lingering objects (VS, pools, etc.) in the ALB UI. To investigate further, I manually tried deleting the server pool and captured the output using the browser's network inspect option.

As expected, the delete operation failed with an error stating that the object being deleted is associated with an ‘L4PolicySet’.

But the l4policyset itself was empty.


Quick Tip – Restricting SSH Access to NSX ALB Service Engines

By default, the user can connect directly to a service engine via SSH using the system’s admin credentials. If there is a security requirement to restrict SSH connections, it is possible to disable this access using the following CLI configuration:

1: Connect to the NSX ALB controller and gain shell access

admin@172-19-10-51:~$ shell
Login: admin
Password:

[admin:172-19-10-51]: >

2: Run the following commands to disable admin SSH access to the service engine.

[admin:172-19-10-51]: > configure serviceengineproperties

[admin:172-19-10-51]: seproperties> se_runtime_properties

[admin:172-19-10-51]: seproperties:se_runtime_properties> no admin_ssh_enabled

[admin:172-19-10-51]: seproperties:se_runtime_properties> save

[admin:172-19-10-51]: seproperties> save

With direct SSH access to Service Engines disabled, it is still possible for a user with the “Super User” privilege to remotely access a Service Engine’s shell through a secure tunnel from the Controller using the attach serviceengine command from the Avi controller CLI.

This automatically logs in to the Service Engine using an internal user account (avidebuguser) via PKI authentication; the Super User does not need to know the default admin account credentials.
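For reference, the tunnel is opened with a single command from the Controller CLI (the Service Engine name below is a placeholder):

```
[admin:172-19-10-51]: > attach serviceengine <SE-name>
```

From the resulting shell, you can troubleshoot the Service Engine directly without knowing its admin credentials.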

Global Load Balancing using NSX ALB in VMC

Overview

Global Server Load Balancing (GSLB) is the method of load balancing applications/workloads that are distributed globally (typically, multiple data centers and public clouds). GSLB enables the efficient distribution of traffic across geographically dispersed application servers. 

In a production environment, the corporate name server delegates one or more subdomains to Avi Load Balancer GSLB, which then owns these domains and responds to DNS queries from clients. DNS-based load balancing is implemented by creating a DNS Virtual Service. 

How Does GSLB Work?

Let’s understand how GSLB works with an example.

Two SDDCs are running in VMC, and each SDDC has local load balancing configured for a set of web servers. The two Virtual Services (SDDC01-Web-VS & SDDC02-Web-VS) each have a couple of web servers as pool members, and each Virtual Service's VIP is translated to a public IP via NAT.

Let’s assume the four web servers running across the two SDDCs serve the same web application, and you want to add global load balancing on top of the local load balancing.
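Conceptually, once the corporate DNS delegates a subdomain (gslb.lab.local below is a made-up name) to the Avi DNS virtual service, clients in both sites resolve the same FQDN, and the answer they receive depends on the GSLB algorithm and site health. An illustrative client check, with placeholder names:

```
# Resolve the globally load-balanced FQDN; with a round-robin
# algorithm, repeated queries alternate between the two site VIPs.
dig +short web.gslb.lab.local
```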

Load Balancing With Avi Load Balancer in VMC on AWS-Part 2

In the first post of this series, I discussed how Avi Controller & Service Engines are deployed in an SDDC running in VMC on AWS. 

In this post, I will walk through the steps of configuring a load balancer for web servers.

Lab Setup

The diagram is a pictorial representation of my lab setup.

Let’s jump into the lab and start configuring the load balancer. 

I have deployed a couple of web servers running on CentOS 7.

These are plain HTTP servers with a sample web page. 

Load Balancer Configuration

Create Session Persistence Profile

A persistence profile controls the settings that dictate how long a client will stay connected to one of the servers from a pool of load-balanced servers. Enabling a persistence profile ensures the client will reconnect to the same server every time, or at least for a desired duration of time. 

Cookie-based persistence is the most commonly used mechanism when dealing with web applications.
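To illustrate (the VIP address and cookie value below are placeholders), the first response from the virtual service carries a persistence cookie that the client replays on subsequent requests:

```
# First request: the load balancer selects a server and sets a
# persistence cookie in the response (placeholder address below).
curl -i http://<web-vip>/

# Replaying the returned cookie keeps subsequent requests on the
# same back-end server for the lifetime of the persistence entry.
curl -i -H "Cookie: <cookie-from-first-response>" http://<web-vip>/
```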