NSX 4.2 Multitenancy Series – Part 10: Integration with NSX Advanced Load Balancer

Welcome to Part 10 of the NSX Multi-tenancy series. The previous post discussed distributed security in NSX VPCs and how to implement gateway and distributed firewall policies.

This post discusses NSX VPC integration with NSX Advanced Load Balancer (ALB). 

If you are not following along, I encourage you to read the earlier parts of this series via the links below:

1: NSX Multi-tenancy Introduction

2: Multi-tenancy Design Models

3: Creating NSX Projects

4: Distributed Security in NSX Project

5: NSX Virtual Private Cloud Overview

6: NSX VPC Networking

7: Creating NSX VPCs

8: Resource Sharing in NSX VPC

9: NSX VPC Security

The integration between NSX VPC and NSX Advanced Load Balancer allows application owners to provision load balancers on demand in a self-service manner. To support NSX multi-tenancy, a new configuration option, “Enable VPC Mode,” has been introduced in the NSX Cloud configuration in NSX ALB. The Enterprise Admin, who holds the System Admin role in NSX ALB, configures the NSX Cloud with VPC mode and the Service Engine networks.

Current Environment

Before jumping into the NSX ALB configuration, let’s review the current state of the environment. 

A single provider Tier-0 gateway is deployed in Active-Active mode and is shared by all projects/VPCs. The edge cluster on which the Tier-0 gateway is instantiated is also shared across all projects.

A dedicated Tier-1 gateway in distributed-only mode is created in the Default space, and route advertisement is configured as shown below.

Logical segments for the ALB Service Engine management and data networks are connected to this Tier-1 gateway and do not use DHCP for IP address assignment.

Note: DHCP can be configured on the segments if you require it. I want to use IP pools in the ALB configuration, so I am not using DHCP.
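The SE segments above can also be expressed against the NSX Policy API. Below is a minimal sketch that builds the PATCH payload for one such segment without DHCP; the gateway address, Tier-1 path, and transport zone path are hypothetical lab placeholders, not values from this post.

```python
# Sketch: NSX Policy API payload for an SE segment with no DHCP.
# All paths and addresses are illustrative placeholders. The payload
# would be PATCHed to /policy/api/v1/infra/segments/<segment-id>.

def build_se_segment(gateway_cidr: str, tier1_path: str, tz_path: str) -> dict:
    """Return a ROUTED segment payload with no DHCP ranges configured."""
    return {
        "type": "ROUTED",
        "connectivity_path": tier1_path,       # the Tier-1 gateway created for ALB
        "transport_zone_path": tz_path,        # default overlay transport zone
        "subnets": [
            {"gateway_address": gateway_cidr}  # no "dhcp_ranges" key -> DHCP stays off
        ],
    }

payload = build_se_segment(
    "192.168.40.1/24",                         # hypothetical SE mgmt gateway
    "/infra/tier-1s/alb-tier1",
    "/infra/sites/default/enforcement-points/default/transport-zones/<tz-id>",
)
```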

There is only one VPC for the engineering project, “Eng-Dev.” The project workloads connect to the VPC’s Active/Standby Tier-1 gateway, which uses the shared edge cluster provided by the Enterprise Admin.

NSX ALB Configuration

I am not covering the details of the NSX ALB deployment and initial configuration, as I have written about this topic previously. You can read the article here.

A single-node NSX ALB controller cluster (v30.2.2) is deployed for the VPC integration. The controller node is deployed on the infrastructure management network.

Note: As of today, there is a 1:1 mapping between the NSX ALB controller and the NSX Manager in VPC mode.

vCenter and NSX credentials are created in advance for the integration with NSX. 

NSX ALB Cloud Configuration

To integrate the NSX ALB controller with NSX, you create and configure an NSX Cloud connector. I am covering only high-level steps here, as I have previously blogged on this topic. You can read the article here.

Log in to the NSX ALB controller, navigate to Infrastructure > Clouds > Create, and select NSX Cloud.

  • Specify the NSX Cloud name.
  • Ensure that DHCP is selected. This enables automatic IP allocation for the auto-plumbed VPC data networks.
  • Specify the object name prefix. The Avi Controller creates several objects in NSX using this prefix.

Note: Hyphens are not supported in prefix names. 
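Since the prefix rule is easy to trip over, here is a tiny sketch of the kind of check you can run before submitting the cloud configuration. The exact server-side validation may be stricter; the alphanumeric-plus-underscore assumption is mine.

```python
import re

# Sketch: pre-validate an NSX Cloud object name prefix.
# Assumption: the prefix must be alphanumeric (underscores allowed) and,
# per the note above, must not contain hyphens.
PREFIX_RE = re.compile(r"^[A-Za-z0-9_]+$")

def valid_obj_name_prefix(prefix: str) -> bool:
    return bool(PREFIX_RE.match(prefix))

print(valid_obj_name_prefix("avi_vpc"))   # True
print(valid_obj_name_prefix("avi-vpc"))   # False: hyphens are rejected
```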

Provide the NSX manager IP address/FQDN and select the NSX credentials created previously.

  • Configure the management network for the ALB Service Engine (SE). Select the overlay transport zone where you created the segments for ALB SE management. 

Note: For NSX multi-tenancy, only the default overlay transport zone is supported, and the ALB SE management network should be created in this transport zone only. 

  • Select the tier-1 gateway that you created for ALB and the SE management network.
  • Select the checkbox “Enable VPC Mode” to integrate NSX ALB with NSX VPC.

To provide load balancing for the NSX default space, create a data network in the default transport zone and attach it to the Tier-1 gateway created for NSX ALB. Select the Tier-1 gateway and the overlay segment dedicated to the data network.

Provide the vCenter details where the SEs will be deployed. This vCenter is configured as a compute manager in NSX. Also, specify the content library where the SE images can be pushed.

Note: The content library should exist before the cloud connector configuration.

Leave the IPAM and DNS details empty. In the VPC mode, NSX provisions the subnet, IPAM, and DHCP automatically to simplify the overall consumption and configuration.

Save the cloud configuration. 
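Pulled together, the settings above map to a single cloud object on the Avi API (POST /api/cloud). The sketch below is an assumption-laden outline: the `nsxt_configuration` field names follow the Avi NSX-T cloud schema as I understand it, the VPC-mode flag is deliberately omitted because its API name may vary by version, and all IDs are hypothetical lab values. Verify against your controller’s API schema before use.

```python
# Sketch of the NSX Cloud connector payload described above.
# ASSUMPTIONS: "vtype", "obj_name_prefix", "nsxt_configuration" and the
# nested network-config keys follow the Avi NSX-T cloud schema as I
# recall it; check them against /api/cloud on your controller. The
# Tier-1 and segment IDs are hypothetical lab values.

def build_nsx_cloud(name: str, prefix: str, nsxt_url: str) -> dict:
    return {
        "name": name,
        "vtype": "CLOUD_NSXT",                 # NSX Cloud type
        "obj_name_prefix": prefix,             # no hyphens allowed
        "dhcp_enabled": True,                  # needed for auto-plumbed VPC networks
        "nsxt_configuration": {
            "nsxt_url": nsxt_url,
            "management_network_config": {
                "tz_type": "OVERLAY",          # only the default overlay TZ is supported
                "overlay_segment": {
                    "tier1_lr_id": "/infra/tier-1s/alb-tier1",
                    "segment_id": "/infra/segments/alb-se-mgmt",
                },
            },
            "data_network_config": {
                "tz_type": "OVERLAY",
            },
        },
    }

cloud = build_nsx_cloud("NSX-Cloud", "avi_vpc", "nsx-mgr.lab.local")
```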

The status of the NSX Cloud turns green after a short time.

Configure Networks for IPAM

Since I am not using DHCP for ALB SE management and data networks, I need to create IP pools for these networks. If you are using DHCP on the logical segments, you can just configure the subnet prefix; IP pools are not required.
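As a sanity check before typing ranges into the UI, the usable pool boundaries can be derived from the subnet prefix. A small stdlib sketch, using a hypothetical lab subnet and reserving the first few addresses for the gateway and other infrastructure:

```python
import ipaddress

# Sketch: derive a static IP pool range from a subnet prefix, skipping
# the first `reserve` host addresses (gateway, infra, etc.).
# The subnet below is a hypothetical lab value.
def pool_range(cidr: str, reserve: int = 10):
    """Return (pool_start, pool_end) for the usable tail of the subnet."""
    net = ipaddress.ip_network(cidr, strict=False)
    hosts = list(net.hosts())                  # .1 through .254 for a /24
    return str(hosts[reserve]), str(hosts[-1])

start, end = pool_range("192.168.40.0/24")
print(start, "-", end)   # 192.168.40.11 - 192.168.40.254
```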

Navigate to Infrastructure > Network and select the NSX Cloud from the drop-down menu of cloud selection.

Click on the pencil button to edit the networks. 

Subnet and IP pool configuration for ALB SE management network.

Subnet and IP pool configuration for ALB SE data network.

Both networks are now configured with IP pools.

After the NSX cloud connector is configured, the tier-1 gateway where the ALB data network is connected is mapped as a dedicated VRF context.

To enable Virtual Service VIP reachability, create a default route in the SE data VRF context pointing the next hop to the data network’s gateway.
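The same default route can be expressed as a static route entry on the Avi VRF context object (/api/vrfcontext). The field names below follow that schema as I recall it and should be verified against your controller; the next-hop address is a hypothetical data network gateway.

```python
# Sketch: default-route entry for the SE data VRF context.
# ASSUMPTION: field names follow the Avi /api/vrfcontext static_routes
# schema; the next hop is a hypothetical data network gateway IP.
def default_route(next_hop: str) -> dict:
    return {
        "prefix": {
            "ip_addr": {"addr": "0.0.0.0", "type": "V4"},
            "mask": 0,                               # 0.0.0.0/0 -> default route
        },
        "next_hop": {"addr": next_hop, "type": "V4"},
        "route_id": "1",
    }

route = default_route("192.168.50.1")
```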

Create Service Engine Group

A Service Engine Group hosts the Service Engine VMs and defines their configuration, such as HA mode, sizing, placement scope, scale, and reservations.

Select the NSX Cloud from the cloud selection menu and click Create. 

Configure the Service Engine Group as per below. 

  • HA Mode: Active-Active
  • Virtual Service Placement: Compact

Select the vCenter server configured in the NSX Cloud configuration and choose the cluster/datastore where the SE VMs will be deployed.
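The Service Engine Group settings above translate to a payload for POST /api/serviceenginegroup. Treat the sketch below as an assumption: the enum names for HA mode and placement follow the Avi SE group schema as I understand it (Active/Active elastic HA, “Compact” placement), and the sizing value is a lab placeholder.

```python
# Sketch: SE group payload for the settings described above.
# ASSUMPTIONS: "HA_MODE_SHARED_PAIR" maps to Elastic HA Active/Active
# and "PLACEMENT_ALGO_PACKED" maps to Compact placement in the Avi
# schema; verify the enum names on your controller. max_se is a
# hypothetical lab value.
def build_se_group(name: str) -> dict:
    return {
        "name": name,
        "ha_mode": "HA_MODE_SHARED_PAIR",    # Elastic HA Active/Active
        "algo": "PLACEMENT_ALGO_PACKED",     # Compact virtual service placement
        "max_se": 2,                         # hypothetical lab sizing
    }

seg = build_se_group("Eng-Dev-SEG")
```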

Configure IPAM (Optional)

Configuring IPAM is optional, and you configure this only if you want to provide automatic IP addresses to the VIPs configured for the apps deployed in the default space in NSX. For VPC networks, NSX automatically plumbs data networks in IPAM and manages IP address assignment.

Configuring DNS profiles is also optional. 

I have configured both because I want to leverage NSX ALB for applications deployed in the default space. 

After creating the IPAM & DNS profiles, edit the NSX Cloud and associate IPAM and DNS with the cloud connector.

Enable NSX ALB on VPC

When VPC mode is enabled in the NSX Cloud connector, NSX ALB starts discovering all the VPCs in NSX projects that have the ALB flag set. The data networks in the cloud connector are then dynamically updated by NSX.

To enable the ALB flag on a VPC, log in to the NSX Manager as Enterprise Admin, navigate to the VPC (in the project’s space), edit the VPC, and turn on the flag “Enable NSX Advanced Load Balancer.”

A new private subnet, “_AVI_SUBNET-LB,” is auto-created in the VPC. This segment is used as the data segment for the project’s workloads and is configured on the Service Engines automatically. It is also attached to the VPC’s Tier-1 gateway.

For each NSX project that is discovered by the NSX cloud connector, a tenant is created in NSX ALB.

For each VPC with the “Enable NSX ALB” flag set, a dedicated tenant-scoped VRF context is created in NSX ALB.

The VPCs that are mapped as VRF contexts are scoped under the tenant, and hence the flag for “Tenant VRF” is enabled by default.

VPC’s auto-created segment is discovered by NSX ALB and is mapped to the VPC’s VRF context.

Configure RBAC for Tenant (VPC)

Typically, in a production environment, NSX ALB is configured for LDAP, and users are automatically discovered and mapped via Auth Profiles.

I am using local authentication in NSX ALB, so I am creating Tenant users manually. To create a new local user, navigate to the Administration > Users tab.

Map the user to the project’s tenant and assign the role “Tenant-Admin”. This user is equivalent to the Project Admin in NSX. 

Add another user for the project’s tenant with the role “Application Admin”. This user is equivalent to the VPC Admin in NSX.
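The two local users can also be created over the API (POST /api/user). The `access` structure below follows the Avi user schema as I recall it, and the role/tenant references are illustrative placeholders; verify the shape against your controller.

```python
# Sketch: local users scoped to the project's tenant in NSX ALB.
# ASSUMPTION: the "access" list of role/tenant references follows the
# Avi /api/user schema; usernames and refs are hypothetical placeholders.
def build_user(username: str, role_ref: str, tenant_ref: str) -> dict:
    return {
        "username": username,
        "access": [
            {"role_ref": role_ref, "tenant_ref": tenant_ref},
        ],
    }

# Equivalent of the NSX Project Admin.
project_admin = build_user("eng-admin", "/api/role?name=Tenant-Admin",
                           "/api/tenant?name=Eng-Dev")
# Equivalent of the NSX VPC Admin.
vpc_admin = build_user("eng-app-admin", "/api/role?name=Application-Admin",
                       "/api/tenant?name=Eng-Dev")
```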

Create Virtual Service for VPC

Log in to NSX ALB as the Application Admin user and create a new virtual service in the NSX Cloud.


Change the VRF context to the project’s tenant.

Make sure to select the “Enabled” and “Traffic Enabled” checkboxes for the virtual service.

Select the Service Engine Group that you created for this tenant. 

Note: You can create an SEG for each tenant in its respective tenancy after it is discovered by NSX ALB.

At this moment, the VPC’s auto-created data network is dynamically updated in the NSX cloud connector along with the VPC tier-1 gateway.

VPC’s VIP Placement

When you create a VIP for the virtual service in a VPC, you have two options for the VIP placement:

  • On the private subnet (this is not routable externally)
  • On the public subnet (this is routable externally)

Selecting the “PRIVATE” option creates an L2 VIP with an IP from the VPC’s private network (that is not routable). Selecting the “PUBLIC” option creates an L3 VIP with an IP from the VPC’s external network (that is routable).

When a Virtual Service is created, NSX ALB programs a static route in the VPC to perform southbound ECMP for the VIPs. 

The next hop for the static route points to the IP address of the data interface of the service engines.

SE data interface IPs
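Conceptually, the route NSX ALB programs is a /32 prefix for the VIP with one next hop per SE data interface, which is what gives you southbound ECMP. A sketch with placeholder addresses, modeling the route rather than calling any API:

```python
# Sketch: the VIP static route concept. NSX ECMPs the /32 VIP prefix
# across the SE data-interface next hops. All addresses are
# hypothetical placeholders.
def vip_route(vip: str, se_data_ips: list[str]) -> dict:
    return {
        "network": f"{vip}/32",              # one host route per VIP
        "next_hops": list(se_data_ips),      # one next hop per Service Engine
    }

route = vip_route("10.10.10.5", ["192.168.50.11", "192.168.50.12"])
print(len(route["next_hops"]))   # 2 -> traffic is ECMP'd across both SEs
```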

And that’s it for this post.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.
