NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and bridged much of the gap in VCD's NSX-T integration. This release adds the following NSX-T enhancements:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so that they can reach the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: Allowing multiple tenant edge gateways to use the same external network.
  • Dedicated: One-to-one relationship between the external network and the NSX-T edge gateway, and no other edge gateways can connect to the external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as Route Advertisement management and BGP configuration. Read More
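
For readers who prefer the API, here is a rough Python sketch that lists external networks and shows whether each edge gateway uplink dedicates its external network. It is only illustrative: the VCD URL, the bearer token, the API version header, and the exact field names (such as the dedicated flag on the uplink) are assumptions based on the VCD 10.2 CloudAPI and should be verified against your environment.

```python
import requests

VCD = "https://vcd.corp.local"            # hypothetical VCD endpoint
TOKEN = "<bearer-token>"                  # obtain via the VCD sessions API beforehand
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json;version=35.0",   # assumed API version for VCD 10.2
}

# List external networks known to VCD.
nets = requests.get(f"{VCD}/cloudapi/1.0.0/externalNetworks",
                    headers=HEADERS, verify=False).json()
for net in nets.get("values", []):
    print("External network:", net["name"])

# List edge gateways and check whether an uplink dedicates its external network.
gws = requests.get(f"{VCD}/cloudapi/1.0.0/edgeGateways",
                   headers=HEADERS, verify=False).json()
for gw in gws.get("values", []):
    for uplink in gw.get("edgeGatewayUplinks", []):
        print(gw["name"], "->", uplink.get("uplinkName"),
              "dedicated:", uplink.get("dedicated"))
```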

Getting Started With NSX ALB: Part-4-Load Balancer in Action

In the last post of this series, I completed NSX-T integration with NSX ALB. Now it’s time to test the load balancer. 

If you have missed the earlier posts of this series, you can read them using the links below:

1: NSX ALB Introduction & Architecture

2: Avi Controller Deployment & Configuration

3: NSX ALB Integration With NSX-T

Let’s get started.

Before I dive into the lab, let me first explain the topology that I have implemented in my lab.

  • I have two web servers sitting on the Web-LS logical segment, backed by subnet 192.40.40.0/24.
  • Logical segments ALB-Mgmt-LS and ALB-Data-LS are connected to the same Tier-1 gateway as the Web-LS segment and are backed by subnets 192.20.20.0/24 and 192.30.30.0/24 respectively.
  • Avi Service Engine VMs are connected to both the Mgmt and Data logical segments.
  • All 3 segments are created in the overlay transport zone.
  • My Tier-0 gateway has a BGP peering with a physical router, and my Windows jump box (Win JB) machine is able to ping the default gateway of each logical segment.
Read More
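
Once the virtual service is up, a quick way to confirm that traffic is actually being balanced is to hammer the VIP and count which backend answers. The snippet below is a minimal sketch under my own assumptions: the VIP address is hypothetical, and it presumes each web server returns a page that identifies itself (hostname or IP) so the responses can be told apart.

```python
import requests
from collections import Counter

VIP = "http://192.168.100.50/"   # hypothetical virtual service VIP
hits = Counter()

# Send a batch of requests to the VIP and count which backend served each one.
for _ in range(20):
    resp = requests.get(VIP, timeout=5)
    backend = resp.text.strip()[:60]   # assumes the page identifies the server
    hits[backend] += 1

for backend, count in hits.items():
    print(f"{count:>3}  {backend}")
```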

Getting Started With NSX ALB: Part-3-NSX-T Integration

In the previous post of this series, I discussed Avi Controller deployment and basic configuration. It’s time to integrate NSX-T with NSX ALB. The high-level steps of the integration can be summarized as follows:

  • Create a Content Library in vCenter
  • Deploy a Tier-1 gateway for Avi Management.
  • Create Logical Segments in NSX-T for the Avi SE VMs.
  • Create credentials for NSX-T and vCenter in Avi.
  • Register NSX-T with Avi Controller.
  • Create an IPAM profile. 

Let’s get started.

Create a Content Library in vCenter

Deployment of Avi Service Engine VMs is done automatically by the Avi Controller when we create a Virtual Service. For this to work, an empty content library needs to be created in the vCenter Server, as the controller pushes the Avi SE OVA into the content library and then deploys the SE VMs from it.
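
If you want to script this step instead of using the vSphere Client, a bare-bones Python sketch against the vCenter REST API could look like the following. Treat it as a sketch only: the vCenter address, credentials, library name, and datastore ID are placeholders, and the endpoint paths assume the vSphere 7 Automation API, so check them against your vCenter version.

```python
import requests

VC = "https://vcenter.corp.local"                      # hypothetical vCenter
AUTH = ("administrator@vsphere.local", "<password>")   # placeholder credentials

# Create an API session; the returned token is passed in a header on later calls.
session = requests.post(f"{VC}/api/session", auth=AUTH, verify=False)
headers = {"vmware-api-session-id": session.json()}

# Create an empty local content library backed by a datastore.
spec = {
    "name": "avi-se-library",                          # placeholder library name
    "type": "LOCAL",
    "storage_backings": [{"type": "DATASTORE", "datastore_id": "datastore-1001"}],
}
resp = requests.post(f"{VC}/api/content/local-library",
                     json=spec, headers=headers, verify=False)
print("Created library with ID:", resp.json())
```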

Deploy Tier-1 gateway for Avi Management

You can use an existing Tier-1 gateway or deploy a new (dedicated) one for Avi management. Read More
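
The same Tier-1 gateway can also be created declaratively through the NSX-T Policy API. Below is a minimal sketch, assuming a manager FQDN, credentials, Tier-0 path, and gateway name from my lab; adjust all of them to match your setup.

```python
import requests

NSX = "https://nsxmgr.corp.local"    # hypothetical NSX-T Manager
AUTH = ("admin", "<password>")       # placeholder credentials

# Declaratively create (or update) a Tier-1 gateway for Avi management traffic
# and attach it to the existing Tier-0 gateway.
t1_spec = {
    "display_name": "T1-Avi-Mgmt",
    "tier0_path": "/infra/tier-0s/T0-GW",              # assumed Tier-0 ID
    "route_advertisement_types": ["TIER1_CONNECTED"],  # advertise connected segments
}
resp = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/T1-Avi-Mgmt",
                      json=t1_spec, auth=AUTH, verify=False)
print("Tier-1 create/update status:", resp.status_code)
```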

Getting Started With NSX ALB: Part-2- Avi Controller Deployment & Configuration

The first post of this series talked about NSX ALB and its architecture, and also discussed the features that make NSX ALB unique. In this post, I will discuss the deployment and basic configuration of the controller, and later I will discuss ALB integration with NSX-T.

The official NSX ALB deployment process is documented Here, and the cluster creation process Here.

Hardware requirements for Avi Controllers and Service Engine VMs are documented here.

Prerequisites for NSX ALB Deployment:

  • vSphere is deployed and configured.
  • NSX-T Manager is deployed and integrated with vCenter.
  • T0-GW is deployed and is peered with the physical network using BGP.

The NSX ALB Controller OVA can be downloaded from https://customerportal.avinetworks.com

OVA deployment is straightforward, so I am not covering the deployment wizard. Make sure to leave the Sysadmin login authentication key field blank when deploying the controller OVA.

Once the Controller VM boots up, connect to its web interface at https://<Avi-Controller-ip>/

Configure the controller administrator account by setting a password and an email ID (used for password reset in case of account lockout).

Configure DNS and NTP server information. Read More
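
If you would rather automate the initial DNS/NTP setup, it can also be pushed through the controller's REST API. The snippet below is only a sketch: the controller FQDN, credentials, server addresses, and payload fields are my assumptions based on Avi's systemconfiguration object, so confirm them against the API documentation for your controller version.

```python
import requests

CTRL = "https://avi-controller.corp.local"   # hypothetical controller FQDN
AUTH = ("admin", "<password>")               # placeholder credentials

# Fetch the current system configuration, update DNS/NTP, and write it back.
cfg = requests.get(f"{CTRL}/api/systemconfiguration", auth=AUTH, verify=False).json()
cfg["dns_configuration"] = {"server_list": [{"addr": "192.168.10.5", "type": "V4"}]}
cfg["ntp_configuration"] = {
    "ntp_servers": [{"server": {"addr": "192.168.10.6", "type": "V4"}}]
}
resp = requests.put(f"{CTRL}/api/systemconfiguration", json=cfg, auth=AUTH, verify=False)
print("System configuration update status:", resp.status_code)
```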

Getting Started With NSX ALB: Part-1- Introduction & Architecture

NSX Advanced Load Balancer (formerly Avi Vantage) is a multi-cloud, software-defined load balancer that provides scalable application delivery across any infrastructure. NSX ALB is 100% software-defined and provides:

  • Multi-cloud: Consistent experience across on-premises and cloud environments through central management and orchestration.
  • Intelligence: Built-in analytics drive actionable insights that make autoscaling seamless, automation intelligent and decision making easy.
  • Automation: 100% RESTful APIs enable self-service provisioning and integration into the CI/CD pipeline for application delivery.

Note: The NSX ALB solution came through VMware's acquisition of Avi Networks in 2019.

Some of the key features of NSX ALB are:

  • Autoscaling of Load Balancers and Applications.
  • Web Application Analytics & Performance Insights.
  • Automation for IT, Self-Service for Developers.

To know more about these features, please visit the Avi Networks website.

NSX ALB Architecture

NSX ALB consists of two main components:

  • Avi Controller.
  • Service Engines (SE).

Controllers are deployed by the platform administrator, and Service Engines are automatically deployed by the controller when we create Virtual Services. Read More

Load Balancing VMware Cloud Director with NSX-T

Recently I tested NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. It was a simple single-cell deployment, as I was just testing the integration. Later I scaled my lab to a 3-node VCD cell cluster and used the NSX-T load balancer feature to test load balancing of the VCD cells.

In order to use the NSX-T load balancer, we can deploy VCD cells in two different ways:

  • Deploy VCD cells on overlay segments connected to T1 gateway and configure LB straight away (easy method).
  • Deploy VCD cells on VLAN backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already there in my infrastructure.

In my lab, NSX-T follows the VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’, and I have a dedicated distributed portgroup named ‘VCD-Mgmt’ backed by VLAN 1800; all my VCD cells are connected to this portgroup. Read More
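
To give a feel for what the resulting configuration looks like, here is a trimmed-down Python sketch that drives the NSX-T Policy API to build a pool of the VCD cells, an LB service on the dedicated Tier-1, and an HTTPS virtual server. Everything in it (manager address, credentials, cell IPs, VIP, object IDs, and the default application profile path) is a placeholder or an assumption from my lab, so treat it as an outline rather than a drop-in script.

```python
import requests

NSX = "https://nsxmgr.corp.local"    # hypothetical NSX-T Manager
AUTH = ("admin", "<password>")       # placeholder credentials

def patch(path, body):
    # Helper: declaratively create/update a Policy API object.
    r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()

# Pool containing the three VCD cells (placeholder IPs).
patch("/infra/lb-pools/vcd-cells-pool", {
    "display_name": "vcd-cells-pool",
    "algorithm": "LEAST_CONNECTION",
    "members": [
        {"ip_address": "192.168.18.11", "port": "443"},
        {"ip_address": "192.168.18.12", "port": "443"},
        {"ip_address": "192.168.18.13", "port": "443"},
    ],
})

# LB service attached to the dedicated Tier-1 gateway.
patch("/infra/lb-services/vcd-lb", {
    "display_name": "vcd-lb",
    "connectivity_path": "/infra/tier-1s/T1-VCD-LB",   # assumed Tier-1 ID
})

# Virtual server exposing the VCD VIP on port 443.
patch("/infra/lb-virtual-servers/vcd-vip", {
    "display_name": "vcd-vip",
    "ip_address": "192.168.18.100",                    # placeholder VIP
    "ports": ["443"],
    "pool_path": "/infra/lb-pools/vcd-cells-pool",
    "lb_service_path": "/infra/lb-services/vcd-lb",
    "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile",
})
```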

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic; a lot of functionality was not available, which stopped customers from using NSX-T full-fledged with VCD. NSX-T 2.5 added more functionality in terms of VCD integration, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is now much more tightly coupled with VCD, and customers can leverage almost all NSX-T functionality with VCD. Read More

NSX-T Federation-Part 4: Configure Stretched Networking

Welcome to the fourth part of the NSX Federation series. In the last post, I talked about configuring local and global NSX-T managers to enable federation. In this post, I will show how we can leverage federation to configure stretched networking across sites.

If you have missed the earlier posts of this series, you can read them using the links below:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

3: Configure Federation

NSX-T Federation Topology

Before diving into the lab, I want to do a quick recap of the lab topology that I will be building in this post.

The following components in my lab are already built out:

1: Cross Link Router: This router is responsible for facilitating communication between Site-A & Site-B SDDC/NSX.

  • Site-A ToR01/02 are forming BGP neighborship with Cross Link Router and advertising necessary subnets to enable inter-site communication.
  • Site-B ToR01/02 are also BGP peering with the Cross Link Router and advertising subnets. 
Read More
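
As a rough illustration of the end result, a stretched overlay segment can be defined once on the Global Manager and realized at both sites; its span follows the stretched Tier-1 gateway it attaches to. The sketch below is based on my own assumptions (Global Manager address, credentials, Tier-1 path, and subnet are all placeholders), and the UI workflow covered in the post is the path I actually use, so verify the API details before using anything like this.

```python
import requests

GM = "https://global-manager.corp.local"   # hypothetical Global Manager
AUTH = ("admin", "<password>")             # placeholder credentials

# Define an overlay segment from the Global Manager; it is stretched to every
# location spanned by the Tier-1 gateway it connects to.
seg_spec = {
    "display_name": "Stretched-Web-LS",
    "connectivity_path": "/global-infra/tier-1s/T1-Stretched",   # assumed stretched Tier-1
    "subnets": [{"gateway_address": "172.16.10.1/24"}],          # placeholder subnet
}
resp = requests.patch(f"{GM}/global-manager/api/v1/global-infra/segments/Stretched-Web-LS",
                      json=seg_spec, auth=AUTH, verify=False)
print("Stretched segment create/update status:", resp.status_code)
```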

NSX-T Federation-Part 3: Configure Federation

Welcome to the third post of the NSX Federation series. In part 1 of this series, I discussed the architecture of the NSX-T federation, and part 2 was focused on my lab walkthrough.

In this post, I will show how to configure federation in NSX-T.

If you have missed the earlier posts of this series, you can read them using the links below:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

Let’s get started.

Federation Prerequisites

Before attempting to deploy and configure federation, you have to ensure that the following prerequisites are in place:

  • There must be a latency of 150 ms or less between sites.
  • The Global Manager supports only Policy Mode; Federation does not support Manager Mode.
  • The Global Manager and all Local Managers must have NSX-T 3.0 installed.
  • NSX-T Edge clusters at each site are configured with RTEP IPs.
  • Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use separate VLANs.
Read More
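
The latency requirement is easy to verify up front. The short sketch below simply pings a couple of remote-site addresses (both hypothetical) from the local site and flags anything above the 150 ms limit; it assumes a Linux-style ping output.

```python
import re
import subprocess

# Hypothetical remote-site targets: the Local Manager and the RTEP gateway.
TARGETS = ["siteb-nsxmgr.corp.local", "192.168.52.1"]
MAX_RTT_MS = 150.0   # maximum inter-site latency supported by Federation

for target in TARGETS:
    out = subprocess.run(["ping", "-c", "5", target],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)   # grab the average RTT from the summary
    if not match:
        print(f"{target}: unreachable")
        continue
    avg = float(match.group(1))
    status = "OK" if avg <= MAX_RTT_MS else "TOO HIGH"
    print(f"{target}: average {avg:.1f} ms -> {status}")
```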

NSX-T Federation-Part 2: Lab Setup

In the first part of the NSX-T federation series, I discussed the architecture and components of the federation and also covered some use cases. In this post, I will explain my lab topology before diving into the NSX-T Federation configuration.

I am setting up federation between two sites in my lab, and on both sites I have already deployed the following:

  • vSphere 7 (ESXi & vCenter).
  • vSAN for shared storage.
  • NSX-T 3.0 Manager.
  • NSX-T 3.0 Edges.

I have 4 ESXi hosts at each site, and each host has 4 physical NICs. All 4 NICs are connected to trunked ports on the ToR.

The networking architecture in my lab is a mix of VDS + N-VDS. 

Two NICs from each host carry regular datacenter traffic (Mgmt, vSAN & vMotion).

The other two NICs are connected to the N-VDS, which carries overlay traffic only.
For edge networking, I am using a multi-TEP, single N-VDS architecture. Read More
For edge networking, I am using multi-tep, single NVD-S architecture. Read More