Load Balancing VMware Cloud Director with NSX-T

Recently I tested NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. It was a simple single-cell deployment, as I was just testing the integration. Later I scaled my lab to a 3-node VCD cell deployment and used the NSX-T load balancer feature to load balance the cells.

In order to use the NSX-T load balancer, we can deploy VCD cells in 2 different ways:

  • Deploy VCD cells on overlay segments connected to T1 gateway and configure LB straight away (easy method).
  • Deploy VCD cells on VLAN backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already there in my infrastructure.

In my lab, NSX-T follows a VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’ with a dedicated distributed portgroup named ‘VCD-Mgmt’, backed by VLAN 1800, and all my VCD cells are connected to this portgroup. Read More
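With the cells attached to the VLAN-backed portgroup, load balancing them comes down to creating an LB service, a server pool and a virtual server on the dedicated Tier-1 gateway. Below is a minimal Policy API sketch of those three objects; the manager FQDN, credentials, Tier-1 ID, cell IPs, VIP and the default TCP application profile path are all assumptions from my lab, so adjust them for your environment.

```python
# Hedged sketch only: publish the 3 VCD cells behind an NSX-T L4 load balancer
# using the Policy API. FQDN, credentials, IDs, IPs and the app profile path
# are placeholders.
import requests

NSX = "https://nsx-mgr.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only; use proper certificates in production

def patch(path, body):
    # PATCH creates the object if it doesn't exist, or updates it if it does
    r = S.patch(f"{NSX}/policy/api/v1/infra{path}", json=body)
    r.raise_for_status()

# 1. LB service attached to the dedicated Tier-1 gateway fronting the cells
patch("/lb-services/vcd-lb", {
    "display_name": "vcd-lb",
    "size": "SMALL",
    "connectivity_path": "/infra/tier-1s/vcd-t1",   # assumed Tier-1 ID
})

# 2. Server pool containing the three VCD cells on VLAN 1800
patch("/lb-pools/vcd-cells", {
    "display_name": "vcd-cells",
    "algorithm": "LEAST_CONNECTION",
    "members": [
        {"ip_address": "192.168.18.11", "port": "443"},
        {"ip_address": "192.168.18.12", "port": "443"},
        {"ip_address": "192.168.18.13", "port": "443"},
    ],
})

# 3. L4 virtual server publishing the VCD HTTPS endpoint on the VIP
patch("/lb-virtual-servers/vcd-vip", {
    "display_name": "vcd-vip",
    "ip_address": "192.168.18.10",                  # VIP (placeholder)
    "ports": ["443"],
    "pool_path": "/infra/lb-pools/vcd-cells",
    "lb_service_path": "/infra/lb-services/vcd-lb",
    "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile",
})
```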

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic: many capabilities were missing, which kept customers from fully adopting NSX-T with VCD. NSX-T 2.5 added more VCD integration functionality, but some features were still lacking.

With the release of NSX-T 3.0, the game has changed: NSX-T is now much more tightly coupled with VCD, and customers can leverage almost all NSX-T functionality with VCD. Read More

NSX-T Federation-Part 4: Configure Stretched Networking

Welcome to the fourth part of the NSX Federation series. In the last post, I talked about configuring local and global NSX-T managers to enable federation. In this post, I will show how we can leverage federation to configure stretched networking across sites.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

3: Configure Federation

NSX-T Federation Topology

Before diving into the lab, I want to do a quick recap of the lab topology that I will be building in this post.

The following components in my lab are already built out:

1: Cross Link Router: This router is responsible for facilitating communication between Site-A & Site-B SDDC/NSX.

  • Site-A ToR01/02 are forming BGP neighborship with Cross Link Router and advertising necessary subnets to enable inter-site communication.
  • Site-B ToR01/02 are also BGP peering with the Cross Link Router and advertising subnets. 
Read More
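To give a feel for what the stretched configuration looks like from an automation standpoint, here is a hedged Policy API sketch that creates a global segment from the Global Manager and attaches it to a Tier-1 that is already stretched across Site-A and Site-B. The GM FQDN, credentials, gateway ID, segment name and subnet are placeholders, not values from this lab.

```python
# Hedged sketch: create a stretched overlay segment from the Global Manager.
# GM FQDN, credentials, Tier-1 ID, segment name and subnet are placeholders.
import requests

GM = "https://gm.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only

# A global segment inherits the span of the gateway it is attached to, so
# connecting it to a Tier-1 stretched across Site-A and Site-B makes the
# segment available in both locations.
segment = {
    "display_name": "Global-Web-LS",
    "connectivity_path": "/global-infra/tier-1s/Global-T1",  # assumed stretched T1
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}
r = S.patch(f"{GM}/global-manager/api/v1/global-infra/segments/Global-Web-LS", json=segment)
r.raise_for_status()
print("Stretched segment pushed to Global Manager:", r.status_code)
```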

NSX-T Federation-Part 3: Configure Federation

Welcome to the third post of the NSX Federation series. In part 1 of this series, I discussed the architecture of the NSX-T federation, and part 2 was focussed on my lab walkthrough.

In this post, I will show how to configure federation in NSX-T.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

Let’s get started.

Federation Prerequisites

Before attempting to deploy and configure federation, you have to ensure that the following prerequisites are in place:

  • There must be a latency of 150 ms or less between sites.
  • The Global Manager supports only Policy Mode; Federation does not support Manager Mode.
  • The Global Manager and all Local Managers must have NSX-T 3.0 installed.
  • NSX-T Edge clusters at each site must be configured with RTEP IPs.
  • Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use separate VLANs.
Read More
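As a quick sanity check for the version prerequisite above, the node version API can be queried on the Global Manager and each Local Manager. This is only a sketch: the appliance FQDNs and credentials are placeholders, and the version field is read defensively in case the response shape differs in your build.

```python
# Hedged sketch: check that the Global Manager and both Local Managers are on
# NSX-T 3.0 or later. Appliance FQDNs and credentials are placeholders.
import requests

MANAGERS = {
    "global-manager": "https://gm.lab.local",
    "site-a-lm": "https://nsx-sitea.lab.local",
    "site-b-lm": "https://nsx-siteb.lab.local",
}

for name, url in MANAGERS.items():
    r = requests.get(f"{url}/api/v1/node/version",
                     auth=("admin", "VMware1!VMware1!"), verify=False)  # lab only
    r.raise_for_status()
    version = r.json().get("product_version", "0")
    major = int(version.split(".")[0]) if version[:1].isdigit() else 0
    print(f"{name}: {version} -> {'OK for federation' if major >= 3 else 'needs upgrade'}")
```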

NSX-T Federation-Part 2: Lab Setup

In the first part of the NSX-T Federation series, I discussed the architecture and components of federation, along with some use cases. In this post, I will explain my lab topology before diving into the NSX-T Federation configuration.

I am trying to set up a federation between 2 sites in my lab, and on both sites I have already deployed the following:

  • vSphere 7 (ESXi & vCenter).
  • vSAN for shared storage.
  • NSX-T 3.0 Manager.
  • NSX-T 3.0 Edges.

I have 4 ESXi hosts on each site, and each ESXi host has 4 physical NICs. All 4 NICs are connected to trunked ports on the ToR.

The networking architecture in my lab is a mix of VDS + N-VDS. Two NICs from each host carry regular datacenter traffic (Mgmt, vSAN & vMotion), while the other two NICs are connected to the N-VDS and carry overlay traffic only.

For edge networking, I am using a multi-TEP, single N-VDS architecture. Read More

NSX-T Federation-Part 1: Introduction & Architecture

NSX-T federation is one of the new features introduced in NSX-T 3.0, and it allows you to manage multiple NSX-T Data Center environments with a single pane of glass. Federation allows you to stretch NSX-T deployments over multiple sites and/or towards the public cloud while keeping a single pane of management. Federation can be compared with the Cross-vCenter feature of NSX-V, where universal objects span more than one site.

NSX-T Federation Components/Architecture/Topology

With the NSX-T Federation, the concept of a Global Manager (GM) is introduced, which enables a single pane of glass for all connected NSX-T managers. A global manager is an NSX-T manager deployed with the role “Global”. Objects created from the Global Manager console are called global objects and pushed to the connected local NSX-T Managers.

The below diagram shows the high-level architecture of the NSX-T Federation.

You can create networking constructs (T0/T1 gateways, segments, etc.) from Global Manager that can be stretched across one or more locations. Read More

VRF Lite Configuration Validation

In the last post, I covered the steps for configuring VRF gateways and attaching a Tier-1 gateway to a VRF. In this post, I am going to test my configuration to ensure things are working as expected.

The following configuration was done in vSphere prior to VRF validation:

  • A Tenant-A VM is deployed and connected to segment ‘Tenant-A-App-LS’ and has IP 172.16.70.2
  • A Tenant-B VM is deployed and connected to segment ‘Tenant-B-App-LS’ and has IP 172.16.80.2

Connectivity Test

To test connectivity, I first picked the Tenant-A VM and performed the following tests:

A: Pinged the default gateway and got a response.

B: Pinged the default gateway of the Tenant-B segment and got a response.

C: Pinged the Tenant-B VM and got a response.

D: Pinged a server on the physical network and got a response.

I performed the same set of tests from the Tenant-B VM, and all of them passed.

Traceflow

Traceflow is another way of testing connectivity between VMs. Below are my traceflow results for the 2 VMs:

Here is the topology diagram created by NSX-T showing the path taken by a packet from the Tenant-A-App01 VM to the Tenant-B-App01 VM. Read More
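Traceflow can also be kicked off through the Manager API rather than the UI, which is handy when you want to repeat the same validation after every change. The sketch below is illustrative only: the manager FQDN, credentials and the logical port UUID of Tenant-A-App01 are placeholders, while the source and destination IPs are the tenant VM addresses listed above.

```python
# Hedged sketch: run the same Tenant-A -> Tenant-B test via the Traceflow API.
# Manager FQDN, credentials and the Tenant-A-App01 port UUID are placeholders.
import time
import requests

NSX = "https://nsx-mgr.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only

tf_request = {
    "lport_id": "11111111-2222-3333-4444-555555555555",  # Tenant-A-App01 vNIC port
    "packet": {
        "resource_type": "FieldsPacketData",
        "transport_type": "UNICAST",
        "routed": True,                                   # traffic crosses the T0/VRF boundary
        "ip_header": {"src_ip": "172.16.70.2", "dst_ip": "172.16.80.2"},
    },
}
tf = S.post(f"{NSX}/api/v1/traceflows", json=tf_request)
tf.raise_for_status()
tf_id = tf.json()["id"]

time.sleep(5)  # give the trace a few seconds to complete
obs = S.get(f"{NSX}/api/v1/traceflows/{tf_id}/observations").json()
for o in obs.get("results", []):
    print(o.get("resource_type"), o.get("component_name"), o.get("transport_node_name"))
```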

Configuring VRF Lite in NSX-T 3.0

NSX-T provides true multi-tenancy capabilities to an SDDC/cloud infrastructure, and there are various ways of achieving it depending on the use case. In the simplest deployment architecture, multi-tenancy is achieved by connecting various Tier-1 gateways to a Tier-0 gateway, where each T1 gateway can belong to a dedicated tenant. Another way of implementing this is to have multiple T0 gateways, where each tenant has a dedicated T0.

Things have changed with NSX-T 3.0. One of the new features introduced in NSX-T 3.0 is the VRF (virtual routing and forwarding) gateway, aka VRF Lite.

VRF Lite allows us to virtualize the routing table on a T0 and provide tenant separation from a routing perspective. With VRF Lite, we are able to configure per-tenant data plane isolation all the way up to the physical network without creating a Tier-0 gateway per tenant.

VRF Architecture

At a high level, the VRF architecture can be described as below:

We have a parent Tier-0 gateway to which multiple VRFs connect. Read More
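To make the "parent Tier-0 with multiple VRFs" idea concrete, here is a hedged Policy API sketch that carves a tenant VRF gateway out of an existing parent Tier-0. The manager FQDN, credentials and gateway IDs are placeholders.

```python
# Hedged sketch: create a tenant VRF gateway linked to a parent Tier-0 via the
# Policy API. Manager FQDN, credentials and gateway IDs are placeholders.
import requests

NSX = "https://nsx-mgr.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only

vrf_gateway = {
    "display_name": "TenantA-VRF",
    "vrf_config": {
        # Pointing at the parent Tier-0 makes this a VRF: it reuses the parent's
        # edge cluster and uplinks but keeps its own routing table.
        "tier0_path": "/infra/tier-0s/Parent-T0",
    },
}
r = S.patch(f"{NSX}/policy/api/v1/infra/tier-0s/TenantA-VRF", json=vrf_gateway)
r.raise_for_status()
print("VRF gateway TenantA-VRF created/updated")
```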

Enable BGP on VRF Via API

Recently, while trying to set up VRF Lite in NSX-T 3.0 in my lab, I came across a bug that prevented me from turning on BGP on a VRF gateway via the UI. This bug affects NSX-T versions 3.0.1 and 3.0.1.1. The error I was getting when trying to enable BGP was:

“Only ECMP, enabled, BGP Aggregate to be configured for VRF BGP Config”

After researching for a bit, I figured out that there is currently no resolution for this issue, and the API is the only method by which BGP can be turned on on a VRF gateway. Below are the steps for doing so.

Step 1: Find Tier-0 ID of the VRF Gateway

From the above output, we can see the VRF ID is "TenantB-VRF".

Step 2: Find Locale Service ID of VRF Gateway

From the above output, we can see the Locale Services ID is "default".

Step 3: Prepare the JSON payload for BGP enablement

To find out the syntax of the JSON payload needed to enable BGP on the VRF, we can first grab the existing state of the BGP configuration.

Step 4: Enable BGP on VRF

To enable BGP, we take the complete API output from step 3, change "enabled": false to "enabled": true, and pass this output as the payload of a PATCH call. Read More
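Putting the four steps together, a small script along the following lines can do the GET/PATCH round trip. The VRF ID ("TenantB-VRF") and locale-services ID ("default") come from steps 1 and 2; the manager FQDN and credentials are placeholders.

```python
# Hedged sketch of steps 1-4: read the VRF's BGP config and PATCH it back with
# "enabled": true. Manager FQDN and credentials are placeholders; the VRF and
# locale-services IDs come from steps 1 and 2.
import requests

NSX = "https://nsx-mgr.lab.local"
BGP_URL = f"{NSX}/policy/api/v1/infra/tier-0s/TenantB-VRF/locale-services/default/bgp"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only

# Step 3: grab the existing BGP configuration ("enabled" is currently false)
bgp = S.get(BGP_URL).json()
print("BGP currently enabled:", bgp.get("enabled"))

# Step 4: flip the flag and send the same object back as the PATCH payload
bgp["enabled"] = True
r = S.patch(BGP_URL, json=bgp)
r.raise_for_status()
print("BGP now enabled:", S.get(BGP_URL).json().get("enabled"))
```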

Marking User Created Transport Zone as Default TZ in NSX-T

For a freshly deployed NSX-T environment, you will find 2 default transport zones created:

  • nsx-overlay-transportzone
  • nsx-vlan-transportzone

These are system-created TZs and are therefore marked as default.

You can consume these TZs or create new ones as per your infrastructure. Default TZs can't be deleted.

A newly created transport zone doesn't show up as default. Also, when creating a new TZ via the UI, we don't get any option to enable this flag. As of now, this is possible only via the API.

Creating a new transport zone with the "is_default" flag set to true works as intended: the flag is removed from the system-created TZ, and the newly created TZ is marked as the default.

Note: We can also modify an existing TZ and mark it as default after creation.

Let’s have a look at the properties of a system-created transport zone.

To set a manually created TZ as default, we have to remove the "is_default" flag from the system-created TZ and then create a new TZ, or modify an existing TZ, with this flag. Read More
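For illustration, the flow described above can be scripted against the transport-zones API along these lines; the manager FQDN, credentials and TZ UUIDs are placeholders, and the PUT simply sends back the full object returned by GET (including its _revision) with the flag flipped.

```python
# Hedged sketch: clear "is_default" on the system-created TZ, then set it on a
# user-created TZ. Manager FQDN, credentials and TZ UUIDs are placeholders.
import requests

NSX = "https://nsx-mgr.lab.local"
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")
S.verify = False  # lab only

def set_default(tz_id, flag):
    # PUT needs the full object from GET (including _revision), so read-modify-write
    tz = S.get(f"{NSX}/api/v1/transport-zones/{tz_id}").json()
    tz["is_default"] = flag
    r = S.put(f"{NSX}/api/v1/transport-zones/{tz_id}", json=tz)
    r.raise_for_status()
    print(tz.get("display_name"), "is_default ->", flag)

set_default("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", False)  # nsx-overlay-transportzone
set_default("99999999-8888-7777-6666-555555555555", True)   # my custom overlay TZ
```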