In the first part of the NSX-T Federation series, I discussed the architecture and components of Federation and covered some use cases. In this post, I will explain my lab topology before diving into the NSX-T Federation configuration.
I am trying to set up a federation between 2 sites in my lab, and on both sites I have already deployed the following:
vSphere 7 (ESXi & vCenter).
vSAN for shared storage.
NSX-T 3.0 Manager.
NSX-T 3.0 Edges.
I have 4 ESXi hosts at each site, and each host has 4 physical NICs. All 4 NICs are connected to trunk ports on the ToR.
The networking architecture in my lab is a mix of VDS + N-VDS.
2 NICs from each host carry regular datacenter traffic (Mgmt, vSAN & vMotion).
The other 2 NICs are connected to the N-VDS, which carries overlay traffic only.
For edge networking, I am using a multi-TEP, single N-VDS architecture. … Read More
NSX-T Federation is one of the new features introduced in NSX-T 3.0. It allows you to stretch NSX-T deployments over multiple sites and/or toward the public cloud while managing all connected NSX-T Data Center environments from a single pane of glass. Federation can be compared to the Cross-vCenter feature of NSX-V, where universal objects span more than one site.
NSX-T Federation Components/Architecture/Topology
With NSX-T Federation, the concept of a Global Manager (GM) is introduced, which provides a single pane of glass across all connected NSX-T Managers. A Global Manager is an NSX-T Manager deployed with the role "Global". Objects created from the Global Manager console are called global objects and are pushed to the connected local NSX-T Managers.
The diagram below shows the high-level architecture of NSX-T Federation.
You can create networking constructs (T0/T1 gateways, segments, etc.) from Global Manager that can be stretched across one or more locations.… Read More
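As an illustration of how such a global object might be created, below is a minimal sketch against what I understand to be the Global Manager's Policy API. The /global-manager/api/v1/global-infra base path, the manager address, credentials, and the T1/segment names are all placeholders and assumptions, not objects from the post.

```python
import requests
import urllib3
urllib3.disable_warnings()

GM = "https://gm.lab.local"            # placeholder Global Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")   # placeholder credentials

# A global segment attached to a global Tier-1; its span follows the gateway it connects to.
segment = {
    "display_name": "Global-App-LS",                         # placeholder name
    "connectivity_path": "/global-infra/tier-1s/Global-T1",  # placeholder global T1
    "subnets": [{"gateway_address": "192.168.50.1/24"}],     # placeholder subnet
}

r = requests.patch(f"{GM}/global-manager/api/v1/global-infra/segments/Global-App-LS",
                   json=segment, auth=AUTH, verify=False)
r.raise_for_status()
```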
In the last post, I covered the steps for configuring VRF gateways and attaching a Tier-1 gateway to a VRF. In this post, I am going to test my configuration to ensure things are working as expected.
The following configuration was done in vSphere prior to VRF validation:
A Tenant-A VM is deployed, connected to segment 'Tenant-A-App-LS', and has IP 172.16.70.2.
A Tenant-B VM is deployed, connected to segment 'Tenant-B-App-LS', and has IP 172.16.80.2.
Connectivity Test
To test connectivity, I first picked the Tenant-A VM and performed the following tests:
A: Pinged the default gateway and received a response.
B: Pinged the default gateway of the Tenant-B segment and received a response.
C: Pinged the Tenant-B VM and received a response.
D: Pinged a server on the physical network and received a response.
I then performed the same set of tests from the Tenant-B VM, and all of them passed.
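These checks are easy to script for repeatability. Below is a minimal sketch that assumes the .1 address of each segment is the gateway and uses a placeholder for the physical-network server; neither is stated explicitly in the post.

```python
import subprocess

# Targets for the Tenant-A connectivity tests (A-D above).
# The gateway IPs are assumed to be the .1 address of each segment;
# PHYSICAL_SERVER_IP is a placeholder to replace with a real host on the physical network.
TARGETS = {
    "A: Tenant-A default gateway": "172.16.70.1",
    "B: Tenant-B default gateway": "172.16.80.1",
    "C: Tenant-B VM": "172.16.80.2",
    "D: physical network server": "PHYSICAL_SERVER_IP",
}

for name, ip in TARGETS.items():
    # '-c 4' sends four echo requests (Linux ping syntax).
    rc = subprocess.run(["ping", "-c", "4", ip],
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode
    print(f"{name} ({ip}): {'PASS' if rc == 0 else 'FAIL'}")
```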
Traceflow
Traceflow is another way of testing connectivity between VMs. Below are my Traceflow results for the 2 VMs:
Here is the topology diagram created by NSX-T showing the path taken by a packet from the Tenant-A-App01 VM to the Tenant-B-App01 VM.… Read More
NSX-T provides true multi-tenancy capabilities to an SDDC/cloud infrastructure, and there are various ways of achieving it depending on the use case. In the simplest deployment architecture, multi-tenancy is achieved by connecting multiple Tier-1 gateways to a Tier-0 gateway, where each T1 gateway can belong to a dedicated tenant. Another way of implementing this is to have multiple T0 gateways, where each tenant has a dedicated T0.
Things have changed with NSX-T 3.0. One of the new features introduced in NSX-T 3.0 is the VRF (Virtual Routing and Forwarding) gateway, also known as VRF Lite.
VRF Lite allows us to virtualize the routing table on a T0 and provide tenant separation from a routing perspective. With VRF Lite we can configure per-tenant data plane isolation all the way up to the physical network without creating a Tier-0 gateway per tenant.
VRF Architecture
At a high level, the VRF architecture can be described as below:
We have a parent Tier-0 gateway to which multiple VRF gateways connect.… Read More
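To make the parent/VRF relationship concrete, here is a minimal sketch of creating a VRF gateway via the Policy API as I understand it: a VRF gateway is modelled as a Tier-0 whose vrf_config points at the parent Tier-0. The manager address, credentials, and the names Parent-T0 / Tenant-A-VRF are placeholders, not objects from the post.

```python
import requests
import urllib3
urllib3.disable_warnings()

NSX = "https://nsx-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# A VRF gateway is a Tier-0 object whose vrf_config references the parent Tier-0.
vrf = {
    "display_name": "Tenant-A-VRF",
    "vrf_config": {"tier0_path": "/infra/tier-0s/Parent-T0"},  # existing parent T0
}

r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/Tenant-A-VRF",
                   json=vrf, auth=AUTH, verify=False)
r.raise_for_status()
```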
Recently, while working in my lab to set up VRF Lite in NSX-T 3.0, I came across a bug that prevented me from turning on BGP on the VRF gateway via the UI. This bug affects NSX-T versions 3.0.1 and 3.0.1.1. The error I was getting when trying to enable BGP was:
“Only ECMP, enabled, BGP Aggregate to be configured for VRF BGP Config”
After researching for a bit, I found that there is currently no resolution for this issue, and the API is the only method by which BGP can be turned on for a VRF gateway. Below are the steps for doing so.
To enable BGP, we need to take the complete API output from step 3, change "enabled": false to "enabled": true, and pass this output as the payload of a PATCH call.… Read More
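As a rough sketch of that workaround in Python: the manager address, credentials, VRF gateway ID, and the "default" locale-services ID are placeholders, and the BGP path shown is the Policy API endpoint as I understand it.

```python
import requests
import urllib3
urllib3.disable_warnings()

NSX = "https://nsx-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials
BGP_URL = (f"{NSX}/policy/api/v1/infra/tier-0s/Tenant-A-VRF"
           "/locale-services/default/bgp")

# Get the complete BGP config of the VRF gateway.
bgp = requests.get(BGP_URL, auth=AUTH, verify=False).json()

# Change "enabled": false to "enabled": true, leaving the rest of the output untouched.
bgp["enabled"] = True

# Pass the modified output as the payload of the PATCH call.
requests.patch(BGP_URL, json=bgp, auth=AUTH, verify=False).raise_for_status()
```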
In a freshly deployed NSX-T environment, you will find 2 default transport zones already created:
nsx-overlay-transportzone
nsx-vlan-transportzone
These are system-created TZs and are therefore marked as default.
You can consume these TZs or create new ones to suit your infrastructure. The default TZs can't be deleted.
A newly created transport zone doesn't show up as default, and when creating a new TZ via the UI, there is no option to enable this flag. As of now, this is possible only via the API.
Creating a new transport zone with the "is_default" flag set to true works as intended: the "is_default" flag is removed from the system-created TZ, and the newly created TZ is marked as the default one.
Note: We can modify a TZ and mark it as default post creation as well.
Let's have a look at the properties of a system-created transport zone.
To set a manually created TZ as default, we have to remove the "is_default" flag from the system-created TZ and then create a new TZ, or modify an existing TZ, with this flag.… Read More
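A minimal sketch of that flow against the Manager API, assuming the /api/v1/transport-zones endpoints and field names as I understand them in 3.0 (the manager address, credentials, and the new TZ name are placeholders):

```python
import requests
import urllib3
urllib3.disable_warnings()

NSX = "https://nsx-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# 1. Fetch the system-created overlay TZ and clear its is_default flag.
tzs = requests.get(f"{NSX}/api/v1/transport-zones",
                   auth=AUTH, verify=False).json()["results"]
system_tz = next(tz for tz in tzs if tz["display_name"] == "nsx-overlay-transportzone")
system_tz["is_default"] = False
requests.put(f"{NSX}/api/v1/transport-zones/{system_tz['id']}",
             json=system_tz, auth=AUTH, verify=False).raise_for_status()

# 2. Create a new overlay TZ marked as default.
new_tz = {
    "display_name": "Prod-Overlay-TZ",   # placeholder name
    "transport_type": "OVERLAY",
    "is_default": True,
}
r = requests.post(f"{NSX}/api/v1/transport-zones", json=new_tz, auth=AUTH, verify=False)
r.raise_for_status()
print("New default TZ created:", r.json()["id"])
```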
In my last post, I explained the egress/ingress packet flow in a single-tier routing topology, where logical segments are attached directly to the T0 gateway.
In this article I will explain the same for a multi-tier routing topology in NSX-T.
Here is the topology which I have used in my lab.
Egress to Physical Network
Scenario: VM 1, with IP 192.168.10.2, is connected to logical segment App-LS and wants to communicate with a VM with IP 10.196.88.2 that sits on the physical network.
Step 1: VM 1 sends the packet to its default gateway (192.168.10.1), which is a LIF IP on the T1-DR.
Step 2: The T1-DR checks its forwarding table to make a routing decision. Since a route to network 10.196.88.x doesn't exist in the forwarding table, the T1-DR sends the packet to its default gateway (100.64.0.0), which is the DR instance of the Tier-0 on the same hypervisor.
Step 3: The packet is sent to the T0-DR instance over the internal segment (Router-Link). … Read More
In the last post of the NSX-T series, I demonstrated East-West packet flow and discussed various cases around it. In this post, I will explain how packets are forwarded for northbound/southbound traffic.
Before you start reading this article, please ensure you have a fair understanding of the NSX-T routing architecture and how the SR & DR components of a logical router work together. Knowledge of TEP/MAC/ARP table formation is also handy when tracing packet flow in a lab or production environment.
Here is the lab topology that I am going to use to demonstrate N-S packet walk.
Note: The topology below is a single-tier routing topology.
Egress to Physical Network
Here is how a packet traverses the network when VM 1, which is on the App-LS logical segment, tries to communicate with VM 2, which sits on the physical network.
Step 1: VM 1 sends a Layer 2 frame to its default gateway (192.168.10.1), which is a LIF on the DR component on the hypervisor node.… Read More
In my last post on the NSX-T series, I explained how the VTEP, MAC & ARP tables are constructed. This knowledge is needed to understand packet flow.
In this post, I will demonstrate how packet forwarding is performed for East-West traffic.
NSX-T has an inbuilt tool called Traceflow, which is very handy when analyzing packet flow within/across segments. This tool is located under Plan & Troubleshoot > Troubleshooting tools > Traceflow in NSX-T UI.
This tool is very easy to use: you just need to select the source VM & destination VM and click Trace to start the packet flow analysis.
There are 2 deployment models available for T1 gateways: we can instantiate the T1 gateway on an edge cluster, or we can choose not to associate it with any edge cluster. If we need stateful services on the T1 gateway, we go with the first deployment model.
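To illustrate the two models, here is a rough Policy API sketch as I understand it: the first call creates a T1 with no edge cluster (distributed-only), and the second, optional call adds a locale-services entry pointing at an edge cluster for the stateful-services model. The manager address, credentials, T0/T1 names, and the edge cluster UUID are placeholders.

```python
import requests
import urllib3
urllib3.disable_warnings()

NSX = "https://nsx-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials
T1_ID = "Tenant-A-T1"                   # placeholder Tier-1 ID

# Deployment model 2: T1 with no edge cluster association (distributed routing only).
t1 = {"display_name": T1_ID, "tier0_path": "/infra/tier-0s/T0-GW"}  # placeholder T0 path
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{T1_ID}",
               json=t1, auth=AUTH, verify=False).raise_for_status()

# Deployment model 1: associate the T1 with an edge cluster so an SR component is
# instantiated and stateful services (NAT, gateway firewall, etc.) become possible.
locale = {
    "edge_cluster_path":
        "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-uuid>"
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{T1_ID}/locale-services/default",
               json=locale, auth=AUTH, verify=False).raise_for_status()
```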
In part 1 of this post, I will demonstrate the packet walk when the T1 is associated with an edge cluster.… Read More
In this post, I will explain how NSX-T creates and maintains the various tables that form the building blocks of logical switching. I will discuss the formation of the following tables:
VTEP Table
MAC Table
ARP Table
These tables are continuously updated and modified as we provision new workloads and create new segments.
VTEP Table
This table holds the VNI to TEP IP mapping. A couple of points before we start:
Each segment has a unique identifier called VNI.
Each transport node in that TZ will have a TEP IP.
Let's understand TEP table creation with the help of the diagram below.
Step 1: As soon as a segment is created in a TZ, every transport node in that TZ updates its local TEP table and registers the VNI of the created segment against its TEP IP. Each transport node then sends this info to its Local Control Plane (LCP).
Note: The VTEP table can be viewed by logging into the ESXi host and running the command: get logical-switch <ls-uuid> vtep-table
Step 2: Each transport node then sends its VNI-TEP entry from its LCP to the CCP (running on NSX-T Manager).… Read More
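To cross-check the TEP information from the management plane, the transport node state can also be queried via the API. The sketch below is an illustration only: the manager address and credentials are placeholders, and the field names (host_switch_states, endpoints) reflect my reading of the 3.0 Manager API.

```python
import requests
import urllib3
urllib3.disable_warnings()

NSX = "https://nsx-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

# List all transport nodes, then print the TEP IPs each node reports in its state.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
for node in nodes:
    state = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
    teps = [ep.get("ip")
            for hs in state.get("host_switch_states", [])
            for ep in hs.get("endpoints", [])]
    print(node.get("display_name", node["id"]), "TEPs:", teps)
```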