NSX-T Tier-0 Gateway Inter-SR Routing Deep Dive

In my last post, I briefly talked about the transit subnets that get created when a T1 gateway is attached to a T0 gateway. In this post we will dig into how the SR components that get deployed when we set up Logical Routing in NSX-T actually work.

In this post we will learn about the following:

  • Inter-SR Architecture
  • How to Enable Inter-SR routing
  • Ingress/Egress traffic patterns
  • Failure scenarios & remediation when an edge node loses northbound connectivity to the upstream router

If you are new to NSX-T, then I would recommend reading my NSX-T series from the links below:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

What is Tier-0 Inter-SR Routing?

A Tier-0 gateway in active-active mode supports inter-SR iBGP. In active-active mode, the SR components form an internal connection with each other over a pre-defined, NSX-managed subnet 169.254.0.x/25. Read More
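A quick way to see this peering from the data plane is the Edge node CLI. Below is a minimal sketch; the edge hostname and the VRF number are assumptions from my lab, so use the IDs that get logical-routers shows in your environment:

nsx-edge-01> get logical-routers
nsx-edge-01> vrf 1
nsx-edge-01(tier0_sr)> get bgp neighbor summary

The first command lists the SR/DR instances and their VRF IDs, the second drops you into the Tier-0 SR context, and the last one should show the other edge's SR as an iBGP neighbor on the 169.254.0.x link.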

BGP Route Filtering in NSX-T

In the last post of my NSX-T 3.0 series, I briefly talked about the Route Redistribution feature. In this post I will try to explain it in more detail. We will learn when this feature should be used and when it should not.

If you have missed my NSX-T 3.0 series, here are the links to the same:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

When a Tier-1 GW is attached to a Tier-0 GW, a router link between the two gateways is created automatically. You can think of this link as a transit segment that connects the T1 GW with the T0.

The default address space assigned to this transit subnet is 100.64.0.0/16. The router ports on the T0 & T1 get the IP addresses 100.64.0.0/31 & 100.64.0.1/31 respectively.
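If you want to see these router-link addresses from the data plane, they show up as interfaces on the Tier-0 DR on the Edge node. A minimal sketch, assuming an edge named nsx-edge-01 (the VRF number is also an assumption; take it from the get logical-routers output):

nsx-edge-01> get logical-routers
nsx-edge-01> vrf 2
nsx-edge-01(tier0_dr)> get interfaces

Among the interfaces you should find the router-link (downlink) port carrying an address from the 100.64.0.0/16 range.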


A Tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. Read More

NSX-T 3.0 Series: Part 5-Configure Logical Routing

In the last post of this series, we learned about transport nodes and how to set up the data plane. Now my NSX-T environment is ready for setting up logical routing, so packets can finally start flowing across the network.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

Let’s get started. 

What is Logical Routing?

NSX logical routing enables us to connect virtual and physical endpoints that are located in different logical Layer 2 networks. This is made possible by the separation of the physical network infrastructure from the logical networks that network virtualization provides.

Logical routing is provided by Logical Routers that get created when we configure routing: the distributed router (DR) component is instantiated on every transport node, while the services router (SR) component runs on the Edge Nodes. Logical Routers are responsible for handling East-West & North-South traffic across the datacenter. Read More
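Because the DR portion is distributed, you can list the same router instances not just on the edges but also from a prepared ESXi host using nsxcli. A minimal, hedged sketch (the host name is illustrative):

[root@esxi01:~] nsxcli
esxi01> get logical-routers

The output lists the distributed router instances that the host knows about, which is a handy sanity check that east-west routing really is happening on the hypervisors.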

NSX-T 3.0 Series: Part 4-Data Plane Setup

In the last post of this series, we learned about Transport Zones and why we need them. We also discussed Transport Node profiles and created a TN profile and a couple of Transport Zones.

This post focuses on the components involved in the data plane and how to configure them in NSX-T.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

Let’s get started.

What is meant by Data Plane in NSX-T?

The data plane is where all packet forwarding takes place, based on tables created by the control plane. Packet-level statistics are maintained here, as well as topology information, which is reported from the data plane up to the control plane.

The data plane in NSX-T comprises two components: the Hosts and the Edge nodes. Read More
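A quick way to confirm the data plane pieces on a host is to check for the NSX VIBs that host preparation pushes and for the TEP vmkernel interface. A minimal sketch (vmk10 as the TEP interface is an assumption; the interface number can differ per environment):

[root@esxi01:~] esxcli software vib list | grep nsx
[root@esxi01:~] esxcfg-vmknic -l | grep vmk10

The first command lists the NSX kernel module VIBs installed on the host, and the second shows the tunnel endpoint vmkernel interface that carries the overlay (Geneve) traffic.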

NSX-T 3.0 Series:Part 3- Transport Zones & Transport Node Profiles

In the last post of this series, we learned about uplink profiles and some design considerations for configuring them. In this post we will learn about Transport Zones and Transport Node Profiles, and I will walk through the steps of configuring them.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

Let’s get started.

What is a Transport Zone?

A transport zone is a logical container that controls which hosts/VMs can participate in a particular network by limiting which logical switches a host can see.

Segments, aka logical switches, are attached to a transport zone when created. One logical switch can be attached to only one transport zone. So a host/cluster that is part of transport zone X, to which logical segment Y is attached, will be able to see that segment. Read More
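Transport zones can also be pulled programmatically from the NSX-T Manager API if you want to script checks around them. A minimal sketch with curl (the manager FQDN is a placeholder and the admin account will prompt for its password):

curl -k -u admin 'https://nsx-mgr.lab.local/api/v1/transport-zones'

The response is a JSON list of transport zones, including their transport type (OVERLAY or VLAN) and the host switch name they are tied to.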

NSX-T 3.0 Series:Part 2-Uplink Profiles

In the first post of this series, we learned how to deploy NSX-T Managers to form the management & control plane. In this post we will learn about uplink profiles and their use cases.

What is an Uplink Profile?

An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches.

Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.

What settings do we define in an uplink profile?

The settings defined by uplink profiles include teaming policies, active/standby links, the transport VLAN ID (ESXi TEP VLAN), and the MTU setting.
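If you want to double-check what a given uplink profile actually applies, the profiles are exposed as host switch profiles in the Manager API. A minimal sketch with curl (the manager FQDN is a placeholder):

curl -k -u admin 'https://nsx-mgr.lab.local/api/v1/host-switch-profiles'

Each UplinkHostSwitchProfile entry in the response carries the teaming policy, the active/standby uplink names, the transport VLAN, and the MTU discussed above.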

Before diving deep into uplink profiles, let's first discuss the various teaming policies that are available with uplink profiles.

There are 3 teaming policies that can be configured while creating an uplink profile:

  • Failover Order: In this policy we specify an active uplink along with an optional list of standby uplinks.
Read More

NSX-T 3.0 Series:Part 1-Management & Control Plane Setup

NSX-T, since its inception, has gained a lot of momentum in just a couple of years and can easily be considered VMware’s next-generation product for multi-hypervisor environments, container deployments, and native workloads running in public cloud environments. NSX-T truly provides a scalable network virtualization and micro-segmentation platform.

This blog series is focused more on the implementation of NSX-T rather than theoretical concepts. If you are new to NSX-T, I would highly recommend reading VMware’s official documentation.

The first post of this series is focused on deploying the NSX-T Managers, which form the management & control plane, so it’s a good idea to have an understanding of the NSX-T architecture before going ahead.

NSX-T Manager can be deployed in the following form factors:

Note: The current version of NSX-T is 3.0.1 and can be downloaded from Here.

In my lab I have a 4-node vSAN cluster with vSphere 7 installed. All my hosts are equipped with 2 x 10 Gb physical NICs. Read More

VMware Cloud Director-What’s New-NSX-T UI Enhancements

With the release of VMware Cloud Director (previously vCloud Director), a lot of NSX-T related UI enhancements have been added. In this post I will walk through some of them.

Dedicated External Networks

With Cloud Director 10.1, an edge gateway can be provisioned with a dedicated external network. In this configuration, there is a one-to-one relationship between the external network and the edge gateway, and no other edge gateways can connect to this external network.

Note: The provider creates a T0 gateway within NSX-T and adds it to Cloud Director as an external network. Once the T0 is added, the provider can convert an existing org gateway (T1) to use this new dedicated T0, or create a new org gateway with the Dedicated External Network option selected.

BGP and Route Advertisement

BGP peering & Route Advertisement functionalities have been added to the Edge Gateway UI.

Route Advertisement

You can decide which of the network subnets attached to the org gateway will be advertised to the dedicated external network. Read More

Troubleshooting NSX Host Preparation Error “Agency Already Exist For Cluster”

Yesterday, while setting up my lab for an NSX-V deployment, I encountered an issue with host preparation; it failed with the error “Agency 3d62d2da-5e93-4f57-a87c-063a7af3be28 already exist for cluster Mgmt-Cluster. Delete this agency from EAM database”.

In the past I had NSX-V configured in my cluster; some time back I uninstalled the NSX-V components, played with NSX-T for a while, and later uninstalled NSX-T as well. I guess the uninstall was not clean and left a lingering item behind in the EAM database.

To verify this, I navigated to Administration > vCenter Server Extensions > vSphere ESX Agent Manager > Configure tab and could see the stale agency related to the NSX fabric there. The error message was loud and clear that the host needed to be put into maintenance mode to complete the VIB installation.

I checked for the presence of the NSX VIB on the host and found it was already there (from the old installation):

[root@mgmt-esxi03:~] esxcli software vib list | grep nsx
esx-nsxv 6.5.0-0.0.11070622 VMware VMwareCertified 2019-02-02
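Since the stale VIB was still sitting on the host, one possible cleanup path (an assumption on my part; put the host into maintenance mode first) would be to remove it manually:

[root@mgmt-esxi03:~] esxcli software vib remove -n esx-nsxv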

I tried deleting the offending agency by clicking on the three vertical dots and selecting Delete Agency, but it did not help me.… Read More

NSX Guest Introspection: Components & Configuration

What is NSX Guest Introspection?

VMware NSX Guest Introspection is a security feature which, when enabled, offloads antivirus and anti-malware agent processing to dedicated virtual appliances (service VMs).

When Guest Introspection is enabled on a cluster, it continuously updates antivirus signatures, thus giving uninterrupted protection to the virtual machines running in that cluster. New virtual machines that are created (or existing virtual machines that were offline) are immediately protected with the most current antivirus signatures when they come online.

Components of NSX Guest Introspection

The main components of Guest Introspection are: 

1: Guest VM Thin Agent: This is installed as part of the VMware Tools drivers. It intercepts guest VM file/OS events and passes them to the ESXi host.

2: MUX Module: When Guest Introspection is installed on a cluster, NSX installs a new VIB (epsec-mux) on each host in that cluster. The VIB is responsible for receiving messages from the Thin Agent running in the guest VMs and passing the information to the Service Virtual Machine via a TCP session. Read More
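If you want to confirm that the MUX module actually landed on a host, you can look for the epsec-mux VIB from the ESXi shell. A minimal check (the host prompt is illustrative):

[root@esxi01:~] esxcli software vib list | grep epsec

If Guest Introspection has been installed on the cluster, the epsec-mux VIB should show up in the output.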