NSX-T: Multi-Tier Routing Architecture

In my last post I discussed the single-tier routing architecture and demonstrated how a T0 gateway can handle both East-West and North-South routing. In this post I will explain the two-tier (aka multi-tier) routing architecture.

If you are new to NSX-T, I would recommend reading the previous blog posts from my NSX-T 3.0 series to gain some background.

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

Introduction

The two-tier architecture is the most common deployment model in production environments. It lays the foundation for multi-tenancy by separating the T0 gateway (provider construct) from the T1 gateway (tenant construct).

In a multi-tenant environment, it's the service provider who takes care of deploying and configuring the T0 gateway. Tenants are responsible for creating and managing their respective T1 gateways.
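To make the provider/tenant split concrete, below is a minimal sketch (one of several possible approaches) that uses the NSX-T Policy REST API to create a tenant T1 gateway and attach it to an existing provider T0. The manager FQDN, credentials, and gateway names are hypothetical placeholders for your own environment.

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")        # lab credentials, replace with your own

# Create (or update) a tenant Tier-1 gateway and attach it to the
# provider-managed Tier-0 via its policy path.
t1_body = {
    "display_name": "tenant-a-t1",
    "tier0_path": "/infra/tier-0s/provider-t0",   # assumes this T0 already exists
    "route_advertisement_types": ["TIER1_CONNECTED"],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-1s/tenant-a-t1",
    json=t1_body,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print("Tier-1 gateway created/updated")
```

Since the Policy API is declarative, re-running the same PATCH is harmless; it simply converges the gateway to the desired state.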

Logical Routing Connectivity

Let’s do a quick recap of the components of T0 & T1 gateways and how they interact with each other. Read More

NSX-T: Single-Tier Routing Architecture

In my NSX-T 3.0 series, I wrote an article on setting up logical routing so that traffic can start flowing through the SDDC.

If you have missed my NSX-T 3.0 series, here are the links:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s do a quick recap on routing capabilities in NSX-T. 

Logical routing in NSX-T provides connectivity for both virtual and physical workloads that sit in different logical L2 networks. Workloads connect to each other via segments, and these segments can in turn attach to a T0/T1 gateway for East-West and North-South communication.
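As a quick illustration of that wiring, here is a hedged sketch that creates a segment via the Policy API and attaches it to a T1 gateway; the transport zone path, names, and subnet are placeholder values.

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Attach a segment to an existing Tier-1 gateway; workloads on this
# segment get 172.16.10.1 as their default gateway.
segment_body = {
    "display_name": "web-segment",
    "connectivity_path": "/infra/tier-1s/tenant-a-t1",   # assumed existing T1
    "transport_zone_path": "/infra/sites/default/enforcement-points/"
                           "default/transport-zones/<overlay-tz-uuid>",  # fill in your TZ
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/web-segment",
    json=segment_body, auth=AUTH, verify=False,  # lab only
)
resp.raise_for_status()
```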

T0/T1 gateways have a Service Router (SR) and a Distributed Router (DR) component. The DR component is embedded at the hypervisor level and ‘abstracted’ from the underlying physical network. Read More

NSX-T Tier-0 Gateway Inter-SR Routing Deep Dive

In my last post I briefly talked about the transit subnets that get created when a T1 gateway is attached to a T0 gateway. In this post we will take an in-depth look at the workings of the SR components that get deployed when we set up logical routing in NSX-T.

In this post we will learn about the following:

  • Inter-SR Architecture
  • How to Enable Inter-SR routing
  • Ingress/Egress traffic patterns
  • Failure scenarios & remediation when an edge node loses northbound connectivity with the upstream router

If you are new to NSX-T, then I would recommend reading my NSX-T series via the links below:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

What is Tier-0 Inter-SR Routing?

A Tier-0 gateway in active-active mode supports inter-SR iBGP. In active-active mode, the SR components form an internal connection with each other over a pre-defined, NSX-managed subnet 169.254.0.X/25. Read More
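As a supplement to the excerpt above: if you drive NSX-T through the Policy API, inter-SR iBGP is just a flag on the Tier-0's BGP configuration. A minimal sketch, assuming a T0 named provider-t0 with a locale-services instance called default (both placeholders):

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Enable inter-SR iBGP on an active-active Tier-0. The locale-services
# id is commonly "default", but check your own environment.
bgp_body = {
    "enabled": True,
    "local_as_num": "65001",   # example ASN
    "inter_sr_ibgp": True,
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/provider-t0"
    "/locale-services/default/bgp",
    json=bgp_body, auth=AUTH, verify=False,  # lab only
)
resp.raise_for_status()
```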

BGP Route Filtering in NSX-T

In the last post of my NSX-T 3.0 series, I briefly talked about the route redistribution feature. In this post I will explain it in more detail: we will learn when this feature should be used and when it should not.

If you have missed my NSX-T 3.0 series, here are the links:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

When a Tier-1 gateway is attached to a Tier-0 gateway, a router link between the two gateways is created automatically. You can think of this link as a transit segment that connects the T1 gateway with the T0.

The default address space assigned to this transit subnet is 100.64.0.0/16. The router ports on the T0 and T1 get the IP addresses 100.64.0.0/31 and 100.64.0.1/31 respectively.
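If the 100.64.0.0/16 default clashes with something in your environment, it can be overridden per Tier-0 via its transit_subnets property. A hedged sketch with placeholder names; this is best done at deployment time, before T1 gateways are attached:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Override the default T0-T1 transit address space. Each attached T1
# still consumes a /31 pair out of this range.
resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/provider-t0",
    json={"transit_subnets": ["100.65.0.0/16"]},   # example replacement range
    auth=AUTH, verify=False,  # lab only
)
resp.raise_for_status()
```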


A Tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. Read More

NSX-T 3.0 Series: Part 5-Configure Logical Routing

In the last post of this series, we learned about transport nodes and how to set up the data plane. Now my NSX-T environment is ready for logical routing, so packets can eventually start flowing across the network.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

Let’s get started. 

What is Logical Routing?

NSX logical routing enables us to connect both virtual and physical endpoints located in different logical Layer 2 networks. This is made possible by the separation of the physical network infrastructure from the logical networks, which network virtualization provides.

Logical routing is provided by logical routers, which get created on edge nodes when we configure routing. Logical routers are responsible for handling East-West and North-South traffic across the datacenter. Read More
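For reference, creating the Tier-0 itself and pointing it at an edge cluster takes two declarative Policy API calls. A minimal sketch with hypothetical names; the edge cluster UUID placeholder must come from your own setup:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Create an active-active Tier-0; its SR components will be realized
# on the edge cluster referenced by the locale-services below.
requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/provider-t0",
    json={"display_name": "provider-t0", "ha_mode": "ACTIVE_ACTIVE"},
    auth=AUTH, verify=False,  # lab only
).raise_for_status()

requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/provider-t0/locale-services/default",
    json={"edge_cluster_path": "/infra/sites/default/enforcement-points/"
                               "default/edge-clusters/<edge-cluster-uuid>"},
    auth=AUTH, verify=False,
).raise_for_status()
```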

NSX-T 3.0 Series: Part 4-Data Plane Setup

In the last post of this series, we learned about transport zones and why we need them. We also discussed transport node profiles and created a TN profile and a couple of transport zones.

This post focuses on the components involved in the data plane and how to configure them in NSX-T.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

Let’s get started.

What is meant by Data Plane in NSX-T?

The data plane is where all packet forwarding takes place, based on tables created by the control plane. Packet-level stats are maintained here, as well as topology info, which is reported from the data plane up to the control plane.

The data plane in NSX-T comprises two components: hosts and edge nodes. Read More
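A quick way to see both data-plane components side by side is to list the transport nodes over the Manager API. A small sketch with placeholder manager and credentials; the field names are as I recall them from NSX-T 3.0, so verify against your version:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# List all transport nodes; both hosts and edges show up here.
resp = requests.get(f"{NSX_MGR}/api/v1/transport-nodes", auth=AUTH, verify=False)
resp.raise_for_status()

for node in resp.json()["results"]:
    # resource_type is typically "HostNode" or "EdgeNode" (assumed field layout)
    kind = node.get("node_deployment_info", {}).get("resource_type", "unknown")
    print(f"{node['display_name']:30} {kind}")
```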

NSX-T 3.0 Series:Part 3- Transport Zones & Transport Node Profiles

In the last post of this series, we learned about uplink profiles and some design considerations for configuring them. In this post we will learn about transport zones and transport node profiles, and I will walk through the steps to configure them.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

Let’s get started.

What is a Transport Zone?

A transport zone is a logical container that controls which hosts/VMs can participate in a particular network by limiting which logical switches a host can see.

Segments (aka logical switches), when created, are attached to a transport zone. One logical switch can be attached to only one transport zone. So a host/cluster that is part of transport zone X, where logical segment Y is attached, will be able to see that segment. Read More
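To check which transport zones exist in your environment (and therefore which segments a host could see), here is a short Manager API sketch with placeholder manager and credentials:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# List transport zones with their traffic type (OVERLAY or VLAN).
resp = requests.get(f"{NSX_MGR}/api/v1/transport-zones", auth=AUTH, verify=False)
resp.raise_for_status()

for tz in resp.json()["results"]:
    print(tz["display_name"], tz.get("transport_type"))
```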

NSX-T 3.0 Series:Part 2-Uplink Profiles

In the first post of this series, we learned how to deploy NSX-T Managers to form the management & control plane. In this post we will learn about uplink profiles and their use cases.

What is an Uplink Profile?

An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches.

Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.

What settings do we define in an uplink profile?

The settings defined by uplink profiles include teaming policies, active/standby links, transport VLAN ID (ESXi TEP VLAN) and the MTU setting.
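Those settings map more or less one-to-one onto the UplinkHostSwitchProfile object in the Manager API. Below is a hedged sketch creating a failover-order profile; the uplink names, TEP VLAN, and MTU are example values:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Uplink profile: failover-order teaming, TEP VLAN 120, MTU 1700.
profile_body = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 120,
    "mtu": 1700,
}

resp = requests.post(
    f"{NSX_MGR}/api/v1/host-switch-profiles",
    json=profile_body, auth=AUTH, verify=False,  # lab only
)
resp.raise_for_status()
```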

Before diving deep into uplink profiles, let's first discuss the various teaming policies available.

There are 3 teaming policies that can be configured while creating an uplink profile:

  • Failover Order: In this policy we specify one active uplink and one standby uplink.
Read More

NSX-T 3.0 Series:Part 1-Management & Control Plane Setup

NSX-T has gained a lot of momentum in just a couple of years since its birth, and can easily be considered VMware's next-generation product for multi-hypervisor environments, container deployments, and native workloads running in public cloud environments. NSX-T truly provides a scalable network virtualization and micro-segmentation platform.

This blog series is focused more on the implementation of NSX-T than on theoretical concepts. If you are new to NSX-T, I would highly recommend reading VMware’s official documentation.

The first post of this series is focused on deploying NSX-T Managers, which form the management & control plane, so it's a good idea to have an understanding of the NSX-T architecture before going ahead.

NSX-T Manager can be deployed in the following form factors:

[Image: NSX-T Manager form factors]

Note: The current version of NSX-T is 3.0.1 and can be downloaded from here.

In my lab I have a 4-node vSAN cluster with vSphere 7 installed. All my hosts are equipped with two 10 Gb physical NICs. Read More

VDS Profiles in VCF for Multi-VDS SDDC Bringup

Last week I tried my hand at the latest release of VMware Cloud Foundation (4.0.1) and came across a cool feature that lets us bring up an SDDC with multiple VDSes and multiple NICs (more than 2) for traffic separation. This is one of the most requested features from VCF customers, and it is finally available.

What is a VDS Profile, and what problem does it solve?

The VCF configuration workbook now has a new configuration setting called “vSphere Distributed Switch Profile”, available under the Hosts & Networks tab.

VDS profiles allow you to deploy an SDDC with a custom VDS design. In earlier versions of VCF, when you did an SDDC bringup, no matter how many physical NICs your servers had, only 2 of them were utilized during bringup. The additional NICs just lay there going to waste.

Imagine you are a cloud service provider and you have invested heavily in servers with 4 or 6 NICs. Read More