NSX-T Tier-0 Gateway Inter-SR Routing Deep Dive

In my last post, I briefly talked about the transit subnets that get created when a Tier-1 gateway is attached to a Tier-0 gateway. In this post we will take an in-depth look at the SR components that get deployed when we set up logical routing in NSX-T.

In this post we will learn about the following:

  • Inter-SR Architecture
  • How to Enable Inter-SR routing
  • Ingress/Egress traffic patterns
  • Failure scenarios & remediation when an edge node loses northbound connectivity with the upstream router

If you are new to NSX-T, then I would recommend reading my NSX-T series from the links below:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

What is Tier-0 Inter-SR Routing?

A Tier-0 gateway in active-active mode supports inter-SR iBGP. In active-active mode, the SR components form an internal connection with each other over a pre-defined, NSX-managed subnet, 169.254.0.X/25. Read More
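To get a feel for the address math behind that inter-SR subnet, here is a small sketch using Python's `ipaddress` module. The exact subnet and per-SR addresses are assigned by NSX itself; `169.254.0.0/25` and the specific SR addresses below are illustrative assumptions, not values pulled from a live deployment.

```python
import ipaddress

# The inter-SR transit subnet is NSX-managed; 169.254.0.0/25 is used here
# purely to illustrate the address math (NSX picks the actual subnet).
inter_sr = ipaddress.ip_network("169.254.0.0/25")

# A /25 leaves 7 host bits: 128 addresses, 126 of them usable.
print(inter_sr.num_addresses)   # 128
usable = list(inter_sr.hosts())
print(len(usable))              # 126

# With a two-edge-node T0 in active-active mode, each SR would draw one
# address from this range for its inter-SR iBGP peering, e.g.:
sr0, sr1 = usable[0], usable[1]
print(sr0, sr1)                 # 169.254.0.1 169.254.0.2
```

This also shows why a /25 is more than enough headroom: even a fully scaled-out active-active T0 needs only one address per SR.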

BGP Route Filtering in NSX-T

In the last post of my NSX-T 3.0 series, I briefly talked about the route redistribution feature. In this post I will try to explain it in more detail. We will learn when this feature should be used and when it should not.

If you have missed my NSX-T 3.0 series, here are the links to the same:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

5: Configure Logical Routing in NSX-T

Let’s get started.

When a Tier-1 GW is attached to a Tier-0 GW, a router link between the two gateways is created automatically. You can think of this link as a transit segment that connects the T1 GW with the T0.

The default address space assigned to this transit subnet is 100.64.0.0/16. The router ports on the T0 and T1 get the IP addresses 100.64.0.0/31 and 100.64.0.1/31 respectively.
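The /31 addressing above can be reproduced with Python's `ipaddress` module. This is only a sketch of the subnet arithmetic (how many /31 point-to-point links fit in the /16, and which two addresses the first link yields), not how NSX internally allocates them.

```python
import ipaddress

# Each T1-to-T0 router link is a /31 point-to-point subnet carved out of
# 100.64.0.0/16: the T0 port takes the even address, the T1 port the odd
# one (100.64.0.0 and 100.64.0.1 for the first link).
transit_space = ipaddress.ip_network("100.64.0.0/16")

links = list(transit_space.subnets(new_prefix=31))
print(len(links))                # 32768 possible /31 links in the /16

first = links[0]
t0_port, t1_port = list(first)   # a /31 has exactly two usable addresses
print(t0_port, t1_port)          # 100.64.0.0 100.64.0.1
```

Using /31s (RFC 3021 style) rather than /30s doubles the number of T1 gateways that can be attached from the same transit space.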


A Tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. Read More

NSX-T 3.0 Series: Part 5-Configure Logical Routing

In the last post of this series, we learned about transport nodes and how to set up the data plane. My NSX-T environment is now ready for logical routing, after which packets can start flowing across the network.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

4: NSX-T Data Plane Setup

Let’s get started. 

What is Logical Routing?

NSX logical routing enables us to connect both virtual and physical endpoints that are located in different logical Layer 2 networks. This is made possible by the decoupling of the physical network infrastructure from the logical networks that network virtualization provides.

Logical routing is provided by logical routers, which are created on edge nodes when we configure routing. Logical routers are responsible for handling both east-west and north-south traffic across the datacenter. Read More

NSX-T 3.0 Series: Part 4-Data Plane Setup

In the last post of this series, we learned about transport zones and why we need them. We also discussed transport node profiles, and created a TN profile and a couple of transport zones.

This post focuses on the components involved in the data plane and how to configure them in NSX-T.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

3: Transport Zones & Transport Node Profiles

Let’s get started.

What is meant by Data Plane in NSX-T?

The data plane is where all packet forwarding takes place, based on tables created by the control plane. Packet-level statistics are maintained here, as well as topology information, which is reported from the data plane up to the control plane.

The data plane in NSX-T comprises two components: host transport nodes and edge nodes. Read More

NSX-T 3.0 Series:Part 3- Transport Zones & Transport Node Profiles

In the last post of this series, we learned about uplink profiles and some design considerations for configuring them. In this post we will learn about transport zones and transport node profiles, and I will walk through the steps to configure them.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this blog series:

1: NSX-T Management & Control Plane Setup

2: Uplink Profiles in NSX-T

Let’s get started.

What is Transport Zone?

A transport zone is a logical container that controls which hosts/VMs can participate in a particular network by limiting the logical switches that a host can see.

Segments (aka logical switches), when created, are attached to a transport zone. A logical switch can be attached to only one transport zone, so only the hosts/clusters that are part of transport zone X, to which segment Y is attached, will be able to see that segment. Read More
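The visibility rule described above can be captured in a few lines of code. This is a toy model of the concept, not an NSX API; the zone, host, and segment names are made up for illustration.

```python
# A toy model (not an NSX API) of the transport-zone visibility rule:
# a host sees a segment only if both belong to the same transport zone.
tz_members = {
    "overlay-tz": {"esxi-01", "esxi-02"},
    "vlan-tz": {"edge-01"},
}
segment_tz = {
    "web-segment": "overlay-tz",
    "uplink-segment": "vlan-tz",
}

def host_sees_segment(host: str, segment: str) -> bool:
    """A host sees a segment only if it is in the segment's transport zone."""
    return host in tz_members[segment_tz[segment]]

print(host_sees_segment("esxi-01", "web-segment"))      # True
print(host_sees_segment("esxi-01", "uplink-segment"))   # False
```

The second call returns False because `esxi-01` is not a member of `vlan-tz`, which is exactly why an overlay-only host never sees VLAN-backed uplink segments.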

NSX-T 3.0 Series:Part 2-Uplink Profiles

In the first post of this series, we learned how to deploy NSX-T Managers to form the management & control plane. In this post we will learn about uplink profiles and their use cases.

What is an Uplink Profile?

An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches.

Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.

What settings do we define in an uplink profile?

The settings defined by uplink profiles include teaming policies, active/standby links, the transport VLAN ID (ESXi TEP VLAN), and the MTU setting.

Before diving deep into uplink profiles, let's first discuss the various teaming policies that are available with uplink profiles.

There are 3 teaming policies that can be configured while creating an uplink profile:

  • Failover Order: In this policy, we specify one active uplink and one standby uplink.
Read More
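To make the uplink-profile settings above concrete, here is a sketch of the JSON body you would send to the NSX-T manager API (`POST /api/v1/host-switch-profiles`) to create a profile with a failover-order teaming policy. The display name, uplink names, VLAN, and MTU are example values; verify the schema against the API documentation for your NSX-T version.

```python
import json

# Sketch of an UplinkHostSwitchProfile body for the NSX-T manager API.
# Values here (names, VLAN 130, MTU 1600) are illustrative assumptions.
profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 130,  # ESXi TEP VLAN (example value)
    "mtu": 1600,            # Geneve overlay traffic needs MTU >= 1600
}

print(json.dumps(profile, indent=2))
```

Note how the three settings discussed in this post map directly onto the payload: the teaming policy and active/standby links under `teaming`, the TEP VLAN under `transport_vlan`, and the MTU under `mtu`.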

NSX-T 3.0 Series:Part 1-Management & Control Plane Setup

Since its birth, NSX-T has gained a lot of momentum in just a couple of years, and it can easily be considered VMware's next-generation product for multi-hypervisor environments, container deployments, and native workloads running in public cloud environments. NSX-T truly provides a scalable network virtualization and micro-segmentation platform.

This blog series focuses more on the implementation of NSX-T than on theoretical concepts. If you are new to NSX-T, I would highly recommend reading VMware's official documentation.

The first post of this series focuses on deploying the NSX-T Managers, which form the management & control plane, so it's a good idea to have an understanding of the NSX-T architecture before going ahead.

NSX-T Manager can be deployed in the following form factors:

[Image: NSX-T Manager form factors]

Note: The current version of NSX-T is 3.0.1 and can be downloaded from Here

In my lab I have a 4-node vSAN cluster running vSphere 7. All my hosts are equipped with two 10 Gb physical NICs. Read More

VDS Profiles in VCF for Multi-VDS SDDC Bringup

Last week I tried my hands on the latest release of VMware Cloud Foundation (4.0.1) and came across a cool feature that lets us bring up an SDDC with multiple VDSes and multiple NICs (more than two) for traffic separation. This is one of the most requested features from VCF customers, and it's finally available.

What is a VDS Profile, and what problem does it solve?

The VCF configuration workbook now has a new configuration setting called “vSphere Distributed Switch Profile”, available under the Hosts & Networks tab.

VDS profiles allow you to deploy an SDDC with a custom VDS design. In earlier versions of VCF, no matter how many physical NICs your servers had, only two of them were utilized during SDDC bringup. The additional NICs were simply going to waste.

Imagine you are a cloud service provider, and you have invested heavily in servers with 4 or 6 NICs. Read More

VCD Container Service Extension Series-Part 4: Tenant Onboarding & K8 Cluster Deployment

In the last post of this series, we learned how to install and integrate the CSE plugin with VCD for easier management of Kubernetes clusters. In this post we will learn how tenants can leverage the CSE plugin to deploy K8s clusters.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this series.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

3: CSE Plugin Integration With VCD

Onboarding Tenants

Before a tenant can start provisioning K8s clusters from the CLI or the UI (via the CSE plugin), we need to enable the tenant to do so. This can be done directly from the CSE server, or from any machine where the vcd-cli utility is installed. To onboard a tenant, use the following commands:

Note: These commands need to be run as a VCD system administrator.

# vcd login vcd.vstellar.local system admin -iw

# vcd right add -o <org-name> “{cse}:CSE NATIVE DEPLOY RIGHT”

Example: # vcd right add -o cse_org “{cse}:CSE NATIVE DEPLOY RIGHT”

Rights added to the Org ‘cse_org’

Note: At this point, if we run the command vcd cse ovdc list, it will show that no K8s provider has been configured for the tenants. Read More
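If you have many tenant orgs to onboard, the right-assignment command shown above can be scripted. The sketch below only assembles the vcd-cli argument lists; actually executing them (e.g. with `subprocess.run`) requires a live VCD and a system-admin login, and the org names used here are hypothetical.

```python
# Builds the vcd-cli onboarding commands shown above for a list of tenant
# orgs. It only assembles the argument lists; running them requires a live
# VCD session established with `vcd login ... system admin -iw`.
RIGHT = "{cse}:CSE NATIVE DEPLOY RIGHT"

def onboarding_commands(orgs):
    """Return one `vcd right add` command (as an argv list) per org."""
    return [["vcd", "right", "add", "-o", org, RIGHT] for org in orgs]

cmds = onboarding_commands(["cse_org", "tenant2"])
print(cmds[0])
# ['vcd', 'right', 'add', '-o', 'cse_org', '{cse}:CSE NATIVE DEPLOY RIGHT']
```

Keeping each command as an argv list (rather than a shell string) avoids quoting problems with the braces and spaces in the right name.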

VCD Container Service Extension Series-Part 3: CSE Plugin Integration With VCD

In the last post of this series, I explained how to set up the CSE server. In this post we will look at the steps to integrate the CSE plugin into VMware Cloud Director, so that tenants can spin up K8s clusters from the VCD portal.

If you have landed directly on this post by mistake, I would recommend reading previous articles from this series.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

The latest and greatest version of the CSE plugin can be downloaded from Here

CSE plugin installation is taken care of by the cloud provider. Post installation, the provider can choose to publish the plugin to all or specific tenants.

Log in to VCD as system admin and navigate to the Home > More > Plugins page.


Click the Upload button to start the wizard. Clicking Select Plugin File allows you to browse to the location where the plugin file was downloaded.


Select the publishing scope for the CSE plugin. The service provider can publish the plugin to all tenants or to a subset of them. Read More