NSX-T Federation-Part 4: Configure Stretched Networking

Welcome to the fourth part of the NSX Federation series. In the last post, I talked about configuring the local and global NSX-T managers to enable federation. In this post, I will show how we can leverage federation to configure stretched networking across sites.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

3: Configure Federation

NSX-T Federation Topology

Before diving into the lab, I want to do a quick recap of the lab topology that I will be building in this post.

The following components in my lab are already built out:

1: Cross Link Router: This router is responsible for facilitating communication between Site-A & Site-B SDDC/NSX.

  • Site-A ToR01/02 form a BGP neighborship with the Cross Link Router and advertise the necessary subnets to enable inter-site communication.
  • Site-B ToR01/02 also peer with the Cross Link Router over BGP and advertise their subnets.

NSX-T Federation-Part 3: Configure Federation

Welcome to the third post of the NSX Federation series. In part 1 of this series, I discussed the architecture of the NSX-T Federation, and part 2 focused on my lab walkthrough.

In this post, I will show how to configure federation in NSX-T.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

Let’s get started.

Federation Prerequisites

Before attempting to deploy and configure federation, you have to ensure that the following prerequisites are in place:

  • There must be a latency of 150 ms or less between sites (a quick way to verify this is sketched after this list).
  • Global Manager supports only Policy Mode; federation does not support Manager Mode.
  • The Global Manager and all Local Managers must have NSX-T 3.0 installed.
  • NSX-T Edge Clusters at each site must be configured with RTEP IPs.
  • Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use separate VLANs.
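The latency requirement is easy to sanity-check before you begin. The snippet below is a minimal sketch that shells out to ping from a Site-A Linux jump host and compares the average round-trip time to Site-B against the 150 ms limit; the Site-B hostname is a placeholder for your environment.

```python
# Quick inter-site latency sanity check (run from a Site-A Linux jump host).
import re
import subprocess

SITE_B_HOST = "siteb-jump.lab.local"   # placeholder, replace with a real Site-B address
MAX_RTT_MS = 150.0                     # federation requires 150 ms or less between sites

# Send 10 ICMP echo requests and capture the summary line from ping.
result = subprocess.run(
    ["ping", "-c", "10", SITE_B_HOST],
    capture_output=True, text=True, check=True,
)

# Linux ping summary looks like: rtt min/avg/max/mdev = 0.5/1.2/3.4/0.2 ms
match = re.search(r"= [\d.]+/([\d.]+)/", result.stdout)
avg_rtt = float(match.group(1))

if avg_rtt <= MAX_RTT_MS:
    print(f"OK: average RTT {avg_rtt} ms is within the {MAX_RTT_MS} ms limit")
else:
    print(f"WARNING: average RTT {avg_rtt} ms exceeds the {MAX_RTT_MS} ms limit")
```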

NSX-T Federation-Part 2: Lab Setup

In the first part of the NSX-T Federation series, I discussed the architecture and components of federation and also covered some use cases. In this post, I will explain my lab topology before diving into the NSX-T Federation configuration.

I am setting up a federation between 2 sites in my lab, and on both sites I have already deployed the following:

  • vSphere 7 (ESXi & vCenter).
  • vSAN for shared storage.
  • NSX-T 3.0 Manager.
  • NSX-T 3.0 Edges.

I have 4 ESXi hosts at each site, and each host has 4 physical NICs. All 4 NICs are connected to trunked ports on the ToR switches.

The networking architecture in my lab is a mix of VDS + N-VDS. 

2 NICs from each host carry regular datacenter traffic (Mgmt, vSAN & vMotion).

The other 2 NICs are connected to the N-VDS, which carries overlay traffic only.

For edge networking, I am using a multi-TEP, single N-VDS architecture.

NSX-T Federation-Part 1: Introduction & Architecture

NSX-T federation is one of the new features introduced in NSX-T 3.0, and it allows you to manage multiple NSX-T Data Center environments with a single pane of glass. Federation allows you to stretch NSX-T deployments over multiple sites and/or towards the public cloud while keeping a single pane of management. Federation can be compared with the Cross-vCenter feature of NSX-V, where universal objects span more than one site.

NSX-T Federation Components/Architecture/Topology

With the NSX-T Federation, the concept of a Global Manager (GM) is introduced, which enables a single pane of glass for all connected NSX-T managers. A global manager is an NSX-T manager deployed with the role “Global”. Objects created from the Global Manager console are called global objects and pushed to the connected local NSX-T Managers.

The diagram below shows the high-level architecture of the NSX-T Federation.

You can create networking constructs (T0/T1 gateways, segments, etc.) from the Global Manager that can be stretched across one or more locations.
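To make that concrete, here is a rough sketch of how a global segment could be created from the Global Manager Policy API using Python and requests. The /global-manager/api/v1/global-infra path follows the Global Manager's pattern for global objects, but the hostname, credentials, Tier-1 path, and payload fields are assumptions to validate against the NSX-T 3.0 API documentation for your environment.

```python
# Hedged sketch: create a global segment via the Global Manager Policy API.
# Hostname, credentials, paths and payload values below are illustrative assumptions.
import requests

GM = "https://gm.lab.local"                      # placeholder Global Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")             # placeholder credentials
SEGMENT_ID = "stretched-web-segment"             # our chosen policy ID for the segment

payload = {
    "display_name": "Stretched-Web",
    # Attaching the segment to a stretched Tier-1 gateway is what makes it
    # span the locations where that gateway is realized.
    "connectivity_path": "/global-infra/tier-1s/stretched-t1",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"{GM}/global-manager/api/v1/global-infra/segments/{SEGMENT_ID}",
    json=payload,
    auth=AUTH,
    verify=False,        # lab only: self-signed certificates
)
resp.raise_for_status()
print(f"Segment {SEGMENT_ID} created/updated: HTTP {resp.status_code}")
```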

HCX Service Mesh Operations via API

Before deploying a Service Mesh, we need to create Compute & Network Profiles in both the source & destination HCX environments. The order of Service Mesh deployment is as follows:

  1. Create Network Profiles in source & destination HCX.
  2. Create Compute Profiles in source & destination HCX.
  3. Trigger Service Mesh deployment from the on-prem (source) site (the ordering is sketched below).
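The order matters because the Compute Profiles reference the Network Profiles, and the Service Mesh pairs the Compute Profiles of the two sites. The sketch below only captures that dependency; the helper functions are hypothetical placeholders, not real HCX API calls.

```python
# Hypothetical placeholders (NOT real HCX API calls) showing the required ordering.

def create_network_profiles(site):
    """Step 1: network profiles must exist before a compute profile can reference them."""
    print(f"[{site}] creating network profiles (management, uplink, vMotion, replication)")

def create_compute_profile(site):
    """Step 2: the compute profile binds clusters/datastores to the network profiles."""
    print(f"[{site}] creating compute profile")

def create_service_mesh(source, destination):
    """Step 3: the service mesh pairs the two compute profiles and deploys the appliances."""
    print(f"[{source} -> {destination}] deploying service mesh")

# Profiles are created on both sides first; the mesh is triggered from the source site.
for site in ("source", "destination"):
    create_network_profiles(site)
    create_compute_profile(site)

create_service_mesh("source", "destination")
```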

In this post I will demonstrate HCX Service Mesh operations via API.

Some of the most common operations associated with a service mesh are:

  1. Create profiles (Network & Compute) and deploy service mesh.
  2. Update Network & Compute profiles to include/remove additional features.
  3. Delete Network & Compute profiles.
  4. Update Service Mesh to include/remove additional services.
  5. Delete Service Mesh. 

Let’s jump into the lab and look at these operations in action.

Network Profile API

1: List Network Profiles: This API call lists all the network profiles that you have created to be used in the service mesh.
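Since this is a simple GET, the call can also be scripted. Below is a minimal Python sketch using requests; the /hybridity/api/sessions and /hybridity/api/networks paths, the hostname, the credentials, and the response field names are assumptions from my lab, so verify them against the HCX API reference for your release.

```python
# Hedged sketch: authenticate to HCX Manager and list the network profiles.
# Endpoint paths, hostname, credentials and response fields are assumptions.
import requests

HCX = "https://hcx-ent.lab.local"                          # placeholder HCX Manager FQDN
CREDS = {"username": "admin@lab.local", "password": "VMware1!"}

# Authenticate: the session call is assumed to return an x-hm-authorization token header.
session = requests.post(f"{HCX}/hybridity/api/sessions", json=CREDS, verify=False)
session.raise_for_status()
token = session.headers["x-hm-authorization"]

headers = {"x-hm-authorization": token, "Accept": "application/json"}

# List the network profiles created for use in the service mesh.
resp = requests.get(f"{HCX}/hybridity/api/networks", headers=headers, verify=False)
resp.raise_for_status()

for profile in resp.json().get("items", []):
    # objectId here is the id of the backing network used in the profile.
    print(profile.get("name"), profile.get("objectId"))
```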

 

Note: Here, objectId is the ID of the various networks participating in the network profiles.

2: List Specific Network Profile: This API call lists a specific network profile.
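Continuing the same hedged sketch (and reusing the base URL and auth headers from the previous example), fetching a single profile appends its objectId to the networks path; this URL pattern is again an assumption to confirm against the API reference.

```python
# Hedged sketch: fetch one network profile by its objectId (hypothetical URL pattern,
# reusing HCX and headers from the previous example).
profile_id = "network-profile-objectid"          # placeholder objectId
resp = requests.get(f"{HCX}/hybridity/api/networks/{profile_id}",
                    headers=headers, verify=False)
resp.raise_for_status()
print(resp.json())
```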