NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T supported by VCD (9.5), but the integration was very basic; many capabilities were missing, which kept customers from fully adopting NSX-T with VCD. NSX-T 2.5 added more functionality to the VCD integration, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is now much more tightly coupled with VCD, and customers can leverage almost all of its functionality with VCD. Read More

NSX-T Federation-Part 4: Configure Stretched Networking

Welcome to the fourth part of the NSX Federation series. In the last post, I talked about configuring the local and global NSX-T Managers to enable federation. In this post, I will show how we can leverage the federation to configure stretched networking across sites.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

3: Configure Federation

NSX-T Federation Topology

Before diving into the lab, I want to do a quick recap of the lab topology that I will be building in this post.

The following components in my lab are already built out:

1: Cross Link Router: This router is responsible for facilitating communication between Site-A & Site-B SDDC/NSX.

  • Site-A ToR01/02 form BGP neighborships with the Cross Link Router and advertise the necessary subnets to enable inter-site communication.
  • Site-B ToR01/02 also peer with the Cross Link Router over BGP and advertise their subnets.
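
For readers who prefer the API over the UI, here is a minimal Python sketch of creating a stretched segment through the Global Manager policy API. The GM address, credentials, Tier-1 gateway, and subnet below are placeholders, and the URL path and payload fields follow the NSX-T Policy API, so verify them against the API reference for your NSX-T 3.0 build.

```python
# Illustrative only: create a stretched segment via the Global Manager policy API.
# The GM hostname, credentials, Tier-1 name, and subnet are placeholders; confirm
# the /global-manager/api/v1/global-infra path against your NSX-T API reference.
import requests

GM = "https://gm.corp.local"          # Global Manager VIP/FQDN (placeholder)
AUTH = ("admin", "VMware1!VMware1!")  # use your own credentials
SEGMENT_ID = "stretched-web-seg"      # hypothetical segment name

payload = {
    "display_name": SEGMENT_ID,
    # Attaching the segment to a stretched Tier-1 makes it span the same
    # locations as that gateway; "stretched-t1" is a hypothetical gateway.
    "connectivity_path": "/global-infra/tier-1s/stretched-t1",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"{GM}/global-manager/api/v1/global-infra/segments/{SEGMENT_ID}",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificates
)
resp.raise_for_status()
print("Segment pushed to Global Manager:", resp.status_code)
```
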
Read More

NSX-T Federation-Part 3: Configure Federation

Welcome to the third post of the NSX Federation series. In part 1 of this series, I discussed the architecture of NSX-T Federation, and part 2 focused on my lab walkthrough.

In this post, I will show how to configure federation in NSX-T.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

Let’s get started.

Federation Prerequisites

Before attempting to deploy and configure federation, you have to ensure that the following prerequisites are in place:

  • There must be a latency of 150 ms or less between sites.
  • The Global Manager supports only Policy Mode; Federation does not support Manager Mode.
  • The Global Manager and all Local Managers must have NSX-T 3.0 installed.
  • NSX-T Edge clusters at each site must be configured with RTEP IPs.
  • Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use separate VLANs.
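
As a quick sanity check for the first prerequisite, the short Python sketch below times a few TCP connections from one site to the other site's NSX Manager and compares the average against the 150 ms limit. The hostname is a placeholder, and a TCP connect time is only a rough approximation of round-trip latency.

```python
# Rough latency sanity check for the 150 ms federation prerequisite.
# "nsx-lm-siteb.corp.local" is a placeholder for the remote site's NSX Manager;
# TCP connect time to port 443 only approximates round-trip latency.
import socket
import time

REMOTE = ("nsx-lm-siteb.corp.local", 443)
SAMPLES = 5

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(REMOTE, timeout=2):
        pass
    rtts.append((time.perf_counter() - start) * 1000)  # milliseconds

avg = sum(rtts) / len(rtts)
print(f"Average connect time: {avg:.1f} ms")
print("OK for federation" if avg <= 150 else "Too slow: federation requires <= 150 ms")
```
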
Read More

NSX-T Federation-Part 2: Lab Setup

In the first part of the NSX-T Federation series, I discussed the architecture and components of the federation, along with some use cases. In this post, I will explain my lab topology before diving into the NSX-T Federation configuration.

I am setting up a federation between two sites in my lab, and on both sites I have already deployed the following:

  • vSphere 7 (ESXi & vCenter).
  • vSAN for shared storage.
  • NSX-T 3.0 Manager.
  • NSX-T 3.0 Edges.

I have 4 ESXi hosts at each site, and each host has 4 physical NICs. All 4 NICs are connected to trunked ports on the ToR.

The networking architecture in my lab is a mix of VDS + N-VDS. 

Two NICs from each host carry the regular datacenter traffic (Mgmt, vSAN & vMotion).

The other two NICs are connected to the N-VDS, which carries overlay traffic only.

For edge networking, I am using a multi-TEP, single N-VDS architecture. Read More

NSX-T Federation-Part 1: Introduction & Architecture

NSX-T Federation is one of the new features introduced in NSX-T 3.0, and it allows you to manage multiple NSX-T Data Center environments from a single pane of glass. Federation lets you stretch NSX-T deployments across multiple sites and/or to the public cloud while keeping a single point of management. Federation can be compared to the Cross-vCenter feature of NSX-V, where universal objects span more than one site.

NSX-T Federation Components/Architecture/Topology

NSX-T Federation introduces the concept of a Global Manager (GM), which provides a single pane of glass for all connected NSX-T Managers. A Global Manager is an NSX-T Manager deployed with the role “Global”. Objects created from the Global Manager console are called global objects and are pushed to the connected local NSX-T Managers.

The below diagram shows the high-level architecture of the NSX-T Federation.

You can create networking constructs (T0/T1 gateways, segments, etc.) from the Global Manager that can be stretched across one or more locations. Read More

HCX Service Mesh Operations via API

Before deploying a Service Mesh, we need to create Compute & Network Profiles in both the source & destination HCX environments. The order of Service Mesh deployment is as follows:

  1. Create Network Profiles in source & destination HCX.
  2. Create Compute Profiles in source & destination HCX.
  3. Trigger Service Mesh deployment from on-prem (source) site.

In this post I will demonstrate HCX Service Mesh operations via API.

Some of the most common operations associated with a service mesh are:

  1. Create profiles (Network & Compute) and deploy service mesh.
  2. Update Network & Compute profiles to include/remove additional features.
  3. Delete Network & Compute profiles.
  4. Update Service Mesh to include/remove additional services.
  5. Delete Service Mesh. 

Let’s jump into the lab and look at these operations in action.

Network Profile API

1: List Network Profiles: This API call lists all the network profiles that you have created for use in the service mesh.

Note: Here, objectId is the ID of the various networks participating in the network profiles.
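
For reference, here is a minimal Python sketch of this call. Authentication against /hybridity/api/sessions returns an x-hm-authorization token in the response headers; the network-profile endpoint path and the response shape shown here are assumptions for illustration, so confirm them against the HCX API reference for your version.

```python
# Sketch of the "List Network Profiles" call using Python requests.
# The HCX Manager address and credentials are placeholders, and the
# network-profile endpoint/response shape is assumed for illustration.
import requests

HCX = "https://hcx-enterprise.corp.local"   # on-prem HCX Manager (placeholder)

# 1. Authenticate and grab the session token from the response headers.
session = requests.post(
    f"{HCX}/hybridity/api/sessions",
    json={"username": "administrator@vsphere.local", "password": "VMware1!"},
    verify=False,  # lab only: self-signed certificate
)
session.raise_for_status()
token = session.headers["x-hm-authorization"]

# 2. List network profiles (endpoint path assumed for illustration).
resp = requests.get(
    f"{HCX}/hybridity/api/networks",
    headers={"x-hm-authorization": token, "Accept": "application/json"},
    verify=False,
)
resp.raise_for_status()

for profile in resp.json().get("items", []):
    # objectId (see the note above) identifies the backing network.
    print(profile.get("name"), profile.get("objectId"))
```
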

2: List Specific Network Profile: This API call lists a specific network profile. Read More

HCX Activation Key Management for Hyperscalers

Disclaimer: This post is intended for MSPs & hyperscalers only. Also, the content below is based on my own learnings, and I encourage readers of this post to cross-verify things with VMware before executing/implementing anything.

HCX is one of the key components in the SDDC-as-a-Service offerings from hyperscalers (Google, Azure, CloudSimple, IBM, Oracle, Alibaba, etc.). HCX is consumed as a SaaS offering in VMware SDDCs running on top of hyperscaler clouds. Automated deployment and configuration of HCX (Cloud) is the hyperscaler's responsibility, and this process becomes a bit complex when it comes to life-cycle management of HCX.

One of the challenges with HCX is activation key management. An activation key can have many states, including:

1: Available: This is the state of a freshly generated activation key created by an MSP. Keys in the Available state can be used to activate HCX appliances (Cloud/Connector).

MSPs/hyperscalers can generate activation keys (via API) to activate the tenant HCX Cloud appliance. Read More

HCX Mobility Groups

VMware HCX is the ultimate choice when it comes to migrating workloads to a VMware SDDC running in the cloud or at a secondary site. The various migration techniques available with HCX make it easy to plan migrations for different kinds of workloads. Some workloads can tolerate a little downtime during migration, while critical business applications need to remain functional for the entire duration of the migration.

Current Challenges With Workload Migration

The most difficult part of any migration is planning and scheduling migration waves (which workloads should be migrated, and when). The diversity of workloads (legacy, cloud-native, microservices) has made datacenters more complex than ever. On top of that, the lack of clear and current documentation detailing the application landscape adds greater anxiety to every scheduled migration wave.

Architects spend a fair amount of time understanding application dependencies and correlations by conducting exhaustive interviews with application owners. Read More

VMware HCX Replication Assisted vMotion

Prior to the Replication Assisted vMotion (RAV) feature, VMware HCX offered three distinct migration methodologies:

  • HCX Bulk Migration
  • HCX Cross-Cloud vMotion
  • HCX Cold Migration

I have explained these methods in this blog post. The workings of these migration techniques are also documented here and here.

Before jumping into what Replication Assisted vMotion is, let’s discuss the pros and cons of the above-mentioned techniques.

  • Bulk Migration is resilient to network latency and allows multiple VMs to be migrated simultaneously, but the VMs do incur a small downtime during the final switchover.
  • HCX vMotion migration, on the other hand, keeps the application workloads live during the entire migration window, but it is sensitive to network latency and jitter. There is also a limit of 2 vMotion migrations per ESXi host at a time.

The RAV feature brings the best of both options, in the form of cloud vMotion with vSphere Replication.

What is HCX Replication Assisted vMotion?

RAV migration is a new type of migration offering from HCX. Read More

VMware HCX: Cloud to Cloud Migration

Those who have worked with HCX know how easy it is to perform workload migrations from an on-prem datacenter to an SDDC running in the cloud. The bi-directional migration feature has helped customers set up a true hybrid cloud infrastructure, as app mobility has never been so easy. To leverage HCX, your CSP will deploy HCX Cloud on top of the SDDC and provide you a URL + credentials, which you can feed into your on-prem HCX Connector appliance to get started with HCX.

In this post, I am going to demonstrate workload migration between two HCX Cloud instances. In a multi-cloud world, vendor lock-in is a thing of the past, and more and more customers are now using/evaluating services from more than one cloud provider.

HCX cloud-to-cloud migration helps put the brakes on CSPs who monopolize their service offerings. Customers can now easily evacuate a given cloud and seamlessly migrate to another without any hassle. Read More