NSX-T Federation is one of the new features introduced in NSX-T 3.0, and it allows you to manage multiple NSX-T Data Center environments from a single pane of glass. Federation lets you stretch NSX-T deployments across multiple sites and/or towards the public cloud while keeping that single point of management. Federation can be compared with the Cross-vCenter feature of NSX-V, where universal objects span more than one site.
NSX-T Federation Components/Architecture/Topology
NSX-T Federation introduces the concept of a Global Manager (GM), which provides a single pane of glass for all connected NSX-T Managers. A Global Manager is an NSX-T Manager deployed with the role “Global”. Objects created from the Global Manager console are called global objects and are pushed to the connected local NSX-T Managers.
The diagram below shows the high-level architecture of NSX-T Federation.
You can create networking constructs (T0/T1 gateways, segments, etc.) from the Global Manager that can be stretched across one or more locations.… Read More
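To make this concrete, here is a minimal sketch of creating such a global object programmatically. It assumes the Global Manager Policy API base path /global-manager/api/v1/global-infra (NSX-T 3.0) and an existing stretched Tier-1 gateway with ID T1-Stretched; the hostname, credentials, IDs, and subnet are hypothetical, so verify the exact payload against the NSX-T API guide for your version.

```python
# Hedged sketch: create a segment from the Global Manager; its span follows
# the stretched Tier-1 it attaches to, so it gets pushed to all locations
# spanned by that gateway. All names/IDs below are placeholders.
import requests

GM = "https://gm.corp.local"                 # Global Manager (placeholder)
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials
SEGMENT_ID = "Global-Web-Segment"            # hypothetical segment ID

payload = {
    "display_name": SEGMENT_ID,
    # Attaching to a stretched Tier-1 determines which locations receive the segment
    "connectivity_path": "/global-infra/tier-1s/T1-Stretched",
    "subnets": [{"gateway_address": "192.168.100.1/24"}],
}

resp = requests.patch(
    f"{GM}/global-manager/api/v1/global-infra/segments/{SEGMENT_ID}",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print("Global segment created/updated:", resp.status_code)
```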
Before deploying a Service Mesh, we need to create Compute & Network Profiles in both the source & destination HCX environments. The order of Service Mesh deployment is as follows:
Create Network Profiles in source & destination HCX.
Create Compute Profiles in source & destination HCX.
Trigger Service Mesh deployment from on-prem (source) site.
In this post I will demonstrate HCX Service Mesh operations via the API.
Some of the most common operations associated with a Service Mesh are:
Create profiles (Network & Compute) and deploy service mesh.
Update Network & Compute profiles to include/remove additional features.
Delete Network & Compute profiles.
Update Service Mesh to include/remove additional services.
Delete Service Mesh.
Let’s jump into the lab and look at these operations in action.
Network Profile API
1: List Network Profiles: This API call lists all the network profiles that you have created for use in a Service Mesh.
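As a rough illustration of that call, the sketch below authenticates against HCX Manager and then lists the network profiles. It assumes the session endpoint /hybridity/api/sessions (which returns an x-hm-authorization token) and the network-profile endpoint /hybridity/api/networks; the hostname, credentials, and response shape are placeholders/assumptions, so cross-check them against the HCX API reference for your version.

```python
# Hedged sketch: authenticate to HCX Manager and list network profiles.
# Endpoints and response structure are assumptions to verify against the
# HCX API reference; hostname and credentials are placeholders.
import requests

HCX = "https://hcx-enterprise.corp.local"

# 1. Authenticate: HCX returns a session token in the x-hm-authorization header
login = requests.post(
    f"{HCX}/hybridity/api/sessions",
    json={"username": "administrator@vsphere.local", "password": "VMware1!"},
    verify=False,  # lab only: self-signed certificate
)
login.raise_for_status()
token = login.headers["x-hm-authorization"]

# 2. List network profiles using the session token
resp = requests.get(
    f"{HCX}/hybridity/api/networks",
    headers={"x-hm-authorization": token, "Accept": "application/json"},
    verify=False,
)
resp.raise_for_status()
for profile in resp.json().get("items", []):   # "items" key is an assumption
    print(profile.get("name"))
```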
Disclaimer: This post is intended for MSPs & hyperscalers only. Also, the content below is based on my own learnings, and I encourage readers of this post to cross-verify things with VMware before executing/implementing anything.
HCX is one of the key components in the SDDC-as-a-Service offerings from hyperscalers (Google, Azure, CloudSimple, IBM, Oracle, Alibaba, etc.). HCX is consumed as a SaaS offering in VMware SDDCs running on top of hyperscaler clouds. Automated deployment and configuration of HCX (Cloud) is the hyperscaler’s responsibility, and this process becomes a bit complex when it comes to life-cycle management of HCX.
One of the challenges with HCX is activation key management. An activation key can have many states, including:
1: Available: This is the state of a freshly generated activation key created by an MSP. Keys that are in the Available state can be used to activate HCX appliances (Cloud/Connector).
MSPs/hyperscalers can generate activation keys (via the API) to activate a tenant’s HCX Cloud appliance.… Read More
VMware HCX is the ultimate choice when it comes to migrating workloads to a VMware SDDC running in the cloud or at a secondary site. The various migration techniques available with HCX make life easy when planning migrations for different kinds of workloads. Some workloads can tolerate a little downtime during migration; on the other hand, there are critical business applications which need to be functional during the entire duration of the migration.
Current Challenges With Workload Migration
The most difficult part of any migration technology is planning and scheduling migration waves (which workloads should be migrated, and when). The diversity of workloads (legacy, cloud native, microservices) has made datacenters more complex than ever. On top of that, the lack of clear and current documentation detailing the application landscape adds greater anxiety to every scheduled migration wave.
Architects spend a fair amount of time understanding application dependencies and correlations by conducting exhaustive interviews with application owners.… Read More
Prior to the Replication Assisted vMotion (RAV) feature, VMware HCX offered three distinct migration methodologies:
HCX Bulk Migration
HCX Cross-Cloud vMotion
HCX Cold Migration
I have explained these methods in this blog post, and how these migration techniques work is documented here and here.
Before jumping into what Replication Assisted vMotion is, let’s discuss the pros and cons of the above-mentioned techniques.
Bulk migration is resilient to network latency and allows multiple VMs to be migrated simultaneously, but VMs do incur a small downtime during the final switchover.
HCX vMotion migration, on the other hand, keeps application workloads live during the entire migration window, but it is sensitive to network latency and jitter. Also, there is a limitation of 2 vMotion migrations per ESXi host at a time.
The RAV feature brings the best of both of these options in the form of cloud vMotion combined with vSphere Replication.
What is HCX Replication Assisted vMotion?
RAV migration is a new type of migration offering from HCX.… Read More
Those who have worked with HCX know how easy it is to perform workload migrations from an on-prem datacenter to an SDDC running in the cloud. The bi-directional migration feature has helped customers set up a true hybrid cloud infrastructure, as app mobility has never been so easy. To leverage HCX, your CSP deploys HCX Cloud on top of the SDDC and provides you with a URL + credentials, which you feed into your on-prem HCX Connector appliance to get started with HCX.
In this post I am going to demonstrate workload migration between two HCX Cloud instances. In a multi-cloud world, vendor lock-in is a thing of the past, and more and more customers are now using/evaluating the services of more than one cloud provider.
HCX cloud-to-cloud migration helps put the brakes on CSPs who monopolize their service offerings. Customers can now easily evacuate a given cloud and seamlessly migrate to another cloud without any hassle.… Read More
VMware Hybrid Cloud Extension (HCX) delivers secure and seamless app mobility and infrastructure hybridity across vSphere 5.0+ versions, on-premises and in the cloud. HCX can be considered a migration tool that abstracts both on-premises and cloud resources and presents them as a single set of resources that applications can consume. Using HCX, VMs can be migrated bi-directionally from one location to another with (almost) total disregard for vSphere versions, virtual networks, connectivity, etc.
One of the coolest features of HCX is the layer-2 extension feature, and that is what I will be talking about in this post.
HCX Network Extension
HCX’s Network Extension service provides secure layer 2 extension capability (VLAN, VXLAN, and Geneve) for vSphere or 3rd-party distributed switches, allowing virtual machines to retain their IP/MAC addresses during migration. This capability is delivered through the HCX Network Extension appliance, which is deployed during Service Mesh creation. … Read More
In the last post I covered the steps for configuring VRF gateways and attaching a Tier-1 gateway to a VRF. In this post I am going to test my configuration to ensure things are working as expected.
The following configuration was done in vSphere prior to VRF validation:
A Tenant A VM is deployed, connected to segment ‘Tenant-A-App-LS’, and has IP 172.16.70.2
A Tenant B VM is deployed, connected to segment ‘Tenant-B-App-LS’, and has IP 172.16.80.2
Connectivity Test
To test connectivity, I first picked the Tenant-A VM and performed the following tests:
A: Pinged the default gateway and got a ping response.
B: Pinged the default gateway of the Tenant-B segment and got a response.
C: Pinged the Tenant-B VM and got a response.
D: Pinged a server on the physical network and got a ping response.
I performed the same set of tests for the Tenant-B VM, and all tests passed. A small script to repeat these checks is sketched below.
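This is only a convenience sketch for repeating the checks: it uses the VM IPs from this lab, assumes the segment gateways are the .1 address of each subnet, and uses Linux/macOS ping syntax.

```python
# Convenience sketch: repeat the ping tests from this post in one pass.
# Gateway addresses (.1 of each segment) are assumptions based on this lab.
import subprocess

TARGETS = {
    "Tenant-A default gateway": "172.16.70.1",  # assumed gateway address
    "Tenant-B default gateway": "172.16.80.1",  # assumed gateway address
    "Tenant-A VM": "172.16.70.2",
    "Tenant-B VM": "172.16.80.2",
}

for name, ip in TARGETS.items():
    # -c 3: send three echo requests (Linux/macOS ping syntax)
    result = subprocess.run(["ping", "-c", "3", ip], capture_output=True)
    status = "PASS" if result.returncode == 0 else "FAIL"
    print(f"{status}  {name} ({ip})")
```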
Traceflow
Traceflow is another way of testing connectivity between VMs. Below are my Traceflow results for the two VMs:
Here is the topology diagram created by NSX-T showing the path taken by a packet from the Tenant-A-App01 VM to the Tenant-B-App01 VM.… Read More
NSX-T provides true multi-tenancy capabilities to an SDDC/cloud infrastructure, and there are various ways of achieving it depending on the use case. In the simplest deployment architecture, multi-tenancy is achieved by connecting multiple Tier-1 gateways to a Tier-0 gateway, where each T1 gateway can belong to a dedicated tenant. Another way of implementing this is to have multiple T0 gateways, where each tenant has their own dedicated T0.
Things have changed with NSX-T 3.0. One of the new features introduced in NSX-T 3.0 is the VRF (virtual routing and forwarding) gateway, aka VRF Lite.
VRF Lite allows us to virtualize the routing table on a T0 and provide tenant separation from a routing perspective. With VRF Lite we can configure per-tenant data-plane isolation all the way up to the physical network without creating a Tier-0 gateway per tenant.
VRF Architecture
At a high level, the VRF architecture can be described as below:
We have a parent Tier-0 gateway to which multiple VRF gateways connect.… Read More
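To make the parent/VRF relationship concrete, here is a minimal sketch of how a VRF gateway is modelled in the NSX-T 3.0 Policy API: a VRF is a Tier-0 whose vrf_config points at the parent Tier-0. The hostname, credentials, and IDs are placeholders, so verify the payload against the NSX-T API guide for your version.

```python
# Hedged sketch: create a VRF gateway (a Tier-0 with vrf_config) linked to an
# existing parent Tier-0 via the NSX-T Policy API. IDs are placeholders.
import requests

NSX = "https://nsx-mgr.corp.local"       # NSX Manager (placeholder)
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

VRF_ID = "Tenant-A-VRF"                  # hypothetical VRF gateway ID
PARENT_T0_PATH = "/infra/tier-0s/T0-GW"  # path of the existing parent Tier-0

payload = {
    "display_name": VRF_ID,
    "vrf_config": {
        "tier0_path": PARENT_T0_PATH,    # links this VRF to its parent Tier-0
    },
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/{VRF_ID}",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print("VRF gateway created/updated:", resp.status_code)
```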
Recently, while trying to set up VRF Lite in NSX-T 3.0 in my lab, I came across a bug that prevented me from turning on BGP on a VRF gateway via the UI. This bug affects NSX-T versions 3.0.1 and 3.0.1.1. The error I was getting when trying to enable BGP was:
“Only ECMP, enabled, BGP Aggregate to be configured for VRF BGP Config”
After researching for a bit, I figured out that there is currently no resolution for this issue, and the API is the only method by which BGP can be turned on for a VRF gateway. Below are the steps for doing so.
To enable BGP, we need to take the complete API output from step-3, change “enabled”: false to “enabled”: true, and pass this output as the payload of a PATCH call.… Read More
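A rough sketch of that GET/modify/PATCH cycle is shown below. It assumes the VRF gateway’s BGP config lives at /policy/api/v1/infra/tier-0s/&lt;vrf-gateway-id&gt;/locale-services/&lt;locale-service-id&gt;/bgp and that the locale service is named “default”; the hostname, credentials, and IDs are placeholders, so adjust them (and the referenced step-3 output) to your environment.

```python
# Hedged sketch of the workaround: fetch the VRF gateway's BGP config,
# flip "enabled" to true, and PATCH the whole object back.
# The VRF gateway ID and locale-service ID ("default") are assumptions.
import requests

NSX = "https://nsx-mgr.corp.local"
AUTH = ("admin", "VMware1!VMware1!")
BGP_URL = (
    f"{NSX}/policy/api/v1/infra/tier-0s/Tenant-A-VRF"
    "/locale-services/default/bgp"
)

# GET the current BGP config of the VRF gateway (the "step-3" output)
current = requests.get(BGP_URL, auth=AUTH, verify=False)
current.raise_for_status()
config = current.json()

# Change "enabled": false to "enabled": true and send the object back
config["enabled"] = True
resp = requests.patch(BGP_URL, json=config, auth=AUTH, verify=False)
resp.raise_for_status()
print("BGP enabled on VRF gateway:", resp.status_code)
```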