HCX Network Extension

VMware Hybrid Cloud Extension (HCX) delivers secure and seamless application mobility and infrastructure hybridity across vSphere 5.0+ versions, on-premises and in the cloud. HCX can be thought of as a migration tool that abstracts both on-premises and cloud resources and presents them as a single set of resources for applications to consume. Using HCX, VMs can be migrated bidirectionally from one location to another with (almost) total disregard for vSphere versions, virtual networks, connectivity, and so on.

One of the coolest features of HCX is Layer 2 network extension, and that is what this post is about.

HCX Network Extension

HCX’s Network Extension service provides secure Layer 2 extension (VLAN, VXLAN, and Geneve) for vSphere or third-party distributed switches, allowing virtual machines to retain their IP and MAC addresses during migration. The capability is delivered by the HCX Network Extension (NE) appliance, which is deployed during Service Mesh creation.

VMs created on the extended segment at the remote site have Layer 2 adjacency with VMs on the origin network. The default gateway for an extended network remains at the origin site, so when VMs connected to the extended network are migrated to the target site, all of their routed traffic hairpins back through the gateway in the source datacenter.

Note: While NSX is not required for L2 extension of on-premises networks, NSX overlay networks can be extended in addition to conventional VLAN-backed portgroups if the on-premises environment has an NSX (V or T) footprint.

Can I just extend any network using HCX?

The answer is no. The HCX L2 extension has certain limitations:

  • vSphere infrastructure VMkernel portgroups cannot be extended. HCX should only be used to extend virtual machine networks.
  • HCX Network Profile networks cannot be extended.
  • Trunk networks cannot be extended.
  • Untagged vSphere networks cannot be extended.
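The restrictions above can be captured in a small eligibility check. The sketch below is purely illustrative — the `PortGroup` fields are hypothetical, not an HCX or vSphere API — and simply encodes the four rules from the list:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PortGroup:
    """Illustrative portgroup model (not a real HCX/vSphere object)."""
    name: str
    vlan_id: Optional[int] = None   # None = untagged network
    is_trunk: bool = False
    is_vmkernel: bool = False
    in_network_profile: bool = False  # used in an HCX Network Profile

def extendable(pg: PortGroup) -> bool:
    """Return True only if none of the documented restrictions apply."""
    if pg.is_vmkernel:            # VMkernel/infrastructure portgroups
        return False
    if pg.in_network_profile:     # HCX Network Profile networks
        return False
    if pg.is_trunk:               # trunk networks
        return False
    if pg.vlan_id is None:        # untagged vSphere networks
        return False
    return True
```

For example, `extendable(PortGroup("Test-NW", vlan_id=11))` passes, while a VMkernel or untagged portgroup does not.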

HCX Network Extension Architecture

The diagram below shows a high-level architecture view of the Network Extension.

Graphic Thanks to hcx.design (Gabe Rosas)

HCX Network Extension VMs are deployed as a pair, one at the source site and one at the destination site. They form an IPsec tunnel that transports the extended Layer 2 traffic between the sites. When you stretch a source network, a VXLAN (if NSX-V) or overlay (if NSX-T) network is automatically created at the target and mapped to the source network.

An extended network is always serviced by a single HCX NE VM; there is no load balancing of one extended network across two separate HCX NE VMs at a site. HCX NE does support scale-out, however, and multiple NE VMs can be deployed in a service mesh.

  • For every VLAN-backed source network that is stretched, the HCX NE VM adds one vNIC connected to that VLAN's distributed portgroup.
  • For every VXLAN/Geneve-backed source network that is stretched, HCX aggregates all overlay network stretches assigned to that HCX NE VM into a single trunk vNIC.
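This vNIC accounting can be sketched with a small helper (illustrative only, not an HCX API): each stretched VLAN network consumes one vNIC on the NE appliance, while all overlay stretches assigned to that appliance share one trunk vNIC.

```python
def ne_vnics_consumed(vlan_stretches: int, overlay_stretches: int) -> int:
    """Rough count of data vNICs an NE appliance uses for its stretches.

    vlan_stretches:    number of VLAN-backed networks stretched via this NE VM
    overlay_stretches: number of VXLAN/Geneve networks stretched via this NE VM
    """
    vnics = vlan_stretches            # one vNIC per VLAN portgroup
    if overlay_stretches > 0:
        vnics += 1                    # all overlay stretches share one trunk vNIC
    return vnics
```

So three VLAN stretches consume three vNICs, while two VLAN stretches plus five overlay stretches consume only three (2 + 1 trunk).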

Let’s dive into the lab and test this awesome feature.

My Lab Setup: I have deployed 2 nested SDDCs in my lab. One of the SDDCs is acting as an on-premises infrastructure and the other as a cloud site.

On-Prem SDDC

I have deployed the HCX Connector in my on-prem SDDC and HCX Cloud in the other SDDC. Site pairing between the two HCX instances is also done.

 

I have created a service mesh as well, and it has deployed the IX and NE appliances. The tunnel status for both is reporting up.

I created a network named “Test-NW” backed by the subnet 172.16.11.0/24 and attached a test VM to it. This VM has the IP address 172.16.11.2.

Cloud Site SDDC

I have verified that IX and NE appliances are deployed on the cloud side as well, and the tunnel status for both appliances is up.

Note: Until HCX R143, it is not possible to create a service mesh if you have only a CVDS-based SDDC; your cloud SDDC needs to have N-VDS configured.

In my lab, I have 4 NICs per host: 2 NICs are attached to a regular VDS, and 2 are attached to the N-VDS, through which all overlay traffic flows.

Testing Scenario

Test-NW is a regular VLAN-backed dvPortgroup, which I will be extending to the cloud site. Once the network is extended, I will perform a vMotion to Cloud on Test-VM and run connectivity checks during the migration. My goal is to demonstrate that there is no lasting connectivity loss when the VM permanently moves to the cloud site.

To perform the network extension, log in to the on-prem vSphere Client, click the HCX plugin from the main menu to launch the HCX UI, and navigate to the Network Extension page.

Click on the Create a Network Extension button.

Select the network that you want to extend and click Next.

 

Enter the gateway address for the network that you are extending. Make sure to use the same IP address that is configured as the gateway for that network.
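A quick way to sanity-check the value before submitting is to verify that the gateway interface actually covers the network being extended. This is a minimal sketch using Python's standard `ipaddress` module; the gateway 172.16.11.1/24 is an assumed value for the lab's Test-NW subnet (the post states the subnet and VM IP, not the gateway address itself):

```python
import ipaddress

def validate_gateway(gateway_cidr: str, vm_ip: str) -> bool:
    """Confirm the gateway interface (e.g. '172.16.11.1/24') covers the VM's IP."""
    iface = ipaddress.ip_interface(gateway_cidr)
    return ipaddress.ip_address(vm_ip) in iface.network

# Assumed lab values: gateway 172.16.11.1/24, test VM at 172.16.11.2
validate_gateway("172.16.11.1/24", "172.16.11.2")
```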

Also, select the T1 router at the cloud site to which the extended network will connect, and click Submit to start the network stretch process.

 

The following tasks are performed in the backend while the network stretch is in progress.

After 2-3 minutes, the network stretch task completes, and the status reports Extension complete.

You can find info about the extended network from the same screen.

The network that you have extended is connected as a vNIC on the HCX NE appliance.

The same operation is performed on the cloud-side HCX NE as well.

Also, on the cloud side, you will notice that the extended network is created as a logical segment and attached to the T1 gateway selected during the network stretch task.

Now, if you perform a vMotion-based migration of a VM connected to the extended network, you will see the target network mapping already done for you in the migration wizard.

As the migration nears completion, you might see a few pings drop and a brief rise in latency.
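If you want to quantify that blip, a small helper like the one below can summarize per-ping results captured during the cutover. This is a hypothetical sketch (not part of HCX); `None` represents a lost ping, and the RTT values are whatever your ping tool reported:

```python
from typing import List, Optional

def summarize_pings(rtts_ms: List[Optional[float]]) -> dict:
    """Summarize a list of per-ping RTTs (ms); None means the ping was lost."""
    received = [r for r in rtts_ms if r is not None]
    lost = len(rtts_ms) - len(received)
    return {
        "sent": len(rtts_ms),
        "lost": lost,
        "loss_pct": round(100.0 * lost / len(rtts_ms), 1) if rtts_ms else 0.0,
        "max_rtt_ms": max(received) if received else None,
    }
```

Feeding it, say, `[1.0, 1.2, None, 45.0, 1.1]` reports one lost ping (20% loss) with a 45 ms latency spike — the kind of momentary blip you can expect at cutover.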

The source VM is unregistered from the on-prem SDDC and starts running in the cloud SDDC.

And that’s it for this post. 

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
