Learning NSX-T-Part 3: NSX Manager Deployment

In the last post of this series, we discussed the NSX-T architecture. In this post, we will deploy the NSX-T components in the lab.

Let's start by deploying the NSX Manager first to form the management plane. NSX Manager is deployed via an OVA file, which can be downloaded from the VMware website.

The current version of NSX-T is 2.2.0, and it can be downloaded from here.

Please refer to the NSX-T 2.2 Installation Guide before going ahead with the deployment.

NSX-T 2.2.0 supports the following hypervisor versions:

  • vSphere 6.5/6.5 U1/6.5 U2
  • RHEL KVM 7.3
  • Ubuntu KVM 16.04 

NSX Manager deployment is pretty straightforward, like any standard virtual appliance deployment. The steps are shown in the screenshots below.

For more information on NSX Manager installation, please see this article.
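If you'd rather script the deployment than click through the vSphere wizard, ovftool can push the OVA along with its configuration properties. Below is a minimal sketch; the OVA filename, host names, IPs, passwords, and the --prop names are lab placeholders and assumptions, so list the properties your build actually exposes first (running ovftool against the OVA prints them) and cross-check with the installation guide.

```
# Minimal sketch: deploy the NSX Manager OVA from the command line with ovftool.
# All names, IPs, and the --prop keys below are lab placeholders/assumptions;
# run `ovftool nsx-unified-appliance-2.2.0.ova` first to see the real properties.
ovftool \
  --name=nsx-manager-01 \
  --datastore=datastore1 \
  --network="VM Network" \
  --diskMode=thin \
  --acceptAllEulas \
  --noSSLVerify \
  --powerOn \
  --prop:nsx_hostname=nsx-manager-01.lab.local \
  --prop:nsx_ip_0=192.168.1.10 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=192.168.1.1 \
  nsx-unified-appliance-2.2.0.ova \
  'vi://administrator%40vsphere.local@vcenter.lab.local/Datacenter/host/Compute-Cluster'
```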

Once the NSX Manager boots up, verify that the IP address set during deployment was applied as expected.

nsxt-12.PNG
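You can also verify this from the NSX Manager console or over SSH using the built-in NSX-T CLI; a quick sketch (the appliance FQDN is a lab placeholder):

```
# Log in to the NSX Manager as admin (console or SSH) and check eth0
ssh admin@nsx-manager-01.lab.local
get interface eth0    # NSX-T CLI: shows the IP, netmask, and MAC of the management interface
```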

You can also try pinging the NSX Manager from the vCenter Server and the ESXi hosts to verify its connectivity.
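A few quick checks from the shell, assuming the placeholder management IP used earlier:

```
# From an ESXi host (SSH), ping the NSX Manager over the management vmkernel interface
vmkping 192.168.1.10

# From the vCenter Server Appliance shell
ping -c 4 192.168.1.10

# Optionally, confirm the manager's REST API responds (prompts for the admin password;
# -k skips certificate validation in the lab)
curl -k -u admin https://192.168.1.10/api/v1/node
```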

Learning NSX-T-Part 2: NSX-T Architecture

As we discussed in the first post of this series, NSX-T was born to meet the demands of containerized workloads, multi-hypervisor environments, and multi-cloud.

The best way to think of NSX-T is that it provides seamless connectivity and security services for all types of endpoints, including virtual machines, containers, and bare metal. It doesn't really matter where these endpoints are: they could be in your on-prem datacenter, in a remote office, or in the cloud.

In this post, we will look at what the NSX-T architecture looks like.

Like NSX-V, NSX-T consists of a management plane, a control plane, and a data plane. Let's discuss each plane individually.

Data Plane

  • NSX-T uses in-kernel modules on the ESXi and KVM hypervisors to construct the data plane.
  • Since NSX-T is decoupled from vSphere, it doesn't rely on the vSphere vSwitch for network connectivity. Instead, the NSX-T data plane introduces its own host switch, the N-VDS (NSX Managed Virtual Distributed Switch); a quick way to spot it on a configured system is sketched below.
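For illustration, on a system where transport zones have already been created, the N-VDS name shows up in the transport zone definitions. A minimal sketch against the NSX-T REST API, with the manager FQDN as a lab placeholder:

```
# List transport zones; the host_switch_name field in each entry is the N-VDS
# that NSX-T creates on the transport nodes attached to that zone.
# (Prompts for the admin password; -k skips certificate validation in the lab.)
curl -k -u admin https://nsx-manager-01.lab.local/api/v1/transport-zones
```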

Learning NSX-T-Part 1: Introduction

VMware NSX is one of the most sensational products VMware has produced. It was born five years ago out of the Nicira acquisition, and over the years it has only gotten better. NSX revolutionized the SDDC by adding SDN capabilities and changed how SDN was consumed.

One of the major limitations of NSX-V was that it could be used only with vSphere and not with other platforms, and customers were continuously demanding a version of NSX that could be integrated with non-vSphere platforms.

To overcome this challenge, VMware came up with NSX-T, a version of NSX that supports both vSphere and non-vSphere infrastructure. It can be integrated with other hypervisors, such as KVM, and with application frameworks such as Red Hat OpenShift, Docker/containers, and Pivotal.

As we know, in NSX-V, vCenter was the centralized management plane, but NSX-T has its own management interface. As of now, NSX-T doesn't offer the same full feature set as NSX-V, but VMware is continuously making enhancements to make the product more robust.

Locating HCX System ID

An HCX System ID is needed when you are working with the VMware support team regarding any HCX issues. 

The HCX system ID can be found via the CLI as well as the GUI. We will discuss both methods in this post.

CLI Method (this method can only retrieve the on-prem HCX system ID)

Connect to the on-prem HCX Enterprise (ENT) appliance via console or SSH using the admin credentials and run the command: cat /common/location

hcx-sysid0.PNG
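For reference, the whole check fits in two commands (the appliance FQDN is a lab placeholder):

```
# SSH to the on-prem HCX Enterprise appliance with the admin credentials
ssh admin@hcx-ent.lab.local

# Print the HCX system ID
cat /common/location
```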

GUI Method

Log in to the vSphere Web Client, click the HCX plugin, and navigate to Administration > System Updates.

Under the Info column, click on the ‘i’ icon, and it will show you the system ID, which you can copy to your clipboard. 

hcx-sysid.PNG

Do the same to obtain the remote HCX system ID.

Learning HCX-Part 11: Testing DR With HCX

In the last post of this series, we performed a reverse migration and brought a VM back on-prem from the cloud. Now that we have tested all the migration methods and have a basic understanding of how they work, let's move forward and explore the disaster recovery capabilities provided by HCX.

If you have not been following along with this series, I recommend reading the earlier posts via the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

9: Testing HCX Bulk Migration

10: HCX Reverse Migration

About HCX Disaster Recovery

HCX Disaster Recovery is a service intended to protect virtual workloads managed by VMware vSphere that are deployed in either a private or a public cloud.

HCX DR offers the following benefits:

  • A simple, easy-to-use management platform that allows secure (enterprise-to-cloud and cloud-to-cloud) asynchronous replication and recovery of virtual machines.

Learning HCX-Part 10: HCX Reverse Migration

In the last post of this series, we learned about the Bulk Migration feature of HCX. In this post, we will learn about HCX Reverse Migration.

Reverse migration gives you the ability to migrate VMs back from your cloud infrastructure to your on-premises environment using the HCX migration methods (no-downtime, cold, or bulk migration).

If you have not been following along with this series, I recommend reading the earlier posts via the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

9: Testing HCX Bulk Migration

Reverse migration use cases

There are two or three use cases for reverse migration that I can think of as of now.

1: You have moved a VM to the cloud and later found that it is not suitable to run in the cloud environment, and you are facing serious performance issues.

Learning HCX-Part 9: Testing HCX Bulk Migration

In the last post, we tested the Cross-Cloud vMotion feature of HCX. In this post, we will test the Bulk Migration feature.

If you have not been following along with this series, I recommend reading the earlier posts via the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

To trigger the bulk migration feature, click the Migrate Virtual Machines option.

bulk-migration1

Fill in the default values for container selection and so on, and set the migration type to Bulk Migration.

bulk-migration2

Click Schedule Failover to specify the switchover time.

Select the start and end time.

bulk-migration3

Select the VMs that will be migrated to the cloud. You can change the migration type for individual VMs; for example, you can choose a few VMs for bulk migration and a few for vMotion or cold migration.

Learning HCX-Part 8: Testing HCX Cross Cloud Migration

In the last post of this series, we discussed the various migration methods that are available with HCX. In this post, we will test the cross-cloud migration method in the lab.

If you have not been following along with this series, I recommend reading the earlier posts via the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

To configure cross-cloud migration for VMs, log in to the vSphere Web Client, click the HCX plugin on the home page, and go to the Migration tab. Click Migrate Virtual Machines.

hcx-migration1

The default selections made here are automatically applied to all VMs selected for migration, for example, which compute cluster, datastore, and folder the migrated VMs will land in. You can override these at the per-VM level as well.

There are various other options available as well, such as Upgrade Virtual Hardware and Upgrade VMware Tools.

Learning HCX-Part 7: HCX Migration Methods

In the last post of this series, we deployed the Cloud Gateway and the Layer 2 Concentrator virtual appliances. Next, we will explore the various migration methods for moving workloads from the on-prem datacenter to the cloud.

If you have not been following along with this series, I recommend reading the earlier posts via the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

HCX enables bidirectional virtual machine mobility: virtual machines can be migrated to and from an HCX-enabled target site. Migration is supported for both live (powered-on) and cold (powered-off) virtual machines. The following migration methods are supported by HCX:

HCX No-Downtime aka Cross-Cloud vMotion

With the HCX no-downtime method, running VMs are migrated from the on-premises datacenter to the cloud with absolutely no downtime. This migration is very similar to a native vSphere vMotion.

If you are like me, you may be thinking that vMotion is performed between two hosts in the same cluster or across clusters; but with HCX we have on-prem and cloud sites, and there is no direct connection between the on-prem ESXi hosts and the ESXi hosts running in the cloud, so how are we able to vMotion to the cloud?

Cleanup HCX Deployment

In this post, we will learn how to clean up an HCX deployment the right way.

Below are the high-level steps for HCX cleanup.

1: Unstretch the Layer 2 network: Unstretching a Layer 2 network is necessary before removing the associated L2C appliance. The steps are given below:

  • Log in to the vSphere Web Client, click the HCX plugin, and navigate to the Interconnect > Extended Networks tab.
  • Select the network you want to unstretch, click the X button, and hit OK to confirm.

2: Delete the L2C virtual appliance: To delete the L2C appliance, go to the Interconnect > HCX Components > Network Extension Service tab, select the appliance, and click Remove.

l2c-remove.PNG

Click Yes to confirm the deletion of the appliance.

l2c-remove2.PNG

You will see a message that appliance removal has started. Be patient; it takes 2-3 minutes to remove the appliance from both the on-prem and cloud sides.

l2c-remove3.PNG

3: Delete the Cloud Gateway appliance: From the HCX Components tab, select the CGW appliance and click Remove.