Deploying NSX-T Based Workload Domain in VMware Cloud Foundation

In this post I will walk through the steps of deploying a VI workload domain based on NSX-T. Note: we can only deploy a VI workload domain with NSX-T; as of now, the management workload domain is NSX-V based only.

Before kicking off an NSX-T based VI workload domain deployment, please ensure you have met the following prerequisites:

1: The NSX-T license has been added to SDDC Manager under Administration > Licensing.

2: The NSX-T install bundle has been downloaded from the repository. Below is a screenshot of a downloaded bundle.

[Screenshot: downloaded NSX-T install bundle]

3: A network pool has been created for the workload domain. This pool supplies the IP addresses for the vMotion and vSAN networks (see the API sketch after this list).

[Screenshot: network pool wld-nsxt01]

4: The ESXi hosts that will take part in the workload domain have been configured and commissioned. If you are new to VCF and don't know the host commissioning process, please refer to my earlier post for the steps.
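If you prefer to check these prerequisites programmatically, the SDDC Manager public API exposes the bundles and network pools as well. Below is a minimal Python sketch of that idea, assuming SDDC Manager is reachable at sddc-manager.lab.local and an API token has already been obtained; the hostname, VLAN IDs, subnets, and exact payload field names are placeholders from my lab, so verify them against the API reference for your VCF release before using anything like this.

import requests

SDDC_MANAGER = "https://sddc-manager.lab.local"           # placeholder hostname
HEADERS = {
    "Authorization": "Bearer <access-token>",             # token obtained separately (e.g. via the tokens API)
    "Content-Type": "application/json",
}

# 1) List bundles to confirm the NSX-T install bundle shows as downloaded.
#    Field names ("description", "downloadStatus") may differ between VCF versions.
bundles = requests.get(f"{SDDC_MANAGER}/v1/bundles", headers=HEADERS, verify=False).json()
for bundle in bundles.get("elements", []):
    print(bundle.get("description"), "->", bundle.get("downloadStatus"))

# 2) Create the workload-domain network pool carrying the vMotion and vSAN ranges.
pool = {
    "name": "wld-nsxt01",
    "networks": [
        {
            "type": "VMOTION", "vlanId": 100, "mtu": 9000,
            "subnet": "172.16.100.0", "mask": "255.255.255.0", "gateway": "172.16.100.1",
            "ipPools": [{"start": "172.16.100.10", "end": "172.16.100.50"}],
        },
        {
            "type": "VSAN", "vlanId": 101, "mtu": 9000,
            "subnet": "172.16.101.0", "mask": "255.255.255.0", "gateway": "172.16.101.1",
            "ipPools": [{"start": "172.16.101.10", "end": "172.16.101.50"}],
        },
    ],
}
resp = requests.post(f"{SDDC_MANAGER}/v1/network-pools", json=pool, headers=HEADERS, verify=False)
print("Network pool creation returned HTTP", resp.status_code)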

Once we have met the above prerequisites, we are all set to kick off the new VI workload domain deployment. … Read More

vCloud Availability for vCloud Director: Part 7-Deploy vSphere Replication Manager

In the last post of this series, we deployed RabbitMQ and integrated it with vCD.

In this post we will deploy and configure vSphere Replication Manager, aka VRMS. But before we go ahead and kick off the VRMS deployment, let's briefly discuss the role of VRMS in the VCAV stack.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: vCloud Availability Introduction

2: vCloud Availability Architecture & Components

3: VCAV Deployment

4: Install Cloud Proxy for vCD

5: Deploy Cassandra Cluster

6: RabbitMQ Cluster Deployment and vCD Integration

vSphere Replication Manager manages and monitors the replication process from tenant VMs to the cloud provider environment. A vSphere Replication management service runs for each vCenter Server and tracks changes to VMs and infrastructure related to replication.

When deployed, VRMS is integrated with the resource vCenter Server, which in turn is registered with vCloud Director and made available to tenants. … Read More

Locating HCX System ID

An HCX System ID is needed when you are working with the VMware support team regarding any HCX issues. 

The HCX system ID can be found via the CLI as well as the GUI. We will discuss both methods in this post.

CLI Method (this method can only retrieve the on-prem HCX system ID)

Connect to the on-prem HCX ENT appliance via console or SSH using admin credentials and run the command: cat /common/location

[Screenshot: HCX system ID returned by the CLI command]
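If you have to collect this ID from several environments, the same cat /common/location lookup can be scripted over SSH. Here is a quick sketch using the paramiko library; the appliance hostname and credentials are placeholders, and it assumes the admin login drops you into a shell where the command can run directly, as described above.

import paramiko

HCX_HOST = "hcx-ent.lab.local"      # placeholder: on-prem HCX Enterprise appliance
USERNAME = "admin"
PASSWORD = "<admin-password>"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab shortcut; validate host keys in production
client.connect(HCX_HOST, username=USERNAME, password=PASSWORD)

# Run the same command shown above and capture its output
_, stdout, _ = client.exec_command("cat /common/location")
system_id = stdout.read().decode().strip()
print(f"HCX system ID for {HCX_HOST}: {system_id}")

client.close()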

GUI Method

Log in to the vSphere Web Client, click on the HCX plugin, and navigate to Administration > System Updates.

Under the Info column, click on the ‘i’ icon, and it will show you the system ID, which you can copy to your clipboard. 

[Screenshot: HCX system ID shown in the GUI]

Do the same to obtain the remote HCX system ID. … Read More

Learning HCX-Part 11: Testing DR With HCX

In the last post of this series, we performed a reverse migration and brought a VM back on-prem from the cloud. Now that we have tested all the migration methods and have a basic understanding of how they work, let's move forward and explore the Disaster Recovery capabilities provided by HCX.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

9: Testing HCX Bulk Migration

10: HCX Reverse Migration

About HCX Disaster Recovery

HCX Disaster Recovery is a service intended to protect virtual workloads managed by VMware vSphere, whether they are deployed in a private or a public cloud.

HCX DR offers the following benefits:

  • A simple and easy-to-use management platform that allows secure (enterprise-to-cloud and cloud-to-cloud) asynchronous replication and recovery of virtual machines.
Read More

Learning HCX-Part 10: HCX Reverse Migration

In the last post of this series, we learned about the Bulk Migration feature of HCX. In this post we will learn about HCX Reverse Migration.

Reverse migration provides you the ability to migrate VMs back from your cloud infrastructure to your on-premises environment using the HCX migration methods (No downtime/Cold migration/Bulk migration).

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

9: Testing HCX Bulk Migration

Reverse migration use cases

There are two or three use cases for reverse migration that I can think of as of now.

1: You have transferred a VM to the cloud and later found that the VM is not suitable to run in the cloud environment and you are facing serious performance issues. … Read More

Learning HCX-Part 9: Testing HCX Bulk Migration

In the last post we tested the Cross-Cloud vMotion feature of HCX. In this post we will be testing the Bulk Migration feature.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

8: Testing HCX Cross Cloud Migration

To trigger the bulk migration feature, click on the Migrate Virtual Machines option.

[Screenshot: Migrate Virtual Machines option]

Fill in the default values for container selection etc., and select the migration type as Bulk Migration.

[Screenshot: selecting Bulk Migration as the migration type]

Click on Schedule Failover to specify the switchover time.

Select the start and end time.

[Screenshot: selecting the switchover start and end time]

Select the VMs that will be migrated to the cloud. You can change the migration type for individual VMs; for example, you can choose a few VMs for bulk migration and a few for vMotion or cold migration. … Read More
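Bulk migration replicates the selected VMs in the background and only switches them over inside the window chosen here, so it helps to work out that window in advance. The snippet below is only a small illustration of computing such a window (the upcoming Saturday 22:00 to Sunday 02:00 in the site's local time zone, which is a placeholder); nothing in it talks to HCX.

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

SITE_TZ = ZoneInfo("Asia/Kolkata")              # placeholder for the on-prem site's time zone
now = datetime.now(SITE_TZ)

# Next Saturday at 22:00 local time (Monday=0 ... Saturday=5)
days_until_saturday = (5 - now.weekday()) % 7
window_start = (now + timedelta(days=days_until_saturday)).replace(
    hour=22, minute=0, second=0, microsecond=0)
if window_start <= now:                          # already past this Saturday's slot
    window_start += timedelta(days=7)
window_end = window_start + timedelta(hours=4)   # four-hour switchover window

print("Switchover start:", window_start.isoformat())
print("Switchover end:  ", window_end.isoformat())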

Learning HCX-Part 8: Testing HCX Cross Cloud Migration

In the last post of this series, we discussed the various migration methods that are available with HCX. In this post we will be testing the cross-cloud migration method in the lab.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

6: Deploying Fleet Appliances

7: HCX Migration Methods

To configure cross-cloud migration for VMs, log in to the vCenter Web Client, click on the HCX plugin from the home page, and go to the Migration tab. Click on Migrate Virtual Machines.

[Screenshot: HCX Migration tab]

The default selections made here are automatically applied to all VMs selected for migration, for example, which compute cluster/datastore/folder the migrated VMs will land in. You can override this at a per-VM level as well.

There are various other options available as well, such as Upgrade Virtual Hardware, Upgrade VMware Tools, etc. … Read More
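Because the wizard exposes Upgrade Virtual Hardware and Upgrade VMware Tools options, I like to check the current Tools status and hardware version of the candidate VMs before submitting a wave. Below is a rough pyVmomi sketch of that check; the vCenter hostname, credentials, and VM names are placeholders from my lab, and this script is not part of HCX itself.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"                    # placeholder
USERNAME = "administrator@vsphere.local"
PASSWORD = "<password>"
CANDIDATES = {"app01", "db01"}                   # placeholder names of VMs shortlisted for migration

context = ssl._create_unverified_context()       # lab shortcut; use proper certificates in production
si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name in CANDIDATES:
            # Report VMware Tools status and virtual hardware version (e.g. vmx-13)
            print(f"{vm.name}: tools={vm.guest.toolsVersionStatus2}, hardware={vm.config.version}")
finally:
    Disconnect(si)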

Cleanup HCX Deployment

In this post we will learn how to clean up an HCX deployment the right way.

Below are the high-level steps for HCX cleanup.

1: Unstretch a Layer 2 network: Unstretching a Layer 2 network is necessary before removing the associated L2C appliance. The steps are given below.

  • Log in to the vCenter Web Client, click on the HCX plugin, and navigate to the Interconnect > Extended Networks tab.
  • Select the network you want to remove, click the X button, and hit OK to confirm.

2: Delete the L2C virtual appliance: To delete the L2C appliance, go to the Interconnect > HCX Components > Network Extension Service tab, select the appliance, and click on Remove.

[Screenshot: removing the L2C appliance]

Click on Yes to confirm the deletion of the appliance.

[Screenshot: confirming L2C appliance deletion]

You will see a message that appliance removal has started. Be patient; it takes 2-3 minutes to remove the appliance from both the on-prem and cloud sides.

[Screenshot: appliance removal in progress]

3: Delete the Cloud Gateway appliance: From the HCX Components tab, select the CGW appliance and click on Remove. … Read More

Learning HCX-Part 6: Deploying Fleet Appliances

In the last post of this series, we did the fleet configuration so that we can deploy the fleet appliances. In this post we will discuss the fleet appliances and deploy them.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

5: Configuring Interconnect Networks

There are 3 appliances that you can deploy in your HCX environment:

Cloud Gateway (CGW): The CGW appliance is responsible for creating an encrypted tunnel between on-premises and HCX Cloud for vMotion and replication traffic. CGW deployment is kicked off from on-prem, and the appliance is deployed as a virtual machine on both the on-prem and cloud sides.

The CGW constitutes the migration path for vMotion and replication traffic; this is achieved by establishing a secure connection between the two CGW VMs deployed on the on-prem and cloud sides, respectively. … Read More

Learning HCX-Part 5: Configuring Interconnect Networks

In the last post of this series, we paired HCX Enterprise with the HCX Cloud appliance. Now the next task is to deploy the fleet appliances, but before doing any deployment we have to configure the networks for the interconnects, i.e., the fleet configuration.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction to HCX

2: HCX Enterprise Deployment & Configuration

3: HCX Cloud Deployment & Configuration

4: HCX Site Pairing

Basically, we are defining a pool of IPs which the interconnect appliances will use when we start deploying them. The high-level steps of the fleet config are summarized below.

Log in to HCX Cloud using the public URL (https://hcx-cloud-public-fqdn) and navigate to Administration > Deployment Containers.

Deployment containers dictate where your fleet appliances will sit post-deployment. Click on the Add button to specify a new container.

[Screenshot: adding a deployment container]

Provide a name for the container and select the vCenter Server with which your HCX Cloud appliance is registered. … Read More
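Since the fleet configuration is essentially about reserving a small pool of IPs for the interconnect appliances (CGW, L2C, etc.), I find it useful to sanity-check the planned range before typing it into the UI. The snippet below is only an illustration using Python's standard ipaddress module; the subnet, gateway, and range are placeholders from my lab.

import ipaddress

network = ipaddress.ip_network("192.168.110.0/24")     # placeholder interconnect subnet
gateway = ipaddress.ip_address("192.168.110.1")
pool_start = ipaddress.ip_address("192.168.110.50")
pool_end = ipaddress.ip_address("192.168.110.59")      # ten addresses reserved for the fleet appliances

# Basic sanity checks before entering the range in the HCX UI
assert pool_start in network and pool_end in network, "pool must sit inside the subnet"
assert not (pool_start <= gateway <= pool_end), "gateway must not fall inside the pool"

pool = [ipaddress.ip_address(i) for i in range(int(pool_start), int(pool_end) + 1)]
print(f"Reserving {len(pool)} addresses: {pool[0]} - {pool[-1]} (gateway {gateway})")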