Building a VMware Cloud Foundation Lab: Part 3 – ESXi Host Deployment & Configuration

In the last post of this series, I talked about the DNS records and IP pools that should be in place for a successful vCF deployment.

In this post, I will walk through the steps needed to create nested ESXi hosts and the post-installation configuration.

Before creating ESXi hosts (nested or physical), we need to identify the build/version of ESXi and the other components that are compatible with a given vCF version. VMware KB 52520 helps you identify this.

For vCF 3.7, please refer to the table below for the required build numbers.

[Image: vcf-build-corelation.JPG (vCF 3.7 build correlation table)]

Once I had the build info handy, I deployed 4 nested ESXi hosts for the management workload domain. These VMs were created with the following specifications (see the PowerCLI sketch after this list):

  • 10 vCPUs
  • 64 GB Memory
  • 4 hard disks (thin provisioned): 16 GB (boot disk), 100 GB (vSAN cache tier), and 2 x 150 GB (vSAN capacity tier)
  • 2 VMXNET3 NICs connected to the same portgroup
  • Virtual hardware version 14 (compatibility: ESXi 6.7 and later, selected during VM creation)
  • Guest OS: Other
  • Guest OS version: VMware ESXi 6.5 or later
  • Expose hardware assisted virtualization to the guest OS enabled
  • EFI firmware
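
Here is a minimal PowerCLI sketch for creating one of these nested hosts. It is only one way to do it, not the exact workflow I used; the vCenter FQDN, physical host, datastore, and portgroup name ("Nested-PG") are placeholders for your environment.

```powershell
# Connect to the vCenter that manages the physical lab host (placeholder FQDN)
Connect-VIServer -Server vcenter.lab.local

# Base VM: 10 vCPUs, 64 GB RAM, 16 GB thin boot disk, ESXi 6.5+ guest OS, HW version 14
$vm = New-VM -Name "vcfesx01" -VMHost "physical-esxi01.lab.local" -Datastore "ds01" `
             -NumCpu 10 -MemoryGB 64 -DiskGB 16 -DiskStorageFormat Thin `
             -NetworkName "Nested-PG" -GuestId "vmkernel65Guest" -HardwareVersion "vmx-14"

# Two VMXNET3 NICs on the same portgroup
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false | Out-Null
New-NetworkAdapter -VM $vm -NetworkName "Nested-PG" -Type Vmxnet3 -StartConnected | Out-Null

# vSAN cache (100 GB) and capacity (2 x 150 GB) disks, thin provisioned
New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thin | Out-Null
New-HardDisk -VM $vm -CapacityGB 150 -StorageFormat Thin | Out-Null
New-HardDisk -VM $vm -CapacityGB 150 -StorageFormat Thin | Out-Null

# Expose hardware-assisted virtualization to the guest OS and switch the firmware to EFI
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$spec.Firmware = "efi"
$vm.ExtensionData.ReconfigVM($spec)
```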

ESXi Host Patching

I initially installed the hosts with ESXi 6.7 U1 build 10302608 (as I had this ISO handy) and then patched them to ESXi 6.7 EP 09 build 13644319. Read More
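
For reference, here is a hedged PowerCLI sketch of one way to apply such a patch to a nested host using an offline bundle and Get-EsxCli. The host name and the depot path on the datastore are placeholders, and the host must be able to enter maintenance mode.

```powershell
Connect-VIServer -Server vcenter.lab.local   # placeholder vCenter

$vmhost = Get-VMHost -Name "vcfesx01.lab.local"
Set-VMHost -VMHost $vmhost -State Maintenance | Out-Null

# Apply the offline bundle that was uploaded to a datastore beforehand (placeholder path)
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$arguments = $esxcli.software.vib.update.CreateArgs()
$arguments.depot = "/vmfs/volumes/datastore1/ESXi670-patch-bundle.zip"
$esxcli.software.vib.update.Invoke($arguments)

# Reboot to complete the patch, then take the host out of maintenance mode
Restart-VMHost -VMHost $vmhost -Confirm:$false
```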

Building a VMware Cloud Foundation Lab: Part 2 – DNS and IP Pools

When you are planning a vCF deployment, you need lots of IP addresses and DNS records.

In my environment, I have an AD-integrated DNS server running on Windows Server 2012 R2.

If you are only planning on deploying the Management Workload Domain in your environment, you only need to create the forward and reverse lookup records for the Management Workload Domain. If a Virtual Infrastructure Workload Domain will be introduced in the future, then you need to plan the DNS records accordingly.

Note: Please see this article for a comprehensive list of DNS requirements for a vCF deployment.

Below is the list of DNS records that I created in my environment (a PowerShell sketch for adding them follows the table):

Workload Domain Hostname IP Address
Management vcfesx01 172.20.31.101
Management vcfesx02 172.20.31.102
Management vcfesx03 172.20.31.103
Management vcfesx04 172.20.31.104
Management vcf-psc01 172.20.31.105
Management vcf-psc02 172.20.31.106
Management vcf-mgmtvc 172.20.31.107
Management vcf-mgmtnsx 172.20.31.108
Management vcf-sddcmgr 172.20.31.109
Management vcfvrli (iLB) 172.20.31.110
Management vcf-vrli01 172.20.31.111
Management vcf-vrli02 172.20.31.112
Management vcf-vrli03 172.20.31.113
Virtual Infrastructure wld-esxi01 172.20.31.165
Virtual Infrastructure wld-esxi02 172.20.31.166
Virtual Infrastructure wld-esxi03 172.20.31.167
Virtual Infrastructure vcf-wldvc01 172.20.31.168
Virtual Infrastructure vcf-wldnsx01 172.20.31.169
NA vcf (cloud builder appliance) 172.20.31.100
NA vcf-lcm 172.20.31.118
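
If you prefer to script these, here is a hedged PowerShell sketch using the DnsServer module on the Windows 2012 R2 DNS server. The zone name "lab.local" is a placeholder, -CreatePtr assumes the 172.20.31.x reverse lookup zone already exists, and the remaining workload-domain records follow the same pattern.

```powershell
# Run on the DNS server (or add -ComputerName <dns-server> to the cmdlet)
$zone = "lab.local"   # placeholder forward lookup zone

$records = @{
    "vcfesx01"    = "172.20.31.101"
    "vcfesx02"    = "172.20.31.102"
    "vcfesx03"    = "172.20.31.103"
    "vcfesx04"    = "172.20.31.104"
    "vcf-psc01"   = "172.20.31.105"
    "vcf-psc02"   = "172.20.31.106"
    "vcf-mgmtvc"  = "172.20.31.107"
    "vcf-mgmtnsx" = "172.20.31.108"
    "vcf-sddcmgr" = "172.20.31.109"
    "vcf"         = "172.20.31.100"   # cloud builder appliance
}

foreach ($name in $records.Keys) {
    # A record plus the matching PTR record in the existing reverse zone
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $name `
        -IPv4Address $records[$name] -CreatePtr
}
```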

Note: If you are planning to deploy vRealize and Horizon infrastructure using vCF, you need to create additional records as per each product's DNS requirements. Read More

Building a VMware Cloud Foundation Lab: Part 1 – Infra Preparation

Recently I got a chance to do a nested vCF 3.5/3.7 deployment in my lab, and it was a great learning experience. A few friends of mine reached out to me to learn more about the VMware Cloud Foundation product as a whole and how to get our hands dirty with it.

Through this series of articles, I want to share my experience with you on how to do a successful vCF 3.7 deployment in a nested environment.  

What is VMware Cloud Foundation (vCF)?

As per the official VMware documentation:

VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud.

vCF helps you deploy a true SDDC environment in your infrastructure by following the VMware Validated Design recommendations, and it makes lifecycle management of the SDDC very easy. Read More

NSX Guest Introspection: Components & Configuration

What is NSX Guest Introspection?

VMware NSX Guest Introspection is a security feature that, when enabled, offloads antivirus and anti-malware agent processing to dedicated virtual appliances (service VMs).

When Guest Introspection is enabled on a cluster, it continuously updates antivirus signatures, giving uninterrupted protection to the virtual machines running in that cluster. Newly created virtual machines (and existing virtual machines that went offline) are immediately protected with the most current antivirus signatures when they come online.

Components of NSX Guest Introspection

The main components of Guest Introspection are: 

1: Guest VM Thin Agent: This is installed as a driver that ships with VMware Tools. It intercepts guest VM file/OS events and passes them to the ESXi host.

2: MUX Module: When Guest Introspection is installed on a cluster, NSX installs a new VIB (epsec-mux) on each host of that cluster. This VIB is responsible for receiving messages from the Thin Agent running in the guest VMs and passing the information to the Service Virtual Machine via a TCP session. Read More
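
A quick, hedged PowerCLI sketch to confirm that the epsec-mux VIB mentioned above is present on every host of the prepared cluster; the vCenter FQDN and cluster name are placeholders.

```powershell
Connect-VIServer -Server vcenter.lab.local

foreach ($vmhost in (Get-Cluster -Name "NSX-Cluster" | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    # List installed VIBs and filter for the Guest Introspection MUX module
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -match "epsec-mux" } |
        Select-Object @{N = "Host"; E = { $vmhost.Name }}, Name, Version
}
```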

Deleting Stubborn Interconnect Configuration in HCX

I had a working HCX setup in my lab. While making some modifications to my setup, I tried to delete my interconnect networking configuration on the HCX Cloud side. Deletion of the interconnect configuration failed with the below error:

[Image: hcx-pool-delete error.JPG (interconnect deletion error)]

Let me first explain how I landed in this situation. 

I deleted the interconnect appliances from my on-prem environment to demo to my peers how the interconnects are deployed via the HCX plugin in the vSphere Web Client. During the demo, I did not notice that the site pairing between my on-prem HCX and the cloud-side HCX was broken (a cert mismatch issue had occurred due to a vCenter upgrade on the cloud side).

When I kicked off the CGW appliance removal, the backing VM got deleted and the appliance configuration disappeared from on-prem. But when I checked on the cloud side, the peer CGW appliance and the Mobility Agent host were still intact. Read More

Creating HCX Multi Site Service-Mesh for Hybrid Mobility

This post is in continuation of my last one, where I discussed what the Service Mesh feature of HCX is and how it works. In this post, we will learn how to create a service mesh.

As discussed earlier, we need to have compute/network profiles created on both the on-prem and cloud sides.

The compute profile describes the infrastructure at the source and destination site and provides the placement details (Resource Pool, Datastore) where the virtual appliances should be placed during deployment and the networks to which they should connect.

Log in to the HCX Cloud appliance using your vSphere credentials (https://HCX-FQDN), navigate to the Multi-Site Service Mesh tab, and click Create Compute Profile.

[Image: hcx-mesh-1]

The Create Compute Profile page gives you a fair idea of what a compute profile comprises.

Provide a name for your profile and hit Continue.

[Image: hcx-mesh-2]

Select the HCX services to be enabled. I selected all services in my case. Read More

What is HCX Multi-Site Services Mesh

Recently I upgraded the HCX appliances in my lab and saw a new tab named “Multi Site Services Mesh” appearing in both the cloud-side and enterprise-side UI, and I was curious to learn more about this new feature.

What is HCX Multi Site Services Mesh?

As we know, in order to start consuming HCX, we need to have the interconnect appliances (CGW, L2C, and WAN Opt) deployed on both the on-prem and cloud sides. Before starting the deployment of the appliances, we should have the Interconnect configuration already in place on the cloud side.

The Multi-Site Service Mesh enables the configuration, deployment, and serviceability of Interconnect virtual appliance pairs with ease. You now have the choice to deploy/manage HCX services with the traditional Interconnect interface or with the new Multi-Site Service Mesh. To deploy the HCX interconnect appliances (IX), you choose either method.

Before you plan to use the HCX Multi-Site Service Mesh, let’s have a look at a few benefits we get out of this feature:

  • Uniformity: the same configuration patterns at the source and remote sites.
Read More

Managing HCX Migration via PowerShell

HCX supports 3 methods for migrating VMs to the cloud:

  • Cold Migration
  • HCX Cross-Cloud vMotion
  • HCX Bulk Migration

To know more about these migration methods, please read this post.

HCX migrations can be scheduled from the HCX UI using the vSphere Client or automated using the HCX API. In the last post of this series, I demonstrated a few PowerCLI commands that we can use with the HCX system.

The API/PowerCLI is an obvious choice when you think of automation. Using automation not only reduces the amount of user input required in the UI but also reduces the chance of human error.

In this post, I will show the HCX PowerCLI cmdlets that you can use to automate HCX migrations.

The cmdlet New-HCXMigration creates an HCX (Hybrid Cloud Extension) migration request. 

Step 1: First, we have to identify the parameters that we need to pass to the cmdlet. Read More
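
As a preview of where this is going, here is a hedged sketch of how those parameters typically come together for a Bulk migration. It assumes an existing site pairing, and every VM, container, datastore, and network name below is a placeholder.

```powershell
Connect-HCXServer -Server hcxmanager.lab.local -Credential (Get-Credential)

$srcSite = Get-HCXSite -Source
$dstSite = Get-HCXSite -Destination

$vm         = Get-HCXVM -Name "app-vm01"
$targetRP   = Get-HCXContainer -Site $dstSite -Name "Compute-ResourcePool"
$targetDS   = Get-HCXDatastore -Site $dstSite -Name "WorkloadDatastore"
$srcNet     = Get-HCXNetwork -Site $srcSite -Name "VM Network"
$dstNet     = Get-HCXNetwork -Site $dstSite -Name "Workload-Segment"
$netMapping = New-HCXNetworkMapping -SourceNetwork $srcNet -DestinationNetwork $dstNet

# Build the migration request, then submit it
$migration = New-HCXMigration -VM $vm -MigrationType Bulk `
    -SourceSite $srcSite -DestinationSite $dstSite `
    -TargetComputeContainer $targetRP -TargetDatastore $targetDS `
    -NetworkMapping $netMapping -DiskProvisionType Thin

Start-HCXMigration -Migration $migration -Confirm:$false
```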

Getting Started With HCX PowerCLI Module

With the release of PowerCLI 11.2, support for many new VMware products was introduced, including VMware HCX. The PowerCLI module name for HCX is “VMware.VimAutomation.HCX”, and it currently has 20 cmdlets to manage HCX.

You can use Windows PowerShell to install/upgrade your PowerCLI to v11.2 using the below commands:
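
A minimal sketch of those commands using the PowerShell Gallery:

```powershell
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
# Or, to upgrade an existing installation:
# Update-Module -Name VMware.PowerCLI
```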

1: Once the necessary module is installed, we can use Get-Command to examine the cmdlets that are available for HCX.
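
For example:

```powershell
# List the cmdlets shipped in the HCX module
Get-Command -Module VMware.VimAutomation.HCX
```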

2: Authenticate with HCX: To connect to the HCX Manager, we need to use the Connect-HCXServer cmdlet.
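
A hedged example, assuming the -Credential parameter and a placeholder FQDN for the HCX Manager:

```powershell
Connect-HCXServer -Server hcxmanager.lab.local -Credential (Get-Credential)
```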

Read More

Exploring HCX API

VMware Hybrid Cloud Extension is a powerful product for data center migration, replacement, and disaster recovery. VMware HCX supports 3 major clouds at the moment: VMware Cloud on AWS, OVH Cloud, and IBM Cloud.

Although the HCX interface for workload migration is very simple and even first-time users can migrate workloads without much difficulty, it is always good to know about the API offerings of any product so that you can automate it via scripting.

The HCX API allows customers to automate all aspects of HCX, including the HCX VAMI UI for initial configuration as well as consuming the HCX services that are exposed in the vSphere UI.

HCX has its own API explorer (similar to the vSphere swagger interface). You can use additional tools like Postman or Curl to explore the capabilities of the HCX API.
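
As a starting point, here is a hedged PowerShell sketch of the commonly documented authentication flow: POST credentials to the /hybridity/api/sessions endpoint and reuse the returned x-hcx-authorization header in later calls. The FQDN and credentials are placeholders, and -SkipCertificateCheck requires PowerShell 7 (on Windows PowerShell 5.1 you would relax certificate validation differently).

```powershell
$hcxServer = "hcxmanager.lab.local"   # placeholder HCX Manager FQDN
$body = @{ username = "administrator@vsphere.local"; password = "VMware1!" } | ConvertTo-Json

# Authenticate and capture the session token from the response headers
$response = Invoke-WebRequest -Uri "https://$hcxServer/hybridity/api/sessions" `
    -Method Post -Body $body -ContentType "application/json" -SkipCertificateCheck

$token   = $response.Headers."x-hcx-authorization" | Select-Object -First 1
$headers = @{ "x-hcx-authorization" = $token; "Accept" = "application/json" }
# $headers can now be passed to Invoke-RestMethod calls against other HCX API endpoints
```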

Method 1: Using the HCX API Interface

The HCX API interface can be accessed by typing https://<hcx-manager-fqdn>/hybridity/docs/index.html… Read More