Building a VMware Cloud Foundation Lab: Part 2 – DNS and IP Pools

When planning a vCF deployment, you need a lot of IP addresses and DNS records.

In my environment, I have an AD-integrated DNS running on Windows Server 2012 R2.

If you are only planning to deploy the Management Workload Domain in your environment, you only need to create the forward and reverse lookup records for it. If a Virtual Infrastructure Workload Domain will be introduced in the future, plan the DNS records accordingly.

Note: Please see this article for a comprehensive list of DNS requirements for a vCF deployment.

Below is the list of DNS records that I created in my environment:

Workload Domain          Hostname                        IP Address
-----------------------  ------------------------------  -------------
Management               vcfesx01                        172.20.31.101
Management               vcfesx02                        172.20.31.102
Management               vcfesx03                        172.20.31.103
Management               vcfesx04                        172.20.31.104
Management               vcf-psc01                       172.20.31.105
Management               vcf-psc02                       172.20.31.106
Management               vcf-mgmtvc                      172.20.31.107
Management               vcf-mgmtnsx                     172.20.31.108
Management               vcf-sddcmgr                     172.20.31.109
Management               vcfvrli (iLB)                   172.20.31.110
Management               vcf-vrli01                      172.20.31.111
Management               vcf-vrli02                      172.20.31.112
Management               vcf-vrli03                      172.20.31.113
Virtual Infrastructure   wld-esxi01                      172.20.31.165
Virtual Infrastructure   wld-esxi02                      172.20.31.166
Virtual Infrastructure   wld-esxi03                      172.20.31.167
Virtual Infrastructure   vcf-wldvc01                     172.20.31.168
Virtual Infrastructure   vcf-wldnsx01                    172.20.31.169
NA                       vcf (cloud builder appliance)   172.20.31.100
NA                       vcf-lcm                         172.20.31.118
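With this many records, it is easy to fat-finger a reverse-zone entry. As a sanity check, the hostname/IP pairs from the table can be turned into matching forward (A) and reverse (PTR) entries with a short script. This is just a sketch: the lab.local domain and the trimmed record list are my assumptions, so substitute your own zone and the full table.

```python
import ipaddress

# Hostname -> IP pairs from the table above (trimmed to a few entries here;
# lab.local is an assumed domain -- substitute your own zone and full list).
RECORDS = {
    "vcfesx01": "172.20.31.101",
    "vcf-sddcmgr": "172.20.31.109",
    "vcf-lcm": "172.20.31.118",
}

def ptr_name(ip: str) -> str:
    """Return the reverse-lookup (PTR) record name for an IPv4 address."""
    return ipaddress.ip_address(ip).reverse_pointer

def zone_entries(records: dict, domain: str):
    """Yield matching (forward A record, reverse PTR record) pairs."""
    for host, ip in records.items():
        fqdn = f"{host}.{domain}"
        yield f"{fqdn}. A {ip}", f"{ptr_name(ip)}. PTR {fqdn}."

for fwd, rev in zone_entries(RECORDS, "lab.local"):
    print(fwd)
    print(rev)
```

Comparing the generated PTR names against what AD DNS created for you catches the mismatches that otherwise surface as failed vCF validation checks.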

Note: If you are planning to deploy vRealize and Horizon infrastructure using vCF, you need to create additional records as per each product's DNS requirements.

Building a VMware Cloud Foundation Lab: Part 1 – Infra Preparation

Recently I got a chance to do a nested vCF 3.5/3.7 deployment in my lab, and it was a great learning experience. A few friends of mine reached out to know more about the VMware Cloud Foundation product as a whole and how to get their hands dirty with it.

Through this series of articles, I want to share my experience of how to do a successful vCF 3.7 deployment in a nested environment.

What is VMware Cloud Foundation (vCF)?

As per the VMware official documentation:

VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud.

vCF helps you deploy a true SDDC environment in your infrastructure by following the VMware Validated Design recommendations, and it makes lifecycle management of the SDDC very easy.

NSX Guest Introspection: Components & Configuration

What is NSX Guest Introspection?

VMware NSX Guest Introspection is a security feature which, when enabled, offloads antivirus and anti-malware agent processing to a dedicated virtual appliance (service VM).

When Guest Introspection is enabled on a cluster, it continuously updates antivirus signatures, giving uninterrupted protection to the virtual machines running in that cluster. New virtual machines that are created (or existing virtual machines that were offline) are immediately protected with the most current antivirus signatures when they come online.

Components of NSX Guest Introspection

The main components of Guest Introspection are: 

1: Guest VM Thin Agent: This is installed as part of the VMware Tools drivers. It intercepts guest VM file/OS events and passes them to the ESXi host.

2: MUX Module: When Guest Introspection is installed on a cluster, NSX installs a new VIB (epsec-mux) on each host of that cluster. This VIB is responsible for receiving messages from the Thin Agent running in the guest VMs and passing the information to the Service Virtual Machine via a TCP session.

Deleting Stubborn Interconnect Configuration in HCX

I had a working HCX setup in my lab; while making some modifications to it, I tried chopping off my interconnect networking configuration on the HCX Cloud side. Deletion of the interconnect configuration was failing with the error below:

hcx-pool-delete error.JPG

Let me first explain how I landed in this situation. 

I deleted the interconnect appliances from my on-prem to show a demo to my peers on how the interconnects are deployed via the HCX plugin in the vSphere webclient. During the demo, I did not notice that the site pairing between my on-prem HCX and cloud side HCX was broken (due to a vCenter upgrade on the cloud side, a cert mismatch issue occurred).

When I kicked off the CGW appliance removal, the backing VM got deleted, and the appliance configuration disappeared from on-prem. But when I checked on the cloud side, the peer CGW appliance and the Mobility Agent host were still intact.

Creating HCX Multi Site Service-Mesh for Hybrid Mobility

This post is in continuation of my last one, where I discussed what the service mesh feature of HCX is and how it works. In this post, we will learn how to create a service mesh.

As we discussed earlier, we need to have compute/network profiles created on both the on-prem and cloud sides.

The compute profile describes the infrastructure at the source and destination site and provides the placement details (Resource Pool, Datastore) where the virtual appliances should be placed during deployment and the networks to which they should connect.

Log in to the HCX cloud appliance using your vSphere credentials (https://HCX-FQDN), navigate to the Multi-Site Service Mesh tab, and click Create Compute Profile.

hcx-mesh-1

The Create Compute Profile page gives you a fair idea of what a compute profile comprises.

Provide a name for your profile and hit Continue.

hcx-mesh-2

Select the HCX services to be enabled. I selected all services in my case.

What is HCX Multi-Site Service Mesh

Recently, I upgraded HCX appliances in my lab and saw a new tab named “Multi Site Service Mesh” appearing in both the cloud-side and on-prem side UIs, and I was curious to know about this new feature.

What is HCX Multi-Site Service Mesh?

To start consuming HCX, we need to have the interconnect appliances (CGW, L2C, and WAN Opt) deployed on both the on-prem and cloud sides. Before initiating the appliance deployment, we should have the interconnect configuration already in place on the cloud side.

The Multi-Site Service Mesh enables the configuration, deployment, and serviceability of Interconnect virtual appliance pairs with ease. You now have the choice to deploy/manage HCX services with the traditional Interconnect interface or with the new Multi-Site Service Mesh; to deploy the HCX IXs, you can choose either method.

Before you plan to use HCX Multi-Site Service Mesh, let’s have a look at a few benefits that we get out of this feature:

  • Uniformity: the same configuration patterns at the source and remote sites.

Managing HCX Migration via PowerShell

HCX supports 3 methods for migrating VMs to the cloud:

  • Cold Migration
  • HCX Cross-Cloud vMotion
  • HCX Bulk Migration

To know more about these migration methods, please read this post.

HCX migrations can be scheduled from the HCX UI using the vSphere Client or automated using the HCX API. In the last post of this series, I demonstrated a few PowerCLI commands that we can use with the HCX system.

API/PowerCLI is an obvious choice when you think of automation. Using automation not only reduces the amount of user input required in the UI but also reduces the chance of human error.

In this post, I will show the use of the HCX PowerCLI cmdlets, which you can use to automate HCX migration.

The cmdlet New-HCXMigration creates an HCX (Hybrid Cloud Extension) migration request. 

Step 1: First, we have to identify the parameters that we need to pass to the cmdlet.
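If PowerCLI is not an option, the same automation can be driven against the HCX REST API directly. The snippet below is only a sketch of building such a request: the hcx.lab.local FQDN, the /migrations?action=query endpoint, and the payload shape are my assumptions based on the HCX API explorer, so verify them against your HCX version before relying on this.

```python
import json
import urllib.request

# Hypothetical HCX Manager FQDN -- replace with your own.
HCX = "https://hcx.lab.local"

def hcx_request(path: str, token: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request against the HCX 'hybridity' API.

    The x-hcx-session header carries the session token returned at login.
    """
    req = urllib.request.Request(
        f"{HCX}/hybridity/api{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )
    req.add_header("x-hcx-session", token)
    return req

# Query in-flight migrations (endpoint and payload shape are assumptions --
# check them in your HCX version's API explorer):
req = hcx_request("/migrations?action=query", "<session-token>",
                  {"filter": {"migrationStatus": ["MIGRATING"]}})
print(req.full_url)
```

Sending the request (for example with urllib.request.urlopen) returns the migration list as JSON, which you can feed into reporting or scheduling scripts.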

Getting Started With the HCX PowerCLI Module

With the release of PowerCLI 11.2, support for many new VMware products was introduced, including VMware HCX. The PowerCLI module name for HCX is “VMware.VimAutomation.HCX”, and it currently has 20 cmdlets to manage HCX.

You can use Windows PowerShell to install or upgrade PowerCLI to v11.2 from the PowerShell Gallery using the Install-Module and Update-Module cmdlets.

1: Once the necessary module is installed, we can use Get-Command to examine the cmdlets that are available for HCX.

2: Authenticate with HCX: To connect to the HCX Manager, we need to use the Connect-HCXServer cmdlet.


Exploring HCX API

VMware Hybrid Cloud Extension is a powerful product for data center migration, replacement, and disaster recovery. VMware HCX supports 3 major clouds at the moment: VMware Cloud on AWS, OVH Cloud, and IBM Cloud.

Although the HCX interface for workload migration is very simple and even first-time users can migrate workloads without much difficulty, it is always good to know about the API offerings of any product so that you can automate it via scripting.

The HCX API allows customers to automate all aspects of HCX, including the HCX VAMI UI for initial configuration as well as consuming the HCX services that are exposed in the vSphere UI.

HCX has its own API explorer (similar to the vSphere swagger interface). You can use additional tools like Postman or Curl to explore the capabilities of the HCX API.

Method 1: Using HCX API Interface

The HCX API interface can be accessed by browsing to https://<hcx-manager-fqdn>/hybridity/docs/index.html.
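Whichever tool you explore with, the first step is obtaining a session token. Below is a minimal sketch of authenticating from Python's standard library; the hcx.lab.local FQDN and credentials are placeholders, and the /hybridity/api/sessions endpoint and x-hcx-session response header should be verified against your HCX version's API explorer.

```python
import json
import ssl
import urllib.request

def session_url(fqdn: str) -> str:
    """Return the HCX API login endpoint for a given HCX Manager FQDN."""
    return f"https://{fqdn}/hybridity/api/sessions"

def hcx_login(fqdn: str, username: str, password: str) -> str:
    """POST credentials to HCX Manager and return the session token.

    The token comes back in the x-hcx-session response header and is
    passed on subsequent API calls in a request header of the same name.
    """
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        session_url(fqdn), data=body,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    # Lab appliances usually run self-signed certificates, so certificate
    # verification is skipped here; do not do this in production.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.headers["x-hcx-session"]

# Example call (hypothetical FQDN and credentials):
# token = hcx_login("hcx.lab.local", "administrator@vsphere.local", "VMware1!")
print(session_url("hcx.lab.local"))
```

The returned token is then attached to every subsequent request, which is exactly what Postman or Curl would do with a header once you log in through the API explorer.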

Upgrading Clustered vRLI Deployment

In this post, I will walk through the steps of upgrading a clustered vRLI deployment. Before preparing for the upgrade, make sure to read the VMware documentation for the supported upgrade path.

One very important consideration before you start upgrading vRLI:

Upgrading vRealize Log Insight must be done from the master node’s FQDN. Upgrading using the Integrated Load Balancer IP address is not supported.

To start the vRLI upgrade, log in to the web interface of the master node, navigate to Administration > Cluster, and click the Upgrade Cluster button.

Note: In my lab I am upgrading vRLI from 4.5 to 4.6. The upgrade bundle for this version is available here.

vrli-upgrade01.JPG

Hit the Upgrade button to start the upgrade.

Note: Make sure you have taken snapshots of the master/worker nodes before starting the upgrade.

vrli-upgrade02

Upload the .pak upgrade bundle file. 

vrli-upgrade03

On accepting the EULA, the upgrade process starts.

vrli-upgrade04

vrli-upgrade05

Wait 5-7 minutes for the upgrade to complete.

vrli-upgrade06

The upgrade is performed in a rolling fashion.