Building a VMware Cloud Foundation Lab: Part 1 – Infra Preparation

Recently I got a chance to do a nested vCF 3.5/3.7 deployment in my lab, and it was a great learning experience. A few friends of mine reached out to learn more about the VMware Cloud Foundation product as a whole and how to get our hands dirty with it.

Through this series of articles, I want to share my experience of doing a successful vCF 3.7 deployment in a nested environment.

What is VMware Cloud Foundation (vCF)?

As per the official VMware documentation:

VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud.

vCF helps you deploy a true SDDC environment in your infrastructure by following the VMware Validated Design recommendations, and it makes lifecycle management of the SDDC very easy.

NSX Guest Introspection: Components & Configuration

What is NSX Guest Introspection?

VMware NSX Guest Introspection is a security feature that, when enabled, offloads antivirus and anti-malware agent processing to a dedicated virtual appliance (service VM).

When Guest Introspection is enabled on a cluster, it continuously updates antivirus signatures, giving uninterrupted protection to the virtual machines running in that cluster. New virtual machines that are created (or existing virtual machines that went offline) are immediately protected with the most current antivirus signatures when they come online.

Components of NSX Guest Introspection

The main components of Guest Introspection are: 

1: Guest VM Thin Agent: This is installed as a driver as part of VMware Tools. It intercepts guest VM file/OS events and passes them to the ESXi host.

2: MUX Module: When Guest Introspection is installed on a cluster, NSX installs a new VIB (epsec-mux) on each host of that cluster. The VIB is responsible for receiving messages from the Thin Agent running in the guest VMs and passing the information to the Service Virtual Machine via a TCP session.
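If you want to quickly verify that the VIB actually landed on your hosts, a minimal PowerCLI check looks like the sketch below (it assumes an existing Connect-VIServer session, and the host name is a placeholder from my lab).

# esx01.lab.local is a placeholder -- point this at a host in the Guest Introspection enabled cluster
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2

# List the installed VIBs and filter for the Guest Introspection MUX module
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "epsec-mux" } | Select-Object Name, Version, InstallDate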

Deleting Stubborn Interconnect Configuration in HCX

I had a working HCX setup in my lab. While making some modifications to my setup, I tried chopping off my interconnect networking configuration on the HCX Cloud side. Deletion of the interconnect configuration was failing for me with the below error:

[Screenshot: hcx-pool-delete error]

Let me first explain how I landed in this situation. 

I deleted the interconnect appliances from my on-prem side to show my peers a demo of how the interconnects are deployed via the HCX plugin in the vSphere Web Client. During the demo, I did not notice that the site pairing between my on-prem HCX and the cloud-side HCX was broken (a vCenter upgrade on the cloud side had caused a certificate mismatch).

When I kicked off the CGW appliance removal, the backing VM got deleted and the appliance configuration disappeared from on-prem. But when I checked the cloud side, the peer CGW appliance and the Mobility Agent host were still intact.

Creating HCX Multi Site Service-Mesh for Hybrid Mobility

This post is in continuation of my last post, where I discussed what the Service Mesh feature of HCX is and how it works. In this post we will learn how to create a Service Mesh.

As discussed earlier, we need to have compute/network profiles created on both the on-prem and cloud sides.

The compute profile describes the infrastructure at the source and destination site and provides the placement details (Resource Pool, Datastore) where the virtual appliances should be placed during deployment and the networks to which they should connect.

Log in to the HCX Cloud appliance using your vSphere credentials (https://HCX-FQDN), navigate to the Multi-Site Service Mesh tab, and click on Create Compute Profile.


The Create Compute Profile page gives you a fair idea of what a compute profile comprises.

Provide a name for your profile and hit Continue.


Select the HCX services to be enabled. I selected all services in my case.

What is HCX Multi-Site Services Mesh

Recently I upgraded the HCX appliances in my lab and saw a new tab named “Multi Site Services Mesh” appearing in both the cloud-side and enterprise-side UI, and I was curious to learn more about this new feature.

What is HCX Multi Site Services Mesh?

As we know, in order to start consuming HCX we need the interconnect appliances (CGW, L2C and WAN Opt) deployed on both the on-prem and cloud sides. Before starting the appliance deployment, the Interconnect configuration should already be in place on the cloud side.

The Multi-Site Service Mesh enables the configuration, deployment, and serviceability of Interconnect virtual appliance pairs with ease. You now have the choice to deploy/manage HCX services with the traditional Interconnect interface or with the new Multi-Site Service Mesh. To deploy the HCX interconnect appliances, you choose either of the two methods.

Before you plan to use the HCX Multi-Site Service Mesh, let’s have a look at a few benefits we get out of this feature:

  • Uniformity: the same configuration patterns at the source and remote sites.

Managing HCX Migration via Powershell

HCX supports 3 methods for migrating VMs to the cloud:

  • Cold Migration
  • HCX Cross-Cloud vMotion
  • HCX Bulk Migration

To know more about these migration methods, please read this post.

HCX migrations can be scheduled from the HCX UI using the vSphere Client or automated using the HCX API. In the last post of this series, I demonstrated a few PowerCli commands that we can use for the HCX system. 

API/PowerCli is an obvious choice when you think of automation. Using automation not only helps in reducing the amount of user input required in the UI but also reduces the chances of human errors.

In this post, I will show the use of HCX PowerCLI cmdlets, which you can use to automate HCX migration.

The cmdlet New-HCXMigration creates an HCX (Hybrid Cloud Extension) migration request. 

Step 1: First, we have to identify the parameters that we need to pass to the cmdlet.
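To give you an idea of how the pieces fit together before we dig into the parameters, below is a rough sketch of a Bulk migration request built with these cmdlets. The VM, cluster, datastore, and network names are placeholders from my lab, and the exact parameter set can vary between module versions, so treat it as a starting point rather than a copy/paste recipe.

# Assumes an active session created with Connect-HCXServer
$srcSite = Get-HCXSite -Source
$dstSite = Get-HCXSite -Destination

$vm        = Get-HCXVM -Name "web-vm01"                              # placeholder VM name
$container = Get-HCXContainer -Name "Cloud-Cluster" -Site $dstSite   # target compute container
$datastore = Get-HCXDatastore -Name "vsanDatastore" -Site $dstSite   # target datastore

# Map the on-prem network to its counterpart on the cloud side
$netMapping = New-HCXNetworkMapping -SourceNetwork (Get-HCXNetwork -Name "VM-Network" -Site $srcSite) `
                                    -DestinationNetwork (Get-HCXNetwork -Name "VM-Network-Cloud" -Site $dstSite)

# Build the migration request (Bulk migration in this example)
$migration = New-HCXMigration -VM $vm -MigrationType Bulk `
               -SourceSite $srcSite -DestinationSite $dstSite `
               -TargetComputeContainer $container -TargetDatastore $datastore `
               -NetworkMapping $netMapping

# Validate the request and then submit it
Test-HCXMigration -Migration $migration
Start-HCXMigration -Migration $migration -Confirm:$false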

Getting Started With HCX PowerCli Module

With the release of PowerCLI 11.2, support for many new VMware products was introduced, including VMware HCX. The PowerCLI module name for HCX is “VMware.VimAutomation.HCX” and it currently has 20 cmdlets to manage HCX.

You can use Windows PowerShell to install/upgrade your PowerCLI to v11.2 using the below commands:
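If PowerCLI is not present yet, pull it from the PowerShell Gallery; if an older version is already installed, update it instead:

# Fresh install from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Or, if an older PowerCLI version is already installed
Update-Module -Name VMware.PowerCLI

# Verify the installed version
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version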

1: Once the necessary module is installed, we can use Get-Command to examine the cmdlets that are available for HCX.
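For example, the following lists (and counts) everything the HCX module ships:

Get-Command -Module VMware.VimAutomation.HCX

# Quick count of the available cmdlets
(Get-Command -Module VMware.VimAutomation.HCX).Count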

2: Authenticate with HCX: To connect to the HCX Manager, we need to use the Connect-HCXServer cmdlet.
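A minimal connection example (the FQDN below is a placeholder for your HCX Manager):

# hcx.lab.local is a placeholder for your HCX Enterprise/Cloud Manager FQDN
Connect-HCXServer -Server "hcx.lab.local" -Username "administrator@vsphere.local"

# You will be prompted for the password; alternatively, pass -Credential (Get-Credential)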


Exploring HCX API

VMware Hybrid Cloud Extension is a powerful product for data center migration, replacement, and disaster recovery. VMware HCX supports 3 major clouds at the moment: VMware Cloud on AWS, OVH Cloud, and IBM Cloud.

Although the HCX interface for workload migration is very simple and even first-time users can migrate workloads without much difficulty, it is always good to know about the API offerings of any product so that you can automate it via scripting.

The HCX API allows customers to automate all aspects of HCX, including the HCX VAMI UI for initial configuration as well as consuming the HCX services that are exposed in the vSphere UI.

HCX has its own API explorer (similar to the vSphere swagger interface). You can use additional tools like Postman or Curl to explore the capabilities of the HCX API.

Method 1: Using HCX API Interface

The HCX API interface can be accessed by browsing to https://<hcx-manager-fqdn>/hybridity/docs/index.html.
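If you prefer scripting against the API directly, the sketch below shows the commonly documented authentication flow: POST your credentials to /hybridity/api/sessions and reuse the x-hm-authorization token from the response headers on subsequent calls. The FQDN and credentials are placeholders, and the -SkipCertificateCheck switch needs PowerShell 6 or later (handy for labs with self-signed certificates).

$hcx  = "hcx.lab.local"                                                    # placeholder HCX Manager FQDN
$body = @{ username = "administrator@vsphere.local"; password = "VMware1!" } | ConvertTo-Json

# Authenticate; the session token comes back in the x-hm-authorization response header
$resp  = Invoke-WebRequest -Uri "https://$hcx/hybridity/api/sessions" -Method Post `
           -Body $body -ContentType "application/json" -SkipCertificateCheck
$token = $resp.Headers["x-hm-authorization"]

# Reuse the token on every subsequent request, picking endpoints from the API explorer, e.g.:
# Invoke-RestMethod -Uri "https://$hcx/hybridity/api/<endpoint-from-the-explorer>" `
#     -Headers @{ "x-hm-authorization" = $token } -SkipCertificateCheck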

Upgrading Clustered vRLI Deployment

In this post I will walk through the steps of upgrading a clustered vRLI deployment. Before preparing for the upgrade, make sure to read the VMware documentation for the supported upgrade path.

One very important consideration before you start upgrading vRLI:

Upgrading vRealize Log Insight must be done from the master node’s FQDN. Upgrading using the Integrated Load Balancer IP address is not supported.

To start the vRLI upgrade, log in to the web interface of the master node, navigate to Administration > Cluster, and click the Upgrade Cluster button.

Note: In my lab I am upgrading vRLI from 4.5 to 4.6. The upgrade bundle for this version is available here.


Hit the Upgrade button to start the upgrade.

Note: Make sure you have taken snapshots of the master/worker nodes before starting the upgrade.
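If you prefer taking those snapshots from PowerCLI instead of the UI, here is a quick sketch (the vCenter and VM names are placeholders from my lab; adjust them to your environment):

# Connect to the vCenter that hosts the vRLI nodes
Connect-VIServer -Server "vcsa.lab.local"

# vrli-master / vrli-worker0x are placeholder VM names
Get-VM -Name "vrli-master", "vrli-worker01", "vrli-worker02" |
    New-Snapshot -Name "pre-4.6-upgrade" -Description "Before vRLI cluster upgrade"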


Upload the .pak upgrade bundle file. 


On accepting the EULA, the upgrade process starts.


Wait 5-7 minutes for the upgrade to complete.


The upgrade is performed in a rolling fashion.

Configuring AD Authentication in vRealize Log Insight

vRealize Log Insight supports 3 Authentication methods:

  • Local authentication.
  • VMware Identity Manager authentication.
  • Active Directory authentication.

You can use more than one method in the same deployment, and users then select the type of authentication to use at login.

To configure AD authentication in vRLI, log in to the web interface and navigate to the Administration > Authentication page.


Switch to the Active Directory tab and toggle the “Enable Active Directory support” button.


Specify your domain-related details and hit the Test Connection button to verify that vRLI is able to talk to AD. Hit the Save button if the test is successful.


Now we need to specify the users/groups who should have access to vRLI. To do so, navigate to the Access Control tab, select “Users and Groups”, and click on New User to add an AD account.


Select AD as the authentication method and specify the domain name and the user who should have access to vRLI.