What's New in HCX 4.0

VMware HCX 4.0 is all set to be released today. HCX 4.0 is a major release that introduces new functionality and enhancements.

In this post, I will be explaining these feature additions. The new features can be broadly categorized as follows:

  • Migration Enhancements
  • Network Extension Enhancements
  • Interconnect Enhancements

Let’s discuss these one by one.

Migration Enhancements

1: Migration Event Details: When a VM migration is in progress, the HCX migration platform captures high-level status for the major phases like transfer, continuous replication, and switchover, and the status reported in the UI is always shown as “Transfer in Progress,” “Switchover in Progress,” etc.

The actual details about “what state the migration is in right now” and “how long it has been in that state” are hidden from the end user. Also, the platform does not report the exact reason why a migration fails or why it is taking a long time to move to the next state. Read More
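HCX 4.0 now surfaces these per-phase events and, presumably, the same detail is available through the migration APIs. As a rough illustration only, here is a minimal Python sketch of polling migration status; the endpoint path, the field names ("state", "timeInState"), and the token header are assumptions for illustration, not the documented schema, so verify them against the HCX API reference.

import requests

HCX = "https://hcx-manager.corp.local"   # placeholder HCX Manager FQDN
TOKEN = "<x-hm-authorization token>"     # obtained from the sessions API

# Hypothetical query: list migrations and print each one's current state
# and how long it has been in that state (field names are illustrative).
resp = requests.post(
    f"{HCX}/hybridity/api/migrations?action=query",
    headers={"x-hm-authorization": TOKEN, "Accept": "application/json"},
    json={},
    verify=False,   # lab only; HCX often runs with self-signed certs
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    print(item.get("migrationId"), item.get("state"), item.get("timeInState"))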

Layer 2 Bridging With NSX-T

In this post, I will be talking about the Layer 2 Bridging functionality of NSX-T and discuss use cases and design considerations when planning to implement this feature. Let’s get started.

NSX-T supports L2 bridging between overlay logical segments and VLAN-backed networks. When workloads connected to an NSX-T overlay segment require L2 connectivity to VLAN-connected workloads, or need to reach a physical device (such as a physical gateway, load balancer, or firewall), the NSX-T Layer 2 bridge can be leveraged.
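For a sense of what the configuration looks like, the sketch below attaches an Edge bridge profile to an overlay segment through the NSX-T Policy API using Python. Treat it as a minimal sketch under assumptions: the manager address, segment ID, profile name, and VLAN ID are placeholders, and the payload shape should be verified against the NSX-T API documentation for your version.

import requests

NSX = "https://nsx-manager.corp.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "<password>")           # lab credentials

# Illustrative payload: attach an Edge bridge profile to an overlay segment
# so it can be bridged to VLAN 100 (profile path and VLAN are placeholders).
body = {
    "bridge_profiles": [
        {
            "bridge_profile_path": "/infra/sites/default/enforcement-points/"
                                   "default/edge-bridge-profiles/web-bridge",
            "vlan_ids": ["100"],
        }
    ]
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/web-overlay-seg",
    auth=AUTH,
    json=body,
    verify=False,   # lab only
)
resp.raise_for_status()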

Use Cases of Layer 2 Bridging

A few of the use cases that come to mind are:

1: Migrating workloads connected to VLAN port groups to NSX-T overlay segments: Customers who are looking to migrate workloads from legacy vSphere infrastructure to an SDDC can leverage Layer 2 Bridging to move their workloads seamlessly.

When planning for migrations, some of the associated challenges are re-IPing workloads, migrating firewall rules, NAT rules, etc. Read More

What’s New in VMware Cloud Foundation 4.2

VMware Cloud Foundation 4.2 will be out soon, and like every other release, 4.2 comes with exciting new features. In this post, I will be explaining a few of those. So let’s get started.

1: Static IP Pool for NSX-T TEPs: This is probably one of the most awaited features of VMware Cloud Foundation. VCF 4.2 allows you to leverage static IP pools for NSX-T Host Overlay (TEP) networks as an alternative to DHCP, so you no longer need to maintain additional infrastructure (a DHCP server). Both the management domain and VI workload domains can now make use of static IPs.

In the VCF configuration workbook, you will now see an additional section where you can specify the IP range for Host TEP.
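To give a sense of the inputs involved, here is a small Python sketch of the values you would gather for a Host TEP static pool, with a quick sanity check that the range sits inside the Host Overlay subnet; the field names are illustrative placeholders, not the exact VCF workbook or bring-up schema.

import ipaddress

# Illustrative only: values typically gathered for a Host TEP static IP pool
# (field names are placeholders, not the exact VCF schema).
host_tep_pool = {
    "subnet": "172.16.254.0/24",
    "gateway": "172.16.254.1",
    "range_start": "172.16.254.10",
    "range_end": "172.16.254.100",
}

# Sanity check: the pool range must fall inside the Host Overlay subnet.
net = ipaddress.ip_network(host_tep_pool["subnet"])
assert ipaddress.ip_address(host_tep_pool["range_start"]) in net
assert ipaddress.ip_address(host_tep_pool["range_end"]) in net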

2: Release Versions UI: A new tab (Available Versions) has been added to the SDDC Manager UI which shows information on the Bill of Materials, new features, and end of general support dates for each available VCF release. Read More

Configure VCD in HCX via API

In my last post, I documented the HCX installation workflow for VCD-based clouds. In this post, I am going to show how to do the same via API.

Once HCX Cloud Manager has been deployed and is booted up, you can make use of the APIs below to integrate VCD with HCX.

1: Import VCD Certificate

Response Output

2: Configure VCD
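The full post shows the exact requests and the response output for each step. As a rough Python illustration of the flow, something along these lines can be used; the endpoint paths, payload fields, and credentials below are assumptions for illustration and must be checked against the HCX API reference for your version.

import requests

HCX_CLOUD = "https://hcx-cloud.corp.local"   # placeholder HCX Cloud Manager
TOKEN = "<x-hm-authorization token>"         # obtained from the sessions API
HEADERS = {"x-hm-authorization": TOKEN, "Content-Type": "application/json"}
VCD_URL = "https://vcd.corp.local"           # placeholder VCD public endpoint

# 1: Import the VCD certificate so HCX trusts the endpoint
# (hypothetical path and payload).
requests.post(
    f"{HCX_CLOUD}/hybridity/api/admin/certificates",
    headers=HEADERS,
    json={"url": VCD_URL},
    verify=False,   # lab only
).raise_for_status()

# 2: Register the VCD endpoint with HCX (hypothetical path and payload).
requests.post(
    f"{HCX_CLOUD}/hybridity/api/cloudConfigs",
    headers=HEADERS,
    json={"url": VCD_URL, "username": "administrator@system", "password": "<password>"},
    verify=False,
).raise_for_status()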

Read More

HCX Integration With VMware Cloud Director 10.x

This blog post provides an overview of the HCX installation workflow for VMware Cloud Director-based clouds.

The diagram below, taken from the official VMware docs, shows the high-level HCX architecture for VCD-based clouds.

HCX Cloud System & Network Requirements

Before starting the HCX Cloud installation, please ensure that you’ve met all the system and network port/protocol requirements. These are documented here.

Firewall Requirements

  • The site’s WAN firewall will need to allow inbound HTTPS connections destined for the HCX Cloud. HCX Cloud will make outbound HTTPS requests.
  • The HCX Cloud site firewall also needs to allow inbound UDP-500 and UDP-4500 connections destined for the HCX appliances.
  • All other flows allow HCX to integrate with VMware SDDC components; typically, these are not firewalled within the datacenter.

The below diagram shows various ports that must be allowed in the firewall for a successful HCX cloud deployment in the destination environment.

VMware Cloud Director Pre-requisites

Make sure the following is already configured in VCD:

1: VCD Public Address is set and load balancer cert is imported (for multi-cell deployment)

2: RabbitMQ is installed and configured in VCD. Read More

Upgrading HCX Interconnect Appliances via API

HCX Interconnect appliances are deployed from OVAs that are included in the HCX Manager appliance. When HCX Manager is upgraded to a newer version, it carries the corresponding upgrade bits that are used to upgrade the IX appliances.

There is a GET /appliances API call which, when fired, looks for newer OVA versions. Once a newer OVA is found, its version is compared with the version of the deployed appliance. If both versions are the same, no action is taken. However, if the deployed version is lower than the newly discovered version, information about the upgrade is returned in the API response.

Below are the API calls we need to execute to upgrade the IX appliances.

Step 1: Obtain Auth Token
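The full post lists the exact requests and headers. As a quick Python sketch of the flow described above, the token is obtained from the sessions endpoint and then passed in the x-hm-authorization header when querying the appliances; the base path and hostname are my assumptions, so verify them against the HCX API reference.

import requests

HCX = "https://hcx-manager.corp.local"       # placeholder HCX Manager FQDN

# Step 1: obtain an auth token; the token is returned in the
# x-hm-authorization response header (path assumed from the HCX API).
session = requests.post(
    f"{HCX}/hybridity/api/sessions",
    json={"username": "administrator@vsphere.local", "password": "<password>"},
    verify=False,                            # lab only
)
session.raise_for_status()
token = session.headers["x-hm-authorization"]

# Query the deployed appliances; if a newer OVA is discovered, the response
# includes the upgrade information described above.
appliances = requests.get(
    f"{HCX}/hybridity/api/appliances",
    headers={"x-hm-authorization": token, "Accept": "application/json"},
    verify=False,
)
appliances.raise_for_status()
print(appliances.json())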

Read More

Upgrading HCX Manager via API

While working on an HCX-related request from one of the hyperscalers, I came across an interesting ask: the hyperscaler was looking to automate the HCX upgrade via API.

I checked HCX’s official API guide and Swagger documentation and did not find any API to upgrade the HCX Manager appliance. The only available APIs are for upgrading the HCX Interconnect appliances.

After searching through internal documentation for an hour, I did not find any concrete info, so I decided to explore the network inspect option in a browser, which exposes the APIs behind any operation you trigger from the UI.

In this post, I am going to demonstrate the API calls needed for a successful HCX Manager upgrade.

Disclaimer: HCX Manager upgrade APIs are not yet supported officially and will be shipped with the upcoming release of HCX.

Step 1: Check HCX Appliance Current Upgrade Status

Although this step is optional, I would recommend verifying the current upgrade status to ensure you do not accidentally attempt to upgrade an appliance that is already on the target version. Read More
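Since these calls are unofficial, the exact URL is whatever the browser’s network inspect captured; I am not reproducing it from memory. The Python sketch below only shows the shape of the check, with the path left as a placeholder to be replaced by the captured URL.

import requests

HCX = "https://hcx-manager.corp.local"        # placeholder HCX Manager FQDN
TOKEN = "<x-hm-authorization token>"          # obtained from the sessions API

# Placeholder: substitute the upgrade-status path captured from the
# browser's network inspect while walking through the upgrade screens.
UPGRADE_STATUS_PATH = "/hybridity/api/<captured-upgrade-status-endpoint>"

resp = requests.get(
    f"{HCX}{UPGRADE_STATUS_PATH}",
    headers={"x-hm-authorization": TOKEN, "Accept": "application/json"},
    verify=False,   # lab only
)
resp.raise_for_status()
print(resp.json())   # confirm the current version before attempting the upgrade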

Cross-VDC Networking with VCD Data Center Groups

In this post, I will be talking about the Data Center Groups feature of VMware Cloud Director. 

A data center group acts as a Cross-VDC router that provides centralized networking administration, egress point configuration, and east-west traffic between all networks within the group. The cross-virtual data center networking feature enables organizations to stretch layer 2 networks across multiple VDCs.

Using data center groups, we can share organization networks across various VDCs. To do so, we first group the virtual data centers and then create a VDC network that is scoped to the data center group. A data center group can contain between one and 16 virtual data centers that are configured to share multiple egress points.

Note: A VDC can participate in multiple data center groups.

Now it’s time to jump into the lab and configure the same. 

1: Configure Compute Provider Scope

Before a Data Center Group can be configured, we need to specify a “Compute Provider Scope.” Read More

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and closed several gaps in the VCD and NSX-T integration. In this release of VCD, the following NSX-T enhancements are added:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so they can talk to the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: Allowing multiple tenant edge gateways to use the same external network.
  • Dedicated: One-to-one relationship between the external network and the NSX-T edge gateway, and no other edge gateways can connect to the external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as Route Advertisement management and BGP configuration. Read More

Getting Started With NSX ALB: Part-4-Load Balancer in Action

In the last post of this series, I completed NSX-T integration with NSX ALB. Now it’s time to test the load balancer. 

If you have missed the earlier posts of this series, you can read them via the links below:

1: NSX ALB Introduction & Architecture

2: Avi Controller Deployment & Configuration

3: NSX ALB Integration With NSX-T

Let’s get started.

Before I dive into the lab, let me first explain the topology that I have implemented in my lab.

  • I have two web servers sitting on the Web-LS logical segment, backed by subnet 192.40.40.0/24.
  • Logical segments ALB-Mgmt-LS and ALB-Data-LS are connected to the same Tier-1 gateway to which the Web-LS segment is connected and are backed by subnets 192.20.20.0/24 and 192.30.30.0/24, respectively.
  • Avi Service Engine VMs are connected to both the Mgmt and Data LS.
  • All 3 segments are created in the overlay transport zone.
  • My Tier-0 gateway is BGP peering with a physical router, and my Win JB machine is able to ping the logical segments’ default gateways.
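With this topology in place, a simple way to exercise the virtual service later on is to hit the VIP repeatedly and watch the responses alternate between the two web servers. A minimal Python sketch, assuming a hypothetical VIP address and plain HTTP on the backends:

import requests

VIP = "http://192.30.30.100"    # placeholder; replace with your virtual service VIP

# Hit the VIP a few times; with round-robin load balancing the responses
# should alternate between the two web servers in the pool.
for i in range(6):
    r = requests.get(VIP, timeout=5)
    print(i, r.status_code, r.text.strip()[:60])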
Read More