Cross-VDC Networking with VCD Data Center Groups

In this post, I will be talking about the Data Center Groups feature of VMware Cloud Director. 

A data center group acts as a Cross-VDC router that provides centralized networking administration, egress point configuration, and east-west traffic between all networks within the group. The cross-virtual data center networking feature enables organizations to stretch layer 2 networks across multiple VDCs.

Using data center groups, we can share organization networks across multiple VDCs. To do so, we first group the virtual data centers and then create a VDC network that is scoped to the data center group. A data center group can contain between one and 16 virtual data centers that are configured to share multiple egress points. 

Note: A VDC can participate in multiple data center groups.
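
If you prefer automation over the UI, a data center group can also be created through the VCD CloudAPI. The following is a minimal Python sketch, not a definitive recipe: the hostname, credentials, and URNs are placeholders, and the request body follows my reading of the 10.2 /cloudapi/1.0.0/vdcGroups schema, so verify the field names against your VCD version.

    import requests

    VCD = "https://vcd.example.com"                 # placeholder VCD endpoint
    HDRS = {"Accept": "application/json;version=35.0"}

    # Log in as an org administrator; VCD returns a bearer token in a header.
    resp = requests.post(f"{VCD}/cloudapi/1.0.0/sessions",
                         auth=("admin@myorg", "password"), headers=HDRS)
    HDRS["Authorization"] = "Bearer " + resp.headers["X-VMWARE-VCLOUD-ACCESS-TOKEN"]

    # Group two org VDCs into a local, NSX-T backed data center group.
    body = {
        "name": "DCGroup-01",
        "orgId": "urn:vcloud:org:<org-uuid>",                 # placeholder URN
        "networkProviderType": "NSX_T",
        "type": "LOCAL",
        "participatingOrgVdcs": [
            {"vdcRef": {"id": "urn:vcloud:vdc:<vdc1-uuid>"}}, # placeholder URNs
            {"vdcRef": {"id": "urn:vcloud:vdc:<vdc2-uuid>"}},
        ],
    }
    resp = requests.post(f"{VCD}/cloudapi/1.0.0/vdcGroups", json=body, headers=HDRS)
    resp.raise_for_status()
    print(resp.json()["id"])        # URN of the new data center group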

Now it’s time to jump into the lab and walk through the configuration. 

1: Configure Compute Provider Scope

Before a Data Center Group can be configured, we need to specify a “Compute Provider Scope.” Read More

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and bridged gaps in the VCD and NSX-T integration. This release of VCD adds the following NSX-T enhancements:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so they can talk to the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: Multiple tenant edge gateways can use the same external network.
  • Dedicated: A one-to-one relationship between the external network and the NSX-T edge gateway; no other edge gateways can connect to that external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as Route Advertisement management and BGP configuration. Read More
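
As a quick aside for API users: dedicating an external network to an edge gateway can also be scripted. The sketch below reflects my reading of the 10.2 CloudAPI edge gateway schema (a dedicated flag on the uplink); the endpoint, token, and gateway URN are placeholders, so treat it as a starting point.

    import requests

    VCD = "https://vcd.example.com"      # placeholder; authenticate as shown earlier
    HDRS = {"Accept": "application/json;version=35.0",
            "Authorization": "Bearer <token>"}

    gw_id = "urn:vcloud:gateway:<uuid>"  # placeholder edge gateway URN
    gw = requests.get(f"{VCD}/cloudapi/1.0.0/edgeGateways/{gw_id}",
                      headers=HDRS).json()

    # Mark the uplink(s) as dedicated so no other edge gateway can attach
    # to this external network.
    for uplink in gw.get("edgeGatewayUplinks", []):
        uplink["dedicated"] = True

    resp = requests.put(f"{VCD}/cloudapi/1.0.0/edgeGateways/{gw_id}",
                        json=gw, headers=HDRS)
    resp.raise_for_status()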

Getting Started With NSX ALB: Part-4-Load Balancer in Action

In the last post of this series, I completed NSX-T integration with NSX ALB. Now it’s time to test the load balancer. 

If you have missed the earlier posts of this series, you can read them from the links below:

1: NSX ALB Introduction & Architecture

2: Avi Controller Deployment & Configuration

3: NSX ALB Integration With NSX-T

Let’s get started.

Before I dive into the lab, let me first explain the topology that I have implemented.

  • I have two web servers sitting on the Web-LS logical segment, backed by subnet 192.40.40.0/24.
  • Logical segments ALB-Mgmt-LS and ALB-Data-LS are connected to the same Tier-1 gateway as the Web-LS segment and are backed by subnets 192.20.20.0/24 and 192.30.30.0/24, respectively.
  • Avi Service Engine VMs are connected to both the Mgmt and Data LS.
  • All 3 segments are created in the overlay transport zone. 
  • My Tier-0 gateway is BGP-peered with a physical router, and my Win JB machine is able to ping the logical segments’ default gateways. 
Read More
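
As a preview of the test itself: the quickest sanity check I know is a small request loop against the virtual service IP. This is a minimal sketch with a placeholder VIP, and it assumes each web server returns a page that identifies it (its hostname, for example) so the tally shows the round-robin split.

    import requests
    from collections import Counter

    VIP = "http://192.40.40.100/"   # placeholder virtual service IP
    hits = Counter()

    # Fire a batch of requests at the VIP; with two pool members behind a
    # round-robin virtual service, responses should split roughly evenly.
    for _ in range(20):
        page = requests.get(VIP, timeout=5).text.strip()
        hits[page] += 1             # assumes each server serves a distinct page

    print(hits)                     # e.g. Counter({'web01': 10, 'web02': 10})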

Getting Started With NSX ALB: Part-3-NSX-T Integration

In the previous post of this series, I discussed Avi Controller deployment and basic configuration. It’s time to integrate NSX-T with NSX ALB. The high-level steps of the NSX-T integration can be summarized as follows:

  • Create a Content Library in vCenter.
  • Deploy a Tier-1 gateway for Avi management.
  • Create logical segments in NSX-T for the Avi SE VMs.
  • Create credentials for NSX-T and vCenter in Avi.
  • Register NSX-T with Avi Controller.
  • Create an IPAM profile. 

Let’s get started.

Create a Content Library in vCenter

Avi Service Engine VMs are deployed automatically by the Avi Controller when we create a virtual service. For this to work, an empty content library needs to be created in the vCenter Server, as the controller pushes the Avi SE OVA into the content library and then deploys the SE VMs. 
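
If you want to script this step, the library can be created through the vSphere Automation REST API. This is a minimal sketch assuming the vSphere 7 /api endpoints; the vCenter hostname, credentials, and datastore MoRef are placeholders, and the spec layout reflects my reading of the API reference.

    import requests

    VC = "https://vcenter.example.com"   # placeholder vCenter

    # vSphere 7 REST session: basic auth in, API session token out.
    token = requests.post(f"{VC}/api/session",
                          auth=("administrator@vsphere.local", "password")).json()
    hdrs = {"vmware-api-session-id": token}

    # Create the empty local content library the controller will push the SE OVA into.
    spec = {
        "name": "Avi-SE-Library",
        "storage_backings": [{"type": "DATASTORE",
                              "datastore_id": "datastore-123"}],  # placeholder MoRef
    }
    resp = requests.post(f"{VC}/api/content/local-library", json=spec, headers=hdrs)
    print(resp.json())                   # ID of the new library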

Deploy Tier-1 gateway for Avi Management

You can use an existing Tier-1 gateway or deploy a new, dedicated one for Avi management. Read More
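
For reference, creating a dedicated Tier-1 through the NSX-T Policy API looks roughly like the sketch below. The manager address, credentials, and display names are placeholders from my lab, and I have omitted the edge cluster assignment (done via the gateway’s locale services) for brevity.

    import requests

    NSX = "https://nsxmgr.example.com"   # placeholder NSX-T Manager
    AUTH = ("admin", "password")

    # Declaratively create (or update) a dedicated Tier-1 for Avi management,
    # linked to the existing Tier-0.
    t1 = {
        "display_name": "Avi-Mgmt-T1",
        "tier0_path": "/infra/tier-0s/T0-GW",             # placeholder Tier-0 id
        "route_advertisement_types": ["TIER1_CONNECTED"],
    }
    resp = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/Avi-Mgmt-T1",
                          json=t1, auth=AUTH, verify=False)  # lab self-signed cert
    print(resp.status_code)              # 200 on success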

Getting Started With NSX ALB: Part-2- Avi Controller Deployment & Configuration

The first post of this series talked about NSX ALB and its architecture, along with the features that make NSX ALB unique. In this post, I will cover deployment and basic configuration; in a later post, I will discuss ALB integration with NSX-T.

The official NSX ALB deployment process is documented here, and the cluster creation process here.

Hardware requirements for the Avi Controllers and Service Engine VMs are documented here.

Prerequisites for NSX ALB Deployment:

  • vSphere is deployed and configured.
  • NSX-T Manager is deployed and integrated with vCenter.
  • T0-GW is deployed and paired with the physical network using BGP.

The NSX ALB Controller OVA can be downloaded from https://customerportal.avinetworks.com

OVA deployment is straightforward, so I am not covering the deployment wizard. Make sure to leave the “Sysadmin login authentication key” field blank when deploying the controller OVA. 

Once the Controller VM boots up, connect to the web interface of the controller at https://<Avi-Controller-ip>/

Configure the controller administrator account by setting a password and an email address (used for password reset in case of account lockout).

Configure DNS and NTP server information. Read More
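
The same DNS and NTP settings can be applied through the Avi REST API using the /login and /api/systemconfiguration endpoints. This is a minimal sketch: the controller address, credentials, version header, and server addresses are placeholders, so adjust them for your environment.

    import requests

    AVI = "https://avi-controller.example.com"   # placeholder controller address

    # Avi uses a cookie-based session; /login is the entry point.
    s = requests.Session()
    s.verify = False                             # lab self-signed cert
    s.post(f"{AVI}/login", data={"username": "admin", "password": "password"})
    s.headers.update({
        "X-Avi-Version": "20.1.1",               # match your controller version
        "X-CSRFToken": s.cookies.get("csrftoken", ""),
        "Referer": AVI,                          # Avi requires this on writes
    })

    # Read-modify-write the system configuration with DNS and NTP servers.
    cfg = s.get(f"{AVI}/api/systemconfiguration").json()
    cfg["dns_configuration"] = {"server_list": [{"type": "V4",
                                                 "addr": "192.168.1.10"}]}
    cfg["ntp_configuration"] = {"ntp_servers": [
        {"server": {"type": "DNS", "addr": "pool.ntp.org"}}]}
    s.put(f"{AVI}/api/systemconfiguration", json=cfg)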

Getting Started With NSX ALB: Part-1- Introduction & Architecture

NSX Advanced Load Balancer (formerly Avi Vantage) is a multi-cloud, software-defined load balancer that provides scalable application delivery across any infrastructure. NSX ALB is 100% software-defined and provides:

  • Multi-cloud: Consistent experience across on-premises and cloud environments through central management and orchestration.
  • Intelligence: Built-in analytics drive actionable insights that make autoscaling seamless, automation intelligent and decision making easy.
  • Automation: 100% RESTful APIs enable self-service provisioning and integration into the CI/CD pipeline for application delivery.

Note: The NSX ALB solution came from VMware’s acquisition of Avi Networks in 2019.

Some of the key features of NSX ALB are:

  • Autoscaling of Load Balancers and Applications.
  • Web Application Analytics & Performance Insights.
  • Automation for IT, Self-Service for Developers.

To know more about these features, please visit the Avi Networks website. 

NSX ALB Architecture

NSX ALB consists of two main components:

  • Avi Controller.
  • Service Engines (SE).

Controllers are deployed by the platform administrator, and Service Engines are deployed automatically by the controller when we create virtual services. Read More

Load Balancing VMware Cloud Director with NSX-T

Recently I tested NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. That was a simple single-cell deployment, as I was just testing the integration. Later, I scaled my lab to a 3-node VCD cell cluster and used the NSX-T load balancer feature to test the load balancing of the VCD cells.

To use the NSX-T load balancer, we can deploy VCD cells in two different ways:

  • Deploy VCD cells on overlay segments connected to a T1 gateway and configure LB straight away (the easy method).
  • Deploy VCD cells on VLAN-backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already there in my infrastructure.

In my lab, NSX-T follows the VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’ with a dedicated distributed portgroup named ‘VCD-Mgmt’, which is backed by VLAN 1800; all my VCD cells are connected to this portgroup. Read More
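
To give a flavor of what the second method builds up to, here is a rough Python sketch of the NSX-T Policy API objects involved: a pool of the three VCD cells, a load balancer service bound to the dedicated T1, and a layer-4 virtual server. All names, IPs, and paths are placeholders from my lab; I show only port 443 for brevity (the console proxy port is handled the same way), and the default TCP application profile path should be verified on your manager.

    import requests

    NSX = "https://nsxmgr.example.com"   # placeholder NSX-T Manager
    AUTH = ("admin", "password")

    def patch(path, body):
        r = requests.patch(f"{NSX}/policy/api/v1/infra/{path}",
                           json=body, auth=AUTH, verify=False)  # lab cert
        r.raise_for_status()

    # Pool of the three VCD cells (placeholder IPs) on HTTPS port 443.
    patch("lb-pools/vcd-pool", {
        "display_name": "vcd-pool",
        "members": [{"ip_address": ip, "port": "443"}
                    for ip in ("192.168.18.11", "192.168.18.12", "192.168.18.13")],
    })

    # Load balancer service bound to the dedicated Tier-1 gateway.
    patch("lb-services/vcd-lb", {
        "display_name": "vcd-lb",
        "connectivity_path": "/infra/tier-1s/VCD-LB-T1",   # placeholder T1 id
        "size": "SMALL",
    })

    # Layer-4 virtual server exposing the cells on a single VIP.
    patch("lb-virtual-servers/vcd-vip", {
        "display_name": "vcd-vip",
        "ip_address": "192.168.18.100",                    # placeholder VIP
        "ports": ["443"],
        "pool_path": "/infra/lb-pools/vcd-pool",
        "lb_service_path": "/infra/lb-services/vcd-lb",
        "application_profile_path":
            "/infra/lb-app-profiles/default-tcp-lb-app-profile",
    })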

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result, customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic: a lot of functionality was missing, which stopped customers from using NSX-T full-fledged with VCD. NSX-T 2.5 added more functionality in terms of VCD integration, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is more tightly coupled with VCD, and customers can leverage almost all NSX-T functionality with VCD. Read More