Cross-VDC Networking with VCD Data Center Groups

In this post, I will be talking about the Data Center Groups feature of VMware Cloud Director. 

A data center group acts as a cross-VDC router that provides centralized networking administration, egress point configuration, and east-west routing between all networks within the group. The cross-VDC networking feature enables organizations to stretch layer 2 networks across multiple VDCs.

Using data center groups, we can share organization networks across multiple VDCs. To do so, we first group the virtual data centers, then create a VDC network that is scoped to the data center group. A data center group can contain between one and 16 virtual data centers that are configured to share multiple egress points.

Note: A VDC can participate in multiple data center groups.
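For illustration, the grouping can also be driven through the VCD API instead of the UI. Below is a hedged sketch using curl against the `/cloudapi/1.0.0/vdcGroups` endpoint; the host, token, group name, and VDC URNs are all placeholders, and the exact payload shape should be verified against the API schema of your VCD version:

```shell
VCD_HOST="vcd.example.com"   # placeholder VCD endpoint
TOKEN="<bearer-token>"       # placeholder API bearer token

# Sketch of a payload for a LOCAL data center group spanning two org VDCs.
# The URNs are placeholders; real ones can be listed via /cloudapi/1.0.0/vdcs.
PAYLOAD='{
  "name": "dc-group-01",
  "type": "LOCAL",
  "participatingOrgVdcs": [
    { "vdcRef": { "id": "urn:vcloud:vdc:<vdc-1-id>" } },
    { "vdcRef": { "id": "urn:vcloud:vdc:<vdc-2-id>" } }
  ]
}'
echo "$PAYLOAD"

# Against a live VCD, the group would be created with:
# curl -sk -X POST "https://${VCD_HOST}/cloudapi/1.0.0/vdcGroups" \
#      -H "Authorization: Bearer ${TOKEN}" \
#      -H "Content-Type: application/json" -d "$PAYLOAD"
```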

Now it’s time to jump into the lab and configure this feature.

1: Configure Compute Provider Scope

Before a Data Center Group can be configured, we need to specify a “Compute Provider Scope.”

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and bridged the feature gap in the VCD and NSX-T integration. This release of VCD adds the following NSX-T enhancements:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components of VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so they can talk to the outside world (internet, VPN, etc.). External networks can be either:

  • Shared: Multiple tenant edge gateways can use the same external network.
  • Dedicated: A one-to-one relationship between the external network and the NSX-T edge gateway; no other edge gateways can connect to that external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as route advertisement management and BGP configuration.
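Whether an external network is shared or dedicated is expressed on the edge gateway's uplink. Below is a hedged sketch of the relevant fragment of an edge gateway payload from the VCD cloudapi; the network URN is a placeholder and field names should be confirmed against your VCD version's API schema:

```shell
# Fragment of an edge gateway payload where the uplink claims the
# external network exclusively ("dedicated": true), which is what
# unlocks route advertisement and BGP for the tenant.
UPLINK='{
  "edgeGatewayUplinks": [
    {
      "uplinkId": "urn:vcloud:network:<ext-net-id>",
      "dedicated": true
    }
  ]
}'
echo "$UPLINK"
```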

Load Balancing VMware Cloud Director with NSX-T

Recently I tested NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. That was a simple single-cell deployment, as I was just testing the integration. Later I scaled my lab to a 3-node VCD cell cluster and used the NSX-T load balancer feature to load balance the VCD cells.

To use the NSX-T load balancer, we can deploy VCD cells in two different ways:

  • Deploy VCD cells on overlay segments connected to a T1 gateway and configure the LB straight away (the easy method).
  • Deploy VCD cells on VLAN-backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already there in my infrastructure.

In my lab, NSX-T follows the VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’, and I have a dedicated distributed portgroup named ‘VCD-Mgmt’, backed by VLAN 1800, to which all my VCD cells are connected.
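For the load balancer's health monitor, each VCD cell exposes a lightweight status endpoint. A small sketch below; the cell FQDNs follow my lab's naming and are placeholders for your own, and the endpoint behavior should be confirmed against your VCD version:

```shell
# Each healthy VCD cell answers "Service is up." on /api/server_status;
# this path is what the NSX-T active HTTPS health monitor should probe.
CELLS="vcd-cell1.vstellar.local vcd-cell2.vstellar.local vcd-cell3.vstellar.local"
MONITOR_PATH="/api/server_status"

for cell in $CELLS; do
  echo "probe: https://${cell}${MONITOR_PATH}"
done

# On a live cell, the probe itself would be:
# curl -sk "https://<cell-fqdn>/api/server_status"
```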

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic: a lot of functionality was missing, which stopped customers from fully adopting NSX-T with VCD. NSX-T 2.5 added more functionality on the VCD integration front, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is now much more tightly coupled with VCD, and customers can leverage almost all of its functionality with VCD.

VCD Container Service Extension Series-Part 4: Tenant Onboarding & K8s Cluster Deployment

In the last post of this series, we learned how to install and integrate the CSE plugin with VCD for easier management of Kubernetes containers. In this post, we will learn how tenants can leverage the CSE plugin to deploy K8s clusters.

If you have landed directly on this post, I would recommend reading the previous articles in this series first.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

3: CSE Plugin Integration With VCD

Onboarding Tenants

Before a tenant can start provisioning K8s clusters from the CLI or the UI (via the CSE plugin), we need to enable the tenant to do so. This can be done directly from the CSE server, or by logging in to any machine where the vcd-cli utility is installed. To onboard a tenant, use the following commands:

Note: These commands need to be run as a VCD system administrator.

# vcd login vcd.vstellar.local system admin -iw

# vcd right add -o <org-name> "{cse}:CSE NATIVE DEPLOY RIGHT"

Example: # vcd right add -o cse_org "{cse}:CSE NATIVE DEPLOY RIGHT"

Rights added to the Org 'cse_org'

Note: At this point, if we run the command vcd cse ovdc list, it will show that no K8s provider has been configured for the tenants.
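The missing provider can then be enabled per org VDC with vcd cse ovdc enable. A hedged sketch follows; the org and VDC names are placeholders, and the exact flags vary between CSE releases, so check vcd cse ovdc enable --help for your version:

```shell
ORG="cse_org"    # placeholder tenant org
OVDC="cse_vdc"   # placeholder org VDC within that org

# This must be run as VCD system admin against a live VCD; it is shown
# here as a string so the sketch stays self-contained.
CMD="vcd cse ovdc enable ${OVDC} -o ${ORG} -k native"
echo "${CMD}"
```

Afterwards, vcd cse ovdc list should show the VDC with the native K8s provider configured.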

VCD Container Service Extension Series-Part 3: CSE Plugin Integration With VCD

In the last post of this series, I explained how to set up the CSE server. In this post, we will look at the steps for integrating the CSE plugin into VMware Cloud Director, so that tenants can spin up K8s clusters from the VCD portal.

If you have landed directly on this post, I would recommend reading the previous articles in this series first.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

The latest and greatest version of the CSE plugin can be downloaded from here.

CSE plugin installation is taken care of by the cloud provider. Post-installation, the provider can choose to publish the plugin to all or specific tenants.

Log in to VCD as system admin and navigate to the Home > More > Plugins page.

Click on the Upload button to start the wizard. Clicking on Select Plugin File allows you to browse to the location where the plugin file was downloaded.

Select the scope for publishing the CSE plugin. The service provider can publish the plugin to all or a subset of tenants.

VCD Container Service Extension Series-Part 1: Introduction & Architecture

I worked with VMware Container Service Extension (CSE) for the last two weeks, and it was a great learning opportunity. My CSE deployment did not go smoothly, and I faced many issues with very little idea of how to fix them. Kudos to Joe Mann for lending a helping hand to fix all the infra-related issues.

Through this blog series, I want to pen down my experience of working with CSE, the challenges I encountered, and how those issues were resolved.

What is VMware Container Service Extension?

VMware Container Service Extension is an extension to Cloud Director that enables cloud providers to offer Kubernetes-as-a-Service (on top of VCD) to their tenants. Kubernetes as a service helps tenants quickly deploy Kubernetes clusters in just a few clicks, directly from the VCD portal.

Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps.

VCD Container Service Extension Series-Part 2: CSE Server Installation

In the first post of this series, I talked about the high-level architecture of the CSE infrastructure and discussed the various components that make up the CSE platform. In this post, I will walk through the steps for installing and configuring the CSE server.

CSE Installation Prerequisites

Before starting with the CSE server installation, make sure the following requirements are met:

1: VCD installed & configured: For a lab/POC environment, a single-node VCD installation is sufficient. For a production environment, three or more nodes (configured behind a load balancer) are recommended.

2: Organization & catalog for CSE: A dedicated org created in VCD for CSE consumption. This org should have a routed org network with outbound connectivity to the internet, and a catalog created in advance. The catalog holds the K8s-ready vApp templates and will be shared with tenants for consumption.

3: AMQP broker configured in VCD: To extend the VCD public API, an AMQP broker needs to be configured beforehand.
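For reference, the broker details later end up in the amqp section of the CSE server's config.yaml. A hedged sketch of that section follows; the hostname and credentials are placeholders, and field names may vary slightly between CSE releases, so check the config template generated by cse sample for your version:

```yaml
amqp:
  host: amqp.vstellar.local   # RabbitMQ broker registered in VCD (placeholder)
  port: 5672
  exchange: cse-exchange      # must differ from the exchange VCD itself uses
  vhost: /
  username: amqp-admin        # placeholder credentials
  password: '<password>'
  ssl: false
```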

Upgrading vROps Tenant App for VCD via CLI

In this post, I will walk through how to upgrade the vROps Tenant App for Cloud Director via the CLI.

Although the upgrade can be performed directly from the TA VAMI interface by logging in to https://<vrops-ta-fqdn>:5480/, knowing the CLI is important, especially when you are looking to automate the upgrade.

Note: The VAMI credentials of the vROps TA default to root/vmware.

Below are the high-level steps for upgrading the TA appliance via the CLI.

Note: I have tested the steps below for upgrading the Tenant App from v2.3 to v2.4.

Step 1: Enable SSH on the TA: Log in to the TA appliance via the vCenter console (credentials: root/vmware) and enable SSH by running the commands below:

# systemctl start sshd

# systemctl enable sshd

Step 2: Download the TA upgrade package: The upgrade package for the appliance can be downloaded from the VMware Marketplace under the Resources tab.

Extract the downloaded ISO. We will upload the contents of the ISO to the TA in the next step.

Step 3: Create the upgrade repo on the TA appliance: Connect to the TA appliance over SSH and run the following commands:

# mkdir -p /data/repo

# chmod -R 755 /data/repo

Now upload the extracted contents to the /data/repo directory via WinSCP or a similar utility.
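The repo preparation can be sanity-checked before uploading anything. A minimal local sketch, using a temporary directory to stand in for /data/repo on the appliance:

```shell
# Use a temp dir to stand in for /data/repo on the TA appliance.
REPO="$(mktemp -d)/repo"
mkdir -p "$REPO"
chmod -R 755 "$REPO"

# Verify the directory mode is 755 before uploading the ISO contents.
stat -c '%a' "$REPO"
```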

What’s New With vCloud Availability 4.0: SLA Profiles

With the latest release of vCloud Availability, some very cool features have been added. In this post, I will discuss one such feature, called “SLA Profiles”.

What are SLA Profiles?

This new feature brings pre-configured protection profiles that can be consumed as-is.

These profiles can be assigned to all or specific tenants and are available to tenants when creating new protections or migrations for virtual machines.

Each SLA profile has the following attributes:

  • Target recovery point objective (RPO).
  • Retention policy for point-in-time instances (snapshots).
  • Whether quiescing is enabled.
  • Whether compression is enabled.
  • Timeslot to delay the initial synchronization.

There are three SLA profiles that you get out of the box: Gold, Silver & Bronze.

These profiles can be attached directly to specific organizations by clicking the Assign button.

Profile Management

SLA profiles are managed by the service provider. A provider can set limits for some of the SLA attributes in a given profile, use the profile as a policy, and assign those policies to tenants so that every tenant protection fits within the policy limits.