Native Kubernetes in VCD using Container Service Extension 3.0

Introduction

VMware Container Service Extension (CSE) is an extension to Cloud Director that enables VCD cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE integration with VCD has allowed CSPs to provide a true developer-ready cloud offering to VCD tenants. Tenants can quickly deploy Kubernetes clusters in just a few clicks, directly from the VCD portal.

Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Once a K8s cluster is available, developers can use their native Kubernetes tooling to interact with it.
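For example, once a cluster is up, a tenant can pull the cluster's kubeconfig with the CSE extension of vcd-cli and then work with it using standard kubectl. A minimal sketch (the org, user, and cluster names are placeholders, and exact command syntax varies between CSE versions):

# vcd login vcd.vstellar.local cse_org tenant-user -iw

# vcd cse cluster config demo-cluster > demo-kubeconfig

# export KUBECONFIG=$PWD/demo-kubeconfig

# kubectl get nodes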

To learn more about the architecture and interaction of the CSE components, please see my previous blog on this topic.

Container Service Extension 3.x went GA earlier this year and brought several new features and enhancements. One of them is support for Tanzu Kubernetes Grid multi-cloud (TKGm) for K8s deployments, unlocking the full potential of consistent upstream Kubernetes in VCD-powered clouds. Read More

vSphere with Tanzu Integration in VCD

Overview

Prior to v10.2, VMware Cloud Director supported K8s cluster deployment natively and integrated with ENT-PKS. With the release of v10.2, K8s integration is extended to vSphere with Tanzu. This integration enables service providers to create a self-service platform for Kubernetes clusters that are backed by vSphere 7.0 and NSX-T 3.0. By using Kubernetes with VMware Cloud Director, you can provide a multi-tenant Kubernetes service to your tenants.

In this article, I will walk through the steps of integrating vSphere with Tanzu with VCD. 

Pre-requisites for Tanzu Integration with VCD

Before using vSphere with Tanzu with VCD, you must meet the following pre-requisites:

  • VMware Cloud Director appliance deployed & initial configuration completed. Please see VMware’s official documentation on how to install & configure VCD.
  • vCenter 7.0 (or a later version) with the vSphere with VMware Tanzu functionality enabled, added to VMware Cloud Director. This is done under Resources > Infrastructure Resources > vCenter Server Instances.
Read More

HCX Integration With VMware Cloud Director 10.x

This blog post provides an overview of the HCX installation workflow for VMware Cloud Director-based clouds.

The below diagram, taken from the VMware official docs, shows the high-level HCX architecture for VCD-based clouds.

HCX Cloud System & Network Requirements

Before starting the HCX Cloud installation, please ensure that you've met all the system and network port/protocol requirements. These are documented here.

Firewall Requirements

  • The site’s WAN firewall will need to allow inbound HTTPS connections destined for the HCX Cloud. HCX Cloud will make outbound HTTPS requests.
  • The HCX Cloud site firewall also needs to allow inbound UDP-500 and UDP-4500 connections destined for the HCX appliances.
  • All other flows allow HCX to integrate with VMware SDDC components; typically, these are not firewalled within the datacenter.

The below diagram shows various ports that must be allowed in the firewall for a successful HCX cloud deployment in the destination environment.
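As a quick pre-flight check, you can probe the published HCX endpoint on these ports from the remote side. A rough sketch with placeholder hostnames; note that UDP probes with netcat are only indicative, since UDP is connectionless:

# nc -zv hcx-cloud.corp.local 443

# nc -zvu hcx-ix.corp.local 500

# nc -zvu hcx-ix.corp.local 4500

The first command tests the inbound HTTPS path to HCX Cloud; the other two test the UDP 500/4500 ports used by the HCX Interconnect tunnels.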

VMware Cloud Director Pre-requisites

Make sure the following is already configured in VCD:

1: The VCD public address is set and the load balancer certificate is imported (for multi-cell deployments).

2: RabbitMQ is installed and configured in VCD. Read More
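Both prerequisites are easy to sanity-check from another host. A minimal sketch with placeholder hostnames: the first command shows the certificate presented on the VCD public address, and the second confirms that the RabbitMQ AMQP port is reachable:

# echo | openssl s_client -connect vcd.vstellar.local:443 2>/dev/null | openssl x509 -noout -subject -dates

# nc -zv rabbitmq.vstellar.local 5672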

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and narrowed the feature gap in the VCD and NSX-T integration. In this release of VCD, the following NSX-T enhancements are added:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so that they can talk to the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: Allowing multiple tenant edge gateways to use the same external network.
  • Dedicated: A one-to-one relationship between the external network and an NSX-T edge gateway; no other edge gateways can connect to that external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as Route Advertisement management and BGP configuration. Read More

Load Balancing VMware Cloud Director with NSX-T

Recently I tested the NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. It was a simple single-cell deployment, as I was just testing the integration. Later I scaled my lab to a three-node VCD cell deployment and used the NSX-T load balancer feature to test load balancing of the VCD cells.

To use the NSX-T load balancer, we can deploy VCD cells in two different ways:

  • Deploy VCD cells on overlay segments connected to T1 gateway and configure LB straight away (easy method).
  • Deploy VCD cells on VLAN backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already in place in my infrastructure.

In my lab, NSX-T follows a VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’, and I have a dedicated distributed portgroup named ‘VCD-Mgmt’, backed by VLAN 1800, to which all my VCD cells are connected. Read More
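For the cell pool itself, each VCD cell needs an active health monitor. A commonly used probe is an HTTPS GET against the cell's /cloud/server_status URI, which should return a short status message from a healthy cell (verify the exact URI against the documentation for your VCD version). A quick manual test against one cell, with a placeholder hostname:

# curl -k https://vcd-cell-01.vstellar.local/cloud/server_status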

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic: a lot of functionality was not available, which kept customers from using NSX-T full-fledged with VCD. NSX-T 2.5 added more functionality in terms of VCD integration, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is more tightly coupled with VCD, and customers can thus leverage almost all NSX-T functionality with VCD. Read More

VCD Container Service Extension Series-Part 4: Tenant Onboarding & K8 Cluster Deployment

In the last post of this series, we learned how to install and integrate the CSE plugin with VCD for easier management of Kubernetes clusters. In this post, we will learn how tenants can leverage the CSE plugin to deploy K8s clusters.

If you have landed directly on this post, I would recommend reading the previous articles in this series first.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

3: CSE Plugin Integration With VCD

Onboarding Tenants

Before a tenant can start provisioning K8s clusters from the CLI or UI (via the CSE plugin), we need to enable the tenant to do so. This can be done directly from the CSE server or from any machine where the vcd-cli utility is installed. To onboard a tenant, use the following commands:

Note: These commands need to be run as the VCD system administrator.

# vcd login vcd.vstellar.local system admin -iw

# vcd right add -o <org-name> "{cse}:CSE NATIVE DEPLOY RIGHT"

Example: # vcd right add -o cse_org "{cse}:CSE NATIVE DEPLOY RIGHT"

Rights added to the Org 'cse_org'

Note: At this point, if we run the command vcd cse ovdc list, it will show that no K8s provider has been configured for the tenants. Read More
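Enabling the org VDC for a Kubernetes provider is the step that resolves this. With the CSE 2.x-era vcd-cli used in this series, it typically looks like the sketch below; the ovdc name is a placeholder and exact flags vary between CSE releases:

# vcd org use cse_org

# vcd cse ovdc enable cse_ovdc -k native

# vcd cse ovdc list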

VCD Container Service Extension Series-Part 3: CSE Plugin Integration With VCD

In the last post of this series, I explained how to set up the CSE server. In this post, we will look at the steps for integrating the CSE plugin with VMware Cloud Director so that tenants can spin up K8s clusters from the VCD portal.

If you have landed directly on this post, I would recommend reading the previous articles in this series first.

1: Container Service Extension Introduction & Architecture

2: CSE Server Installation

The latest and greatest version of the CSE plugin can be downloaded from here.

CSE plugin installation is taken care of by the cloud provider. Post-installation, the provider can choose to publish the plugin to all tenants or to specific ones.

Log in to VCD as the system administrator and navigate to the Home > More > Plugins page.

Click the Upload button to start the wizard. Clicking Select Plugin File allows you to browse to the location where the plugin file was downloaded.

Select the scope for publishing the CSE plugin. The service provider can publish this plugin to all tenants or to a subset of them. Read More

VCD Container Service Extension Series-Part 1: Introduction & Architecture

I worked on VMware Container Service Extension (CSE) for the last two weeks, and it was a great learning opportunity for me. My CSE deployment did not go smoothly, and I faced many issues with very little or no idea of how to fix them. Kudos to Joe Mann for lending a helping hand to fix all the infra-related issues.

Through this blog series, I want to pen down my experience of working with CSE, the challenges I encountered, and how those issues were resolved.

What is VMware Container Service Extension?

VMware Container Service Extension (CSE) is an extension to Cloud Director that enables cloud providers to offer Kubernetes-as-a-Service (on top of VCD) to their tenants. Kubernetes-as-a-Service helps tenants quickly deploy Kubernetes clusters in just a few clicks, directly from the VCD portal.

Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Read More

VCD Container Service Extension Series-Part 2: CSE Server Installation

In the first post of this series, I talked about the high-level architecture of the CSE infrastructure and discussed the various components that make up the CSE platform. In this post, I will walk through the steps of installing & configuring the CSE server.

CSE Installation Prerequisites

Before starting with the CSE server installation, make sure the following requirements are met:

1: VCD installed & configured: For a lab/POC environment, a single-node VCD installation is sufficient. For a production environment, three or more cells (configured behind a load balancer) are recommended.

2: Organization & Catalog for CSE: A dedicated org created in VCD for CSE consumption. This org should have a routed org network with outbound connectivity to the internet, and it should also have a catalog created in advance. This catalog holds the K8s-ready vApp templates and will be shared with tenants for consumption.

3: AMQP broker configured in VCD: To extend the VCD public API, an AMQP broker needs to be configured beforehand. Read More
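Once these prerequisites are in place, the CSE server itself is driven by a YAML config file and the cse command-line tool. As a rough sketch (option names can differ slightly between CSE releases), the flow is: generate a skeleton config, fill in the amqp, vcd, vcs, service, and broker sections for your environment, install the Kubernetes templates, and then start the service:

# cse sample > config.yaml

(edit config.yaml with your environment details)

# cse install -c config.yaml

# cse run -c config.yaml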