vSphere with Tanzu Integration in VCD

Overview

Prior to v10.2, VMware Cloud Director supported native Kubernetes cluster deployment and integrated with ENT-PKS. With the release of v10.2, Kubernetes integration is extended to vSphere with Tanzu. This integration enables Service Providers to create a self-service platform for Kubernetes clusters that are backed by vSphere 7.0 and NSX-T 3.0. By using Kubernetes with VMware Cloud Director, you can provide a multi-tenant Kubernetes service to your tenants.

In this article, I will walk through the steps of integrating vSphere with Tanzu with VCD. 

Prerequisites for Tanzu Integration with VCD

Before using vSphere with Tanzu with VCD, you have to meet the following prerequisites:

  • VMware Cloud Director appliance deployed and initial configuration completed. See VMware's official documentation on how to install and configure VCD.
  • vCenter 7.0 (or later) with the vSphere with Tanzu functionality enabled, added to VMware Cloud Director. This is done under Resources > Infrastructure Resources > vCenter Server Instances.

vSphere with Tanzu Leveraging NSX ALB-Part-2: Deploy Supervisor Cluster

In the last post of this series, I discussed Avi Controller deployment and configuration. In this post, I will demonstrate supervisor cluster deployment which lays the foundation for the TKG clusters. 

Since I have already explained the process of workload management in this post, I am not going to go through each step again. However, I do want to discuss the load balancer options.

  • When configuring the load balancer, select Avi as the type and enter the IP address of the Avi Controller VM, followed by port 443.
  • Specify the credentials that you configured at the time of the Avi Controller deployment.
  • Grab the certificate from the Avi Controller and paste it here.

Note: To grab the certificate of the Avi Controller, navigate to Templates > Security > SSL/TLS Certificates and click the arrow button in front of the self-signed certificate that you created for the controller VM.

Copy the contents of the certificate by clicking the Copy to Clipboard button.
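
If you'd rather not click through the UI, you can also pull the controller's serving certificate from the command line with openssl. This is just a convenience sketch; the controller FQDN below is a placeholder for your environment:

# openssl s_client -connect avi-controller.lab.local:443 -showcerts </dev/null 2>/dev/null | openssl x509

The output is the PEM block that you paste into the load balancer section of the Workload Management wizard.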

vSphere with Tanzu Leveraging NSX ALB-Part-1: Avi Controller Deployment & Configuration

With the release of vSphere 7.0 U2, VMware introduced support for the Avi Load Balancer (NSX Advanced Load Balancer) in vSphere with Tanzu, and thus fully supported load balancing is now available for Kubernetes. Prior to vSphere 7.0 U2, HA Proxy was the only supported load balancer when vSphere with Tanzu was deployed on vSphere Distributed Switch (vDS) based networking.

HA Proxy was not suited for production workloads as it has its own limitations. NSX ALB is a next-generation load balancing solution, and its integration with vSphere with Tanzu enables customers to run production workloads in the Kubernetes cluster.

When vSphere with Tanzu is enabled leveraging NSX ALB, the ALB Controller VM has access to the Supervisor Cluster, the Tanzu Kubernetes Grid clusters, and the applications/services that are deployed on top of the TKG clusters.

The below diagram shows the high-level topology of NSX ALB & vSphere with Tanzu.

In this post, I will cover the steps of deploying & configuring NSX ALB for vSphere with Tanzu.

Deploying Tanzu on VDS Networking-Part 3: Create Namespace & Deploy TKG Cluster

In the last post of this series, I demonstrated the deployment of the supervisor cluster, which acts as a base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Let’s get started.

Create Namespace

A namespace enables a vSphere administrator to control the resources that are available for developers to provision TKG clusters. Using namespaces, vSphere administrators can stop developers from consuming more resources than they are assigned.

To create a namespace, navigate to Menu > Workload Management and click Create Namespace.

  • Select the cluster where the namespace will be created and provide a name for the namespace.  
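Once the namespace is created and a developer has been granted access to it, the developer connects using the vSphere plugin for kubectl; after login, each accessible namespace shows up as a kubectl context. A minimal sketch, assuming a Supervisor Cluster address of 192.168.100.10 and a namespace named development (both placeholders from my lab):

# kubectl vsphere login --server=192.168.100.10 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
# kubectl config use-context development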

Deploying Tanzu on VDS Networking-Part 2: Configure Workload Management

In the first post of this series, I talked about the vSphere with Tanzu architecture and explained the deployment & configuration of the HA Proxy appliance, which acts as a load balancer for the TKG guest clusters and the Supervisor Cluster.

In this post, I will walk through the steps of enabling workload management, which deploys & configures the supervisor cluster.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Note: Before enabling Workload Management, make sure you have a vSphere with Tanzu license available. If you don't have the license, you can register for this product to get a 60-day evaluation license.

Prerequisites for Enabling Workload Management

  • A vSphere cluster created, with both DRS and HA enabled on the cluster.
  • A vSphere Distributed Switch created, with all ESXi hosts added to the VDS.

Deploying Tanzu on VDS Networking-Part 1: HA Proxy Configuration

VMware Tanzu is the suite or portfolio of products and solutions that allows customers to build, run, and manage containerized (Kubernetes-controlled) applications. An earlier release of Tanzu was offered along with VCF, leveraging NSX-T networking. Later, VMware decoupled Tanzu from VCF; the resulting solution is called vSphere with Tanzu and can be deployed directly on a vSphere VDS without NSX-T.

vSphere with Tanzu on VDS has the following limitations: 

  • No support for vSphere Pods when NSX-T is not used.
  • No support for the Harbor Image Registry.

When Tanzu (Workload Management) is enabled on vSphere, a supervisor cluster is deployed, which consists of 3 nodes for high availability. You therefore need some kind of load balancing to distribute traffic coming to the supervisor cluster. VMware uses a customized version of HA Proxy as a software load balancer for the supervisor nodes.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Networking Requirements for HA Proxy

Before I jump into the lab and walk through the deployment/configuration steps, I want to discuss the networking prerequisites for HA Proxy first.

TKG Series-Part 4: Deploy Sample Workload/Application

In the last post, I completed the workload cluster deployment. The deployed cluster is now ready to be consumed. In this post, I will show how we can deploy a sample application/workload in the newly provisioned Kubernetes cluster.

If you have missed the earlier posts of this series, you can read them via the links below:

1: TKG Infrastructure Setup

2: TKG Management Cluster Deployment

3: TKG Workload Cluster Deployment

To deploy any application in the Kubernetes cluster, we first have to connect to the workload cluster context.

The command below shows that I am currently connected to cluster "mj-tkgc01", which is my workload cluster.

Note: We can use the command kubectl config use-context <cluster-context-name> to switch between clusters.
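
To give a flavor of what follows, here is a minimal sketch of the kind of deployment the post walks through; the context name and the nginx image are illustrative assumptions, not the exact workload from my lab:

# kubectl config use-context mj-tkgc01-admin@mj-tkgc01
# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --type=NodePort --port=80
# kubectl get svc nginx

A NodePort service keeps the sketch self-contained, since it doesn't depend on any load balancer integration being present in the cluster.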

TKG Series-Part 3: Deploy TKG Workload (Kubernetes) Cluster

In the last post of this series, I covered the TKG management cluster setup. In this post, we will learn how to deploy a TKG workload/compute (Kubernetes) cluster.

If you have missed the earlier posts of this series, you can read them via the links below:

1: TKG Infrastructure Setup

2: TKG Management Cluster Deployment

Before attempting to deploy a TKG workload cluster, ensure you are connected to the management cluster context. If you are not already connected, you can use the command kubectl config use-context <mgmt-cluster-context-name> to connect to it.

Step 1: Create a new namespace

An example JSON file is available on the official Kubernetes website, which I have used to create a test namespace. You can create your own JSON file if you want.

# kubectl create -f https://kubernetes.io/examples/admin/namespace-dev.json
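
For reference, that example manifest simply defines a Namespace named development; it looks roughly like this (reproduced from memory, so treat it as illustrative rather than a verbatim copy):

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}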

Step 2: Create a yaml file for the workload cluster deployment.

# tkg config cluster mj-tkgc01 --plan dev --controlplane-machine-count 1 --worker-machine-count 3 --namespace development --controlplane-size small --worker-size small > dev.yaml
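
Note that tkg config cluster only writes the manifest; nothing is provisioned yet. A sketch of the next step, assuming the generated dev.yaml holds standard Cluster API objects and that the management cluster context name below (a placeholder) matches your environment:

# kubectl config use-context mj-tkg-mgmt-admin@mj-tkg-mgmt
# kubectl apply -f dev.yaml

Alternatively, tkg create cluster can deploy a workload cluster in one shot without the intermediate yaml.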

TKG Series-Part 2: Deploy TKG Management Cluster

In the first post of this series, I discussed the prerequisites that you should meet before attempting to install TKG on vSphere. In this post, I will demonstrate the TKG management cluster deployment. We can create the TKG management cluster using either the UI or the CLI. Since I am a fan of the CLI method, I have used it in my deployment.

Step 1: Prepare the config.yaml file

When the TKG CLI is installed and the tkg command is invoked for the first time, it creates a hidden folder .tkg under the user's home directory. This folder contains the config.yaml file that TKG leverages to deploy the management cluster.

The default yaml file doesn't have infrastructure details such as the vCenter address, vCenter credentials, IP addresses, etc. to be used in the TKG deployment. We have to populate the infrastructure details in the config.yaml file manually.

Below is the yaml file which I used in my environment.
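
Mine contains lab-specific values, so here is just the general shape of the vSphere-related keys in a TKG 1.x config.yaml; every value below is a placeholder, not my actual environment:

VSPHERE_SERVER: vcsa.lab.local
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: <password>
VSPHERE_DATACENTER: /Lab-DC
VSPHERE_DATASTORE: /Lab-DC/datastore/vsanDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /Lab-DC/host/Cluster01/Resources
VSPHERE_FOLDER: /Lab-DC/vm/TKG
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA... user@bootstrap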


TKG Series-Part 1: Lab Setup

This blog series will cover how to deploy Tanzu Kubernetes Grid on vSphere and get your management and workload clusters provisioned. In part 1 of this series, I will cover the prerequisites that need to be met before attempting to install & configure TKG on vSphere.

Before you start with the TKG deployment, make yourself familiar with the components that make up a TKG cluster.

Hardware & Software Requirements

  • A vSphere environment with vSphere 6.7 U3 or 7.0 installed.
  • A dedicated resource pool to accommodate the TKG management & workload cluster components.
  • A VM folder where the TKG VMs will be provisioned.
  • One DHCP-enabled network segment; the TKG VMs get their IPs from the DHCP pool.
  • TKG OVAs & the TKG CLI downloaded from here.
  • A Linux VM (Ubuntu preferred) created in vSphere with Docker & kubectl installed. This VM acts as the bootstrap VM on which we will install the TKG CLI and other dependencies (a minimal prep sketch follows this list).
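
A minimal prep sketch for the Ubuntu bootstrap VM follows. The kubectl download URL is the standard upstream one; the TKG CLI archive name is a placeholder for whichever build you actually downloaded:

# sudo apt-get update && sudo apt-get install -y docker.io
# curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# sudo install -m 0755 kubectl /usr/local/bin/kubectl
# gunzip tkg-linux-amd64-v1.x.gz
# chmod +x tkg-linux-amd64-v1.x && sudo mv tkg-linux-amd64-v1.x /usr/local/bin/tkg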

My Lab Walkthrough

  • I have a 4-node vSphere cluster with vSphere 6.7 U3 installed, licensed with an Enterprise Plus license.
  • Build numbers used for ESXi & vCenter are 16316930 & 16616668 respectively.