vSphere with Tanzu Integration in VCD

Overview

Prior to v10.2, VMware Cloud Director supported native Kubernetes cluster deployment and integration with VMware Enterprise PKS. With the release of v10.2, Kubernetes integration is extended to vSphere with Tanzu. This integration enables service providers to offer a self-service platform for Kubernetes clusters backed by vSphere 7.0 and NSX-T 3.0, delivering a multi-tenant Kubernetes service to their tenants.

In this article, I will walk through the steps of integrating vSphere with Tanzu into VCD. 

Prerequisites for Tanzu Integration with VCD

Before using vSphere with Tanzu with VCD, make sure you meet the following prerequisites:

  • VMware Cloud Director appliance deployed and initial configuration completed. Please see VMware’s official documentation on how to install and configure VCD.
  • vCenter Server 7.0 (or later) with vSphere with Tanzu enabled, added to VMware Cloud Director. This is done under Resources > Infrastructure Resources > vCenter Server Instances. For instructions on how to configure vSphere with Tanzu, please see this article.
  • NSX-T 3.0 instance registered with VCD. This is done under Resources > Infrastructure Resources > NSX-T Managers. For instructions on how to configure NSX-T in VCD, please see this article.
  • Geneve-backed Network Pool is configured.
  • The storage policy to be used with Kubernetes in VCD should have an alphanumeric name. Special characters (other than a hyphen) are not allowed in the name. Since the default vSAN policy name contains spaces, I cloned the policy to create a new one.

  • The IP address ranges for the Ingress CIDRs and Services CIDR parameters must not overlap with the subnets 10.96.0.0/12 and 192.168.0.0/16, which are the default values for TKG.

In my lab, I am using the 172.18.x.x network, which does not overlap with either of the two subnets mentioned above. 

Once you have met the above prerequisites, you are ready to configure vSphere with Tanzu in VCD. 

Create a Provider VDC backed by a Supervisor Cluster

1: Log in to VCD as an administrator, navigate to Resources > Cloud Resources > Provider VDCs, and click NEW.

2: Specify the name for the Provider VDC and select the vCenter server on the Provider page. 

3: Select the Cluster or Resource Pool that will be consumed by the Provider VDC for deploying Kubernetes. Clusters/Resource Pools backed by a Supervisor Cluster appear with a Kubernetes icon next to their name.

4: Click on the TRUST button to accept the certificate presented. The certificate is then added to the VCD Trusted Certificates store.

5: Select the storage policies that will be offered to tenants for provisioning Kubernetes clusters. 

6: On the Network page, choose “Select an NSX-T manager and Geneve Network pool”.

7: Review your settings and hit the finish button to complete the PVDC creation wizard. 

It takes a couple of minutes for the PVDC to provision. Provider VDCs backed by a Supervisor Cluster appear with a Kubernetes icon next to their name.

VCD automatically creates a default Kubernetes Policy for the Provider VDC. This can be verified under the PVDC configuration.

Note: In my environment, the default Kubernetes policy was not created. This is a known issue with VCD 10.2.

To fix this, you have to add the Kubernetes API endpoint (Supervisor Cluster) certificates to the VCD Trusted Certificates store. VMware KB-83583 describes the procedure.
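As a rough illustration (not the KB procedure verbatim), the certificate presented by the Kubernetes API endpoint can be fetched with openssl and then imported into the VCD Trusted Certificates store. The endpoint address below is a placeholder for your Supervisor Cluster API endpoint, and the port may be 443 or 6443 depending on your environment:

    # Fetch the certificate presented by the Kubernetes API endpoint
    # (this extracts the leaf certificate only; the full chain may also be required)
    openssl s_client -connect <supervisor-endpoint-ip>:443 -showcerts </dev/null 2>/dev/null \
      | openssl x509 -outform PEM > supervisor-endpoint.pem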

Once you have applied the fix, create an organization and an organization VDC.  

Publish the Kubernetes Policy to Organization VDC

Provider Kubernetes policies allocate and manage resources from the Kubernetes-enabled vSphere cluster (Supervisor Cluster). A Kubernetes policy has the following parameters configured:

  • CPU/Memory: the CPU and memory allocated per namespace in vSphere. 
  • Machine Class: two types of machine classes are offered, based on CPU/memory. A best-effort machine class does not reserve CPU/memory resources for the worker and control plane nodes, whereas a guaranteed machine class reserves the CPU/memory resources specified in the class.
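If you also have kubectl access to the Supervisor Cluster, the VM classes behind these machine classes can be inspected directly. A minimal sketch, assuming a working Supervisor Cluster kubeconfig context and the default vSphere class names (such as best-effort-small and guaranteed-small):

    # List the VM classes defined on the Supervisor Cluster
    kubectl get virtualmachineclasses

    # guaranteed-* classes carry full CPU/memory reservations; best-effort-* classes carry none
    kubectl describe virtualmachineclass guaranteed-small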

1: To add the Kubernetes policy to the Org VDC, navigate to OVDC > Policies > Kubernetes and click on Add.

2: Select the Kubernetes policy that you want to publish. 

3: Adjust the CPU/memory limits. A tenant can’t use more resources than what is allocated to them. 

4: Select the machine classes that will be part of the Kubernetes policy. Tenants can choose from the available machine classes when provisioning Kubernetes clusters. 

5: Select the Storage Policy and adjust the storage quota. 

6: Review your settings and hit Finish to publish the policy. 

In the backend, publishing the policy creates a namespace for the tenant in vSphere. When tenants provision Kubernetes clusters in their OVDC, the worker and control plane nodes are deployed under this namespace.

The resource pool backing this namespace has its CPU and memory limits set to the values allocated in the Organization VDC.

Publish the Container Plugin to the Organization

In order to deploy Kubernetes clusters, tenants need access to the Container UI Plugin. The service provider publishes the plugin to the tenants by navigating to More > Customize Portal.

Select the Container UI Plugin and click on Publish. 

Select the scope for publishing. 

Publish the TKG Cluster Right Bundle to the Organization

The tenant now has access to the Container UI Plugin, but that alone is not enough to spin up Kubernetes clusters. The service provider also needs to publish the TKG cluster entitlement to the tenant.

Navigate to Administration > Rights Bundles, select the vmware:tkgcluster Entitlement rights bundle, and click Publish.

Select the tenants to which the entitlement will be published and hit Save. 

Create a Role with TKG Permissions

Permission to deploy Kubernetes clusters is not included in the default Organization Administrator role. You have to create a new role that includes the Kubernetes-related permissions.

The simplest way to do this is to clone the Organization Administrator role and modify the new role to include the vmware:tkgcluster permissions from the rights bundle published earlier. 

Publish the newly created role to the tenant.

Once the new role is published to the tenant, create a new user account for the tenant with the new role that you just published.

And that’s it from the service provider side. Tenants are now ready to deploy Tanzu-based Kubernetes clusters. 

Deploy a “vSphere with Tanzu” Kubernetes Cluster as a Tenant

The tenant now logs in to their organization using the credentials provided by the service provider and navigates to More > Kubernetes Container Clusters.

1: To deploy a Tanzu Kubernetes Cluster, click on the New button and select vSphere with Tanzu. 

2: Enter a DNS-compliant name. The name must be lowercase and can include hyphens as the only special character.

3: Select the Kubernetes policy that is exposed to your organization and choose which Kubernetes version to deploy. 

4: Select the number and size of control plane nodes and worker nodes. 

5: Select the storage class for the control plane and worker nodes.

6: Specify the Pod and Service CIDRs for the Kubernetes cluster. You can also go with the default settings. 

7: Review your settings and hit Finish to trigger the Tanzu Kubernetes Cluster deployment. 

TKC deployment takes a bit of time. Mine took around 15-20 minutes to complete.

Once the deployment finishes, you can select the cluster, download the kubeconfig file, and copy it to the machine where you have kubectl installed. 

The kubeconfig file contains the connection details (API server endpoint, certificate data, and authentication token) for the Tanzu Kubernetes cluster you just deployed. Typically the file looks like this:
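Below is a generic, redacted sketch of the structure; the cluster name, server address, and credentials are placeholders rather than values from my environment.

    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: <base64-encoded-CA-certificate>
        server: https://<tkc-control-plane-ip>:6443   # TKC API server endpoint
      name: my-tkc-cluster
    contexts:
    - context:
        cluster: my-tkc-cluster
        user: my-tkc-cluster-admin
      name: my-tkc-cluster
    current-context: my-tkc-cluster
    users:
    - name: my-tkc-cluster-admin
      user:
        token: <redacted-authentication-token>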

Save this file as ~/.kube/config in your local kubectl configuration.
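With the file in place, a quick sanity check confirms that kubectl can reach the new cluster:

    # Verify the active context and connectivity to the cluster
    kubectl config current-context
    kubectl get nodes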

Next, create a RoleBinding that grants access to all service accounts within the default namespace using the default pod security policy (PSP) vmware-system-privileged.
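A minimal sketch of that RoleBinding, assuming the psp:vmware-system-privileged ClusterRole that ships with Tanzu Kubernetes clusters (verify the exact role name in your release); the binding name itself is arbitrary:

    # Allow all service accounts in the default namespace to use the
    # vmware-system-privileged pod security policy
    kubectl create rolebinding psp-default-privileged \
      --namespace=default \
      --clusterrole=psp:vmware-system-privileged \
      --group=system:serviceaccounts:default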

Deploy Applications on Tanzu Kubernetes Cluster

Now you are ready to deploy custom applications on your TKC deployment. 

In my lab, I have enabled the Harbor registry in vCenter and pushed a sample image (Nginx), which I will use to deploy an instance of the Nginx web server. 

I created a YAML file for the deployment and referenced the Harbor registry secret in it.
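A minimal sketch of such a file, assuming a pull secret named harbor-secret and the image path harbor.lab.local/library/nginx:latest (both placeholders for your own Harbor project):

    # Create the Harbor pull secret referenced by the deployment (placeholder values)
    kubectl create secret docker-registry harbor-secret \
      --docker-server=harbor.lab.local \
      --docker-username=<harbor-user> \
      --docker-password=<harbor-password>

    # nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: harbor.lab.local/library/nginx:latest
            ports:
            - containerPort: 80
          imagePullSecrets:
          - name: harbor-secret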

Next, I invoked the kubectl command to deploy nginx: kubectl create -f nginx.yaml

A new pod spun up almost instantly.
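A quick status check confirms it (assuming the app=nginx label from the sketch above):

    kubectl get pods -l app=nginx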

You can then create a Service for Nginx and configure the port bindings to access the Nginx server from outside the cluster. 
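As a sketch, a LoadBalancer Service (NSX-T supplies the external IP in a vSphere with Tanzu environment) could look like this:

    # Expose the nginx deployment outside the cluster
    kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer

    # Note the EXTERNAL-IP assigned to the service
    kubectl get service nginx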

References

VCD Official Documentation

Virten’s Blog

That’s it for this post. I hope you enjoyed reading it. Feel free to share it on social media if you found it worth sharing.
