Tanzu Kubernetes Grid multi-cloud Integration with VCD – Greenfield Installation

With the release of Container Service Extension 3.0.3, service providers can integrate Tanzu Kubernetes Grid multi-cloud (TKGm) with VCD to offer Kubernetes as a Service to their tenants. TKGm integration, in addition to the existing support for native K8s and vSphere with Tanzu (TKGS), has truly transformed VCD into a developer-ready cloud.

With Tanzu Basic (TKGm & TKGS) on VCD, tenants have a choice of deploying K8s in three different ways (a quick way to explore the published runtimes from the tenant CLI is sketched after this list):

  • TKGS: K8s deployment on vSphere 7, which requires the vSphere Pod Service.
  • TKGm: Multi-tenant K8s deployments that do not need the vSphere Pod Service.
  • Native K8s: Community-supported Kubernetes on VCD with CSE.
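
Once a provider has published these runtimes, a tenant can see what is available through the CSE client plugin for vcd-cli. A minimal sketch, assuming the CSE 3.0.3 client plugin is installed; the host, org, and user names are placeholders:

```bash
# Log in to the tenant organization (placeholder host/org/user;
# -i skips SSL verification, -w suppresses warnings - lab use only)
vcd login vcd.example.com myorg myuser -i -w

# List the Kubernetes templates (native and TKGm) published by the provider
vcd cse template list
```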

By offering multi-tenant managed Kubernetes services with Tanzu Basic and VCD, cloud providers can attract developer workloads, starting with test/dev environments, to their clouds. Once developers have gained confidence in the K8s solution, application owners can leverage VCD-powered clouds to quickly deploy test/dev K8s clusters on premises, accelerate their cloud-native app development, and transition to production environments.

Native Kubernetes in VCD using Container Service Extension 3.0

Introduction

VMware Container Service Extension (CSE) is an extension to Cloud Director that enables VCD cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE integration with VCD allows CSPs to provide a true developer-ready cloud offering to VCD tenants, who can deploy Kubernetes clusters in just a few clicks directly from the VCD portal.

Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Once a K8s cluster is available, developers can use their native Kubernetes tooling to interact with it.
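
From the tenant side, deploying one of those templates can be driven entirely from the CLI. The sketch below assumes CSE 3.0's cluster-apply workflow; the spec layout is illustrative, and all names (org, oVDC, network, template) are placeholders:

```bash
# cluster.yaml - illustrative CSE 3.0 native cluster spec; all names,
# counts, and the template vary per environment
cat > cluster.yaml <<'EOF'
api_version: '35.0'
kind: native
metadata:
  cluster_name: dev-cluster
  org_name: myorg
  ovdc_name: myovdc
spec:
  control_plane:
    count: 1
  workers:
    count: 2
  k8_distribution:
    template_name: ubuntu-16.04_k8-1.18_weave-2.6.5
    template_revision: 1
  settings:
    network: tenant-routed-net
EOF

# Submit the spec; CSE creates the cluster inside a self-contained vApp
vcd cse cluster apply cluster.yaml

# Fetch the kubeconfig once the cluster is ready
vcd cse cluster config dev-cluster > dev-cluster.kubeconfig
```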

To know more about the architecture and interaction of CSE components, please see my previous blog on this topic.

Container Service Extension 3.x went GA earlier this year and brought several new features and enhancements. One of them is support for Tanzu Kubernetes Grid multi-cloud (TKGm) for K8s deployments, unlocking the full potential of consistent upstream Kubernetes in VCD-powered clouds.

Upgrade Tanzu Kubernetes Grid from v1.3.x to 1.4.x

Tanzu Kubernetes Grid 1.4 is all set to be released today. A lot of new features are coming with this release, including (but not limited to):

  • Kubernetes versions 1.21.2, 1.20.8, and 1.19.12 are supported with the 1.4 release.
  • Support for NSX Advanced Load Balancer versions 20.1.3 and 20.1.6.
  • New vSphere configuration variables VSPHERE_REGION and VSPHERE_ZONE enable CSI storage for workload clusters in vSphere environments with multiple datacenters or clusters.
  • Support for L7 ingress (using NSX ALB) for workload clusters.
  • AKO deployment is fully automated: there is no need to install it via Helm or custom YAML, as both AKO and the AKO Operator are provided as core packages (a quick verification sketch follows this list).
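
As a quick verification of that last point, you can check where AKO and its operator now live. A sketch, assuming kubectl contexts for a workload cluster with NSX ALB enabled and for the management cluster; exact package names can vary between releases:

```bash
# In the workload cluster, AKO runs as a pod in the avi-system namespace
kubectl get pods -n avi-system

# In the management cluster, the AKO Operator ships as a core package
# (the grep pattern is only a convenience)
tanzu package installed list -A | grep -i ako
```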

It’s a good time to upgrade TKG in the lab, and in this blog post I will walk through the steps of upgrading TKGm from v1.3.x to v1.4.x.

Upgrade Procedure

Step 1: Download and install the new version of the Tanzu CLI and kubectl on the bootstrap machine.
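
A condensed sketch of that step, and the upgrade commands that follow it, on a Linux bootstrap machine; the bundle file name and version paths below are illustrative:

```bash
# Unpack the Tanzu CLI bundle downloaded from the VMware product portal
tar -xvf tanzu-cli-bundle-linux-amd64.tar
sudo install cli/core/v1.4.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu

# Refresh the CLI plugins shipped with the new bundle
tanzu plugin clean
tanzu plugin install --local cli all
tanzu version

# With the CLI and kubectl upgraded, the clusters follow:
tanzu management-cluster upgrade
tanzu cluster upgrade <workload-cluster-name>
```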

Tanzu Kubernetes Grid Ingress With NSX Advanced Load Balancer

NSX ALB delivers scalable, enterprise-class container ingress for containerized workloads running in Kubernetes clusters. The biggest advantage of using NSX ALB in a Kubernetes environment is that it is agnostic to the underlying Kubernetes cluster implementation. The NSX ALB Controller integrates with the Kubernetes ecosystem via REST API and thus can be used as an ingress and L4-L7 load balancing solution for a wide variety of Kubernetes implementations, including VMware Tanzu Kubernetes Grid.

NSX ALB provides ingress and load balancing functionality for TKG through AKO (Avi Kubernetes Operator), a Kubernetes operator that runs as a pod in the Tanzu Kubernetes clusters. AKO translates the required Kubernetes objects into Avi objects and automates the implementation of ingresses/routes/services on the Service Engines (SEs) via the NSX ALB Controller.
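
In practice, this means a developer only writes a standard Kubernetes Ingress; AKO watches it and programs the corresponding virtual service on the SEs. A minimal sketch, where the hostname, backend service, and the avi IngressClass name are assumptions about the cluster's AKO configuration:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: avi          # IngressClass created by AKO (assumed default name)
  rules:
  - host: web.tkg.example.com    # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc        # existing ClusterIP service (placeholder)
            port:
              number: 80
EOF
```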

The diagram below shows a high-level architecture of AKO interaction with NSX ALB.

AKO interacts with the Controller and Service Engines via API to automate the provisioning of virtual services, VIPs, etc.
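
The same holds at L4: exposing a Service of type LoadBalancer is enough for AKO to request a VIP from the Controller. A short sketch with placeholder names:

```bash
# AKO asks the NSX ALB Controller for a VIP for this service
kubectl expose deployment web --name web-lb --type LoadBalancer --port 80

# EXTERNAL-IP shows the VIP programmed on the Service Engines
kubectl get svc web-lb
```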