TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging the Avi Kubernetes Operator (AKO), which delivers L4 and L7 load balancing to the Kubernetes API endpoint and to the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.

The Global Server Load Balancing (GSLB) function of NSX ALB enables load balancing for globally distributed applications/workloads, typically spread across multiple data centers and public clouds. GSLB offers efficient traffic distribution across widely scattered application servers, enabling an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in data centers, organizations are deploying these workloads across multi-cluster/multi-site environments, creating a need for a way to load-balance applications globally.

To meet this requirement, NSX ALB provides the Avi Multi-Cluster Kubernetes Operator (AMKO), a Kubernetes operator that facilitates application delivery across multiple clusters.
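As a minimal sketch, AMKO is typically installed with Helm on the GSLB leader cluster. The chart repository, values keys, and cluster context names below are assumptions drawn from the AMKO documentation; verify them against the release you are deploying:

```bash
# Sketch: install AMKO via Helm (assumes AKO is already running in avi-system
# and the multi-cluster kubeconfig secret has been created per the AMKO docs).
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update

# Point AMKO at the Avi GSLB leader and the member cluster contexts
# (all values below are placeholders).
helm install amko ako/amko --namespace avi-system \
  --set configs.gslbLeaderController=<gslb-leader-controller-ip> \
  --set 'configs.memberClusters[0].clusterContext=cluster1-admin@cluster1' \
  --set 'configs.memberClusters[1].clusterContext=cluster2-admin@cluster2'
```

Read More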

Tanzu Kubernetes Grid 1.4 Installation in Internet-Restricted Environment

An air-gapped (also known as internet-restricted) installation method is used when the TKG environment (bootstrapper and cluster nodes) is unable to connect to the internet to download the installation binaries from the public VMware registry during TKG installation or upgrades.

Internet-restricted environments can use an internal private registry in place of the VMware public registry. An example of a commonly used registry solution is Harbor.

This blog post covers how to install TKGm using a private registry configured with a self-signed certificate.

Prerequisites for an Internet-Restricted Environment

Before you can deploy TKG management and workload clusters in an Internet-restricted environment, you must have:

  • An Internet-connected Linux jumphost machine that has:
    • A minimum of 2 GB RAM, 2 vCPU, and 30 GB hard disk space.
    • Docker client installed.
    • Tanzu CLI installed. 
    • Carvel Tools installed.
    • yq version 4.9.2 or later installed.
  • An internet-restricted Linux machine with Harbor installed.
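As a minimal sketch, once the internal registry is populated, the bootstrap machine is pointed at it through environment variables before running tanzu management-cluster create (the registry hostname, project, and CA path below are placeholders):

```bash
# Use the internal Harbor registry instead of the public VMware registry.
export TKG_CUSTOM_IMAGE_REPOSITORY="registry.example.com/tkg"
export TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=false

# For a registry with a self-signed certificate, provide the CA bundle
# as a base64-encoded string.
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=$(base64 -w 0 /path/to/registry-ca.crt)
```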
Read More

Tanzu Kubernetes Grid multi-cloud Integration with VCD – Greenfield Installation

With the release of Container Service Extension 3.0.3, service providers can integrate Tanzu Kubernetes Grid multi-cloud (TKGm) with VCD to offer Kubernetes as a Service to their tenants. TKGm integration, in addition to the existing support for native Kubernetes and vSphere with Tanzu (TKGS), has truly transformed VCD into a developer-ready cloud.

With Tanzu Basic (TKGm & TKGS) on VCD, tenants have a choice of deploying Kubernetes in three different ways:

  • TKGS: Kubernetes deployment on vSphere 7, which requires the vSphere Pod Service.
  • TKGm: Multi-tenant Kubernetes deployments that do not need the vSphere Pod Service.
  • Native Kubernetes: Community-supported Kubernetes on VCD with CSE.

By offering multi-tenant managed Kubernetes services with Tanzu Basic and VCD, cloud providers can attract developer workloads to their clouds, starting with test/dev environments. Once developers have gained confidence in the Kubernetes solution, application owners can leverage VCD-powered clouds to quickly deploy test/dev K8s clusters on-premises, accelerate their cloud-native app development, and transition to production environments. Read More

Upgrade Tanzu Kubernetes Grid from v1.3.x to 1.4.x

Tanzu Kubernetes Grid 1.4 is all set to be released today. A lot of new features are coming with this release, including (but not limited to):

  • Kubernetes versions 1.21.2, 1.20.8, and 1.19.12 are supported with the 1.4 release.
  • Support for NSX Advanced Load Balancer versions 20.1.3 and 20.1.6.
  • New vSphere configuration variables “VSPHERE_REGION” and “VSPHERE_ZONE” enable CSI storage for workload clusters in vSphere environments with multiple data centers or clusters (see the sketch after this list).
  • Support for L7 ingress (using NSX ALB) for workload clusters.
  • AKO deployment is fully automated. There is no need to install it via Helm or custom YAML, as both AKO and the AKO Operator are provided as core packages.
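As a minimal sketch of the new variables, the region and zone map to vSphere tag categories applied to your data centers and clusters; the file path, tag category names, and values below are placeholders:

```bash
# Append the multi-datacenter CSI topology settings to a workload cluster
# configuration file (values must match the tag categories created in vSphere).
cat <<'EOF' >> ~/.config/tanzu/tkg/clusterconfigs/workload.yaml
VSPHERE_REGION: k8s-region
VSPHERE_ZONE: k8s-zone
EOF
```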

It’s a good time to upgrade TKG in the lab, and in this blog post I will walk through the steps of upgrading TKGm from v1.3 to v1.4.

Upgrade Procedure

Step 1: Download and install the new version of the Tanzu CLI and kubectl on the bootstrapper machine.
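As a quick sketch of the overall flow once the new CLI is in place (the workload cluster name is a placeholder; always confirm the exact sequence in the upgrade documentation for your versions):

```bash
# Confirm the v1.4.x Tanzu CLI is active.
tanzu version

# Upgrade the management cluster first, then each workload cluster.
tanzu management-cluster upgrade
tanzu cluster list
tanzu cluster upgrade my-workload-cluster
```

Read More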

Tanzu Kubernetes Grid Ingress With NSX Advanced Load Balancer

NSX ALB delivers scalable, enterprise-class container ingress for containerized workloads running in Kubernetes clusters. The biggest advantage of using NSX ALB in a Kubernetes environment is that it is agnostic to the underlying Kubernetes cluster implementation. The NSX ALB Controller integrates with the Kubernetes ecosystem via REST API and can therefore be used as an ingress and L4-L7 load balancing solution for a wide variety of Kubernetes implementations, including VMware Tanzu Kubernetes Grid.

NSX ALB provides ingress and load balancing functionality for TKG using AKO, a Kubernetes operator that runs as a pod in Tanzu Kubernetes clusters, translates the required Kubernetes objects into Avi objects, and automates the implementation of Ingresses/Routes/Services on the Service Engines (SEs) via the NSX ALB Controller.

The diagram below shows a high-level architecture of AKO interaction with NSX ALB.

AKO interacts with the Controller and Service Engines via API to automate the provisioning of Virtual Services, VIPs, and so on.
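As a minimal sketch, a standard Kubernetes Ingress like the one below is all AKO needs to program a virtual service on the Service Engines; the hostname, backend service, and ingress class name are placeholders (in TKG the AKO ingress class is commonly named avi):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  ingressClassName: avi        # hand this Ingress to AKO
  rules:
  - host: web.example.com      # placeholder FQDN resolving to the Avi VIP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc      # placeholder backend Service
            port:
              number: 80
EOF
```

Read More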

Monitor Tanzu Kubernetes Cluster with Prometheus & Grafana

Introduction

Monitoring is the most important part of any infrastructure, and Day-2 operations are heavily dependent on monitoring, alerting, and logging. Containerized applications are now part of almost every environment, and monitoring a Kubernetes cluster eases the management of containerized infrastructure by tracking the utilization of cluster resources.

As a Kubernetes operator, you would want to receive alerts if the desired number of pods is not running, if resource utilization is approaching critical limits, or when failures or misconfigurations cause pods or nodes to become unable to participate in the cluster.

Why is Kubernetes monitoring a challenge?

Kubernetes abstracts away a lot of complexity to speed up application deployment, but in the process, it leaves you blind as to what is actually happening behind the scenes, what resources are being utilized, and even the cost implications of the actions being taken. In a Kubernetes world, the number of components is typically higher than in traditional infrastructure, which makes root cause analysis more difficult when things go wrong.
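As a minimal sketch of the kind of alert mentioned above, the Prometheus rule below fires when a container restarts repeatedly; it assumes kube-state-metrics is being scraped, and the threshold and file name are examples:

```bash
cat <<'EOF' > pod-restart-alert.yaml
groups:
- name: pod-health
  rules:
  - alert: PodRestartingFrequently
    # More than 5 restarts of any container within an hour.
    expr: increase(kube_pod_container_status_restarts_total[1h]) > 5
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
EOF
```

Read More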

Centralized Logging For TKG using Fluentbit and vRealize Log Insight

Monitoring is one of the most important aspects of a production deployment. Logs are the savior when things go haywire in the environment, so capturing event logs from the infrastructure pieces is critical. Day-2 operations become easy if you have a comprehensive logging and alerting mechanism in place, as it allows for a quick response to failures in the infrastructure.

With the increasing footprint of Kubernetes workloads in the data center, centralized logging for Kubernetes is a must. The application developers who focus on developing and deploying containerized applications are usually not well versed in the backend infrastructure.

So if a developer finds errors in the application logs, they might not realize that the issue is caused by an infrastructure event in the backend, because centralized logging is not in place and infrastructure logs are stored in a different location than the application logs.

The application and infrastructure logs should be aggregated so that it’s easier to identify the real problem affecting the application.
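As a minimal sketch, Fluent Bit can forward logs to vRealize Log Insight through its syslog output; the host, port, protocol, and key names below are placeholders to adjust to your vRLI ingestion setup:

```bash
cat <<'EOF' > fluent-bit-output.conf
[OUTPUT]
    Name                syslog
    Match               *
    Host                vrli.example.com
    Port                514
    Mode                udp
    Syslog_Format       rfc5424
    Syslog_Message_Key  log
EOF
```

Read More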

Integrating Custom Registries with Tanzu Kubernetes Grid 1.3

Introduction

Tanzu Kubernetes Grid can be configured with a private registry for the rapid deployment of Kubernetes workloads. Although there are a variety of container and artifact registries out there, Harbor has drawn attention because of its accessibility, ease of use, and rich feature set.

Although public registries are out there on the internet, they might not contain everything you are looking for. In that case, you can create a custom Harbor registry to host the custom Kubernetes images to be used within your organization. A standalone Harbor registry is also a perfect fit for an air-gapped TKG deployment.

In my last post, I documented the steps for deploying a private Harbor registry for TKG. This post will show how you can leverage the registry to push/pull images for your Kubernetes deployment.

I have created a new project (named manish) in Harbor, and I will be pushing images to that custom project.
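As a minimal sketch (the registry FQDN is a placeholder; the manish project comes from the setup above), pushing an image into the custom project looks like this:

```bash
# Authenticate against the Harbor registry.
docker login harbor.example.com

# Tag a local image into the custom project and push it.
docker pull nginx:latest
docker tag nginx:latest harbor.example.com/manish/nginx:latest
docker push harbor.example.com/manish/nginx:latest
```

Read More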

Deploying Harbor Registry for Tanzu Kubernetes Grid

Introduction

Harbor is an open-source registry used to store the container images consumed by Docker/Kubernetes platforms. The images stored in the Harbor registry are secured using policies and role-based access control. Harbor delivers compliance, performance, and interoperability to help you consistently and securely manage artifacts across cloud-native compute platforms like Kubernetes and Docker.

Why Harbor

Harbor not only provides a container registry but can also perform vulnerability scanning and trust signing of your Docker images. It also has a really smooth web interface that allows you to do things like RBAC, project creation, user management, and more.

Harbor supports the replication of images between registries and also offers advanced security features such as user management, access control, and activity auditing. 

Harbor Deployment Model

Harbor can be deployed either as a regular VM-based workload or as a Kubernetes deployment. Deploying it on Kubernetes is very handy if you already have a Kubernetes management cluster.
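As a minimal sketch of the Kubernetes-based option, Harbor can be installed from its official Helm chart; the external URL and exposure settings below are placeholders to verify against the chart's values for your version:

```bash
helm repo add harbor https://helm.goharbor.io
helm repo update

# Deploy Harbor behind an ingress with a placeholder external URL.
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.type=ingress \
  --set externalURL=https://harbor.example.com
```

Read More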

Tanzu Kubernetes Grid 1.3 Deployment with NSX ALB in VMC

Tanzu Kubernetes Grid 1.3 brought many enhancements with it, one of them being support for NSX Advanced Load Balancer for load balancing Kubernetes-based workloads. TKG with NSX ALB is fully supported in VMC on AWS. In this post, I will talk about the deployment of TKG v1.3 in VMC.

In this post, I will not cover the steps of the NSX ALB deployment, as I have already documented them here.

Prerequisites

Before starting the TKG deployment in VMC, make sure you have met the following prerequisites:

  • SDDC is deployed in VMC and outbound access to vCenter is configured. 
  • Segments for NSX ALB (Mgmt & VIP) are created.
  • NSX ALB Controllers and Service Engines are deployed and controllers’ initial configuration is completed. 

Deployment Steps

Create Logical Segments & Configure DHCP

Create two DHCP-enabled logical segments (one for the TKG management cluster and one for the TKG workload cluster) in your SDDC by navigating to Networking & Security > Network > Segments.
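As a minimal sketch of what the NSX ALB section of the management cluster configuration file looks like once the segments exist (variable names follow the TKG 1.3 documentation; every value below is a placeholder):

```bash
cat <<'EOF' >> ~/.config/tanzu/tkg/clusterconfigs/mgmt.yaml
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.example.com
AVI_USERNAME: admin
AVI_PASSWORD: <controller-password>
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: tkg-vip-segment
AVI_DATA_NETWORK_CIDR: 192.168.100.0/24
AVI_CA_DATA_B64: <base64-encoded-controller-ca>
EOF
```

Read More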