Tanzu Mission Control Self-Managed – Part 3: Configure Identity Provider

Welcome to Part 3 of the TMC Self-Managed series. Part 1 concentrated on a general introduction to TMC Self-Managed, while Part 2 dived into the DNS configuration. If you missed the previous entries in this series, you can catch up using the links below.

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

Tanzu Mission Control Self-Managed handles user authentication using Pinniped Supervisor as the identity broker and requires an existing OIDC-compliant identity provider (IDP). Examples of OIDC-compliant IDPs are Okta, Keycloak, VMware Workspace One, etc. Pinniped Supervisor needs your IDP's issuer URL, client ID, and client secret to complete the integration.

Note: This post demonstrates configuring Okta as the IDP. Although Okta is a SaaS service reachable over the internet, the intent is to show how you configure an upstream IDP for authentication. In an air-gapped environment, you can use any IDP that doesn't require an internet connection.
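
Before wiring the IDP into Pinniped Supervisor, it is worth confirming that the issuer URL actually serves OIDC discovery metadata, since that is what Pinniped relies on. Below is a minimal sketch using Python's standard library; the issuer URL is a placeholder for your own Okta (or other IDP) authorization server.

```python
import json
import urllib.request

# Placeholder issuer URL -- replace with your own IDP's issuer
# (e.g. the authorization server URL from your Okta admin console).
ISSUER_URL = "https://example.okta.com/oauth2/default"

def check_oidc_discovery(issuer: str) -> None:
    """Fetch the OIDC discovery document and print the endpoints
    an OIDC client such as Pinniped Supervisor relies on."""
    url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    with urllib.request.urlopen(url, timeout=10) as resp:
        metadata = json.load(resp)

    # A compliant IDP must publish these fields.
    for field in ("issuer", "authorization_endpoint", "token_endpoint", "jwks_uri"):
        print(f"{field}: {metadata.get(field, 'MISSING')}")

if __name__ == "__main__":
    check_oidc_discovery(ISSUER_URL)
```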

Tanzu Mission Control Self-Managed – Part 2: Configure DNS

In the first post of this series, I covered a basic introduction to TMC Self-Managed, its general architecture, and the requirements your environment needs to meet before installing it. Before we get our hands dirty with the installation, this post covers configuring DNS zones and records.

To enable correct traffic flow and access to the various TMC endpoints, TMC Self-Managed needs a DNS zone to hold its DNS records. You can create a new DNS zone or leverage an existing one. To ensure name resolution between the objects deployed during the TMC Self-Managed installation, create the following A records in your DNS domain.

  • alertmanager.<my-tmc-dns-zone>
  • auth.<my-tmc-dns-zone>
  • blob.<my-tmc-dns-zone>
  • console.s3.<my-tmc-dns-zone>
  • gts-rest.<my-tmc-dns-zone>
  • gts.<my-tmc-dns-zone>
  • landing.<my-tmc-dns-zone>
  • pinniped-supervisor.<my-tmc-dns-zone>
  • prometheus.<my-tmc-dns-zone>
  • s3.<my-tmc-dns-zone>
  • tmc-local.s3.<my-tmc-dns-zone>
  • tmc.<my-tmc-dns-zone>

To simplify the installation procedure, you can also use a wildcard DNS entry as shown below (for POCs only).

Record Type    Record Name              Value
A              *.<my-tmc-dns-zone>      load balancer IP
A              <my-tmc-dns-zone>        load balancer IP

The IP address for the above records must point to the external IP of the contour-envoy service that is deployed during the installation.
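
As a sanity check before starting the installation, you can confirm that every required record resolves to that external IP. The following is a minimal sketch in Python; the DNS zone and IP address are placeholders for your own environment.

```python
import socket

# Placeholders -- replace with your TMC DNS zone and the external IP
# assigned to the contour-envoy service.
DNS_ZONE = "my-tmc.example.com"
EXPECTED_IP = "10.10.10.100"

RECORDS = [
    "alertmanager", "auth", "blob", "console.s3", "gts-rest", "gts",
    "landing", "pinniped-supervisor", "prometheus", "s3", "tmc-local.s3", "tmc",
]

def check_records() -> None:
    """Resolve each required A record and compare it with the expected IP."""
    for record in RECORDS:
        fqdn = f"{record}.{DNS_ZONE}"
        try:
            ip = socket.gethostbyname(fqdn)
            status = "OK" if ip == EXPECTED_IP else f"MISMATCH ({ip})"
        except socket.gaierror:
            status = "NOT RESOLVING"
        print(f"{fqdn}: {status}")

if __name__ == "__main__":
    check_records()
```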

Tanzu Mission Control Self-Managed – Part 1: Introduction & Architecture

Introduction

VMware Tanzu Mission Control is a SaaS offering available through VMware Cloud Services that provides:

  • A centralized platform to deploy and manage Kubernetes clusters across multiple clouds.
  • The ability to attach existing Kubernetes clusters to the TMC portal for centralized operations and management.
  • A policy engine that automates access-control and security policies across a fleet of clusters.
  • Security management across multiple clusters.
  • Centralized authentication and authorization, with federated identity from multiple sources.

TMC SaaS cannot be used in certain environments because of compliance or data-governance requirements. Industries such as banking, healthcare, and defense usually run workloads in air-gapped environments (dark sites). Imagine operating a large number of Kubernetes clusters without any single pane of glass to manage day-1 and day-2 operations across them. VMware understood this pain and introduced Tanzu Mission Control Self-Managed (TMC-SM), an installable product that you can deploy in your own environment.

TMC Self-Managed can be installed in data centers, sovereign clouds, and service-provider environments.

TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging Avi Kubernetes operator (AKO), which delivers L4+L7 load balancing to the Kubernetes API endpoint and the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.

The Global Server Load Balancing (GSLB) function of NSX ALB enables load balancing for globally distributed applications and workloads (typically spread across different data centers and public clouds). GSLB offers efficient traffic distribution across widely scattered application servers, enabling an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in data centers, organizations are deploying these workloads across multi-cluster, multi-site environments, creating the need for a way to load-balance applications globally.

To meet this requirement, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters.

Layer 7 Ingress in vSphere with Tanzu using NSX ALB

Introduction

vSphere with Tanzu currently doesn't provide the AKO orchestration feature out of the box; in other words, you can't automate the deployment of AKO pods based on cluster labels. No AkoDeploymentConfig is created when you enable workload management on a vSphere cluster, so nothing runs in your supervisor cluster to watch the cluster labels and trigger an automated AKO installation in the workload clusters.

However, this does not preclude you from using NSX ALB to provide layer-7 ingress for your workload clusters. In a vSphere with Tanzu environment, AKO is installed via Helm charts as a completely self-managed solution, and you are in charge of maintaining the AKO life cycle.
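
As a quick post-install check, you can list the AKO pods from the workload cluster. The snippet below is a small sketch using the Kubernetes Python client (`pip install kubernetes`); it assumes AKO was installed into its default `avi-system` namespace and that your current kubeconfig context points at the workload cluster.

```python
from kubernetes import client, config

# Assumes AKO was installed into its default namespace via Helm.
AKO_NAMESPACE = "avi-system"

def check_ako_pods() -> None:
    """List AKO pods in the workload cluster and print their phase."""
    config.load_kube_config()          # uses the current kubeconfig context
    core_v1 = client.CoreV1Api()
    pods = core_v1.list_namespaced_pod(namespace=AKO_NAMESPACE)
    if not pods.items:
        print(f"No pods found in namespace '{AKO_NAMESPACE}' -- is AKO installed?")
        return
    for pod in pods.items:
        print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    check_ako_pods()
```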

My Lab Setup

My lab’s bill of materials is shown below.

Component               Version
NSX ALB (Enterprise)    20.1.7
AKO                     1.6.2
vSphere                 7.0 U3c
Helm                    3.7.4

The current setup of the NSX ALB is shown in the table below.

Backup and Restore TKG Clusters using Velero Standalone

In my last post, I discussed how Tanzu Mission Control Data Protection can be used to back up and restore stateful Kubernetes applications. In this tutorial, I'll show you how to back up and restore applications using Velero standalone.

Why Use Velero Standalone?

You might wonder why you need Velero standalone when TMC Data Protection already makes backing up and restoring Kubernetes applications easier. The answer is pretty simple: TMC currently cannot restore data across clusters. Backup and restore are limited to a single cluster, which is not ideal from a business-continuity point of view.

You can easily circumvent this limitation by using Velero standalone. In this approach, Velero is installed separately in both the source and target clusters, and both clusters have access to the S3 bucket where backups are kept. If your source cluster is completely lost in a disaster, you can redeploy the applications by pulling the backup from S3 and restoring it on the target cluster.
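
The end-to-end flow is simply a backup on the source cluster followed by a restore on the target cluster against the same S3 bucket. The sketch below wraps the Velero CLI with Python's subprocess module; the context names, namespace, and backup name are placeholders for your own environment.

```python
import subprocess

# Placeholders -- replace with your own kubeconfig context names,
# application namespace, and backup name.
SOURCE_CONTEXT = "source-cluster"
TARGET_CONTEXT = "target-cluster"
APP_NAMESPACE = "demo-app"
BACKUP_NAME = "app-backup-01"

def run(cmd: list[str]) -> None:
    """Run a Velero CLI command and fail loudly if it returns an error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Back up the application namespace from the source cluster and wait
# for the backup to complete before moving on.
run(["velero", "backup", "create", BACKUP_NAME,
     "--include-namespaces", APP_NAMESPACE,
     "--kubecontext", SOURCE_CONTEXT, "--wait"])

# Restore on the target cluster. Because both clusters use the same S3
# bucket, the backup appears on the target once its backup location syncs.
run(["velero", "restore", "create", f"{BACKUP_NAME}-restore",
     "--from-backup", BACKUP_NAME,
     "--kubecontext", TARGET_CONTEXT, "--wait"])
```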

Backing Up Stateful Applications using TMC Data Protection

Introduction

Kubernetes is frequently thought of as a platform for stateless workloads because the majority of its resources are ephemeral. However, as Kubernetes grows in popularity, enterprises are deploying more and more stateful applications. Because stateful workloads require persistent storage for application data, you can no longer simply redeploy them in the event of a disaster.

As businesses invest extensively in Kubernetes and deploy more and more containerized applications across multiple clouds, providing adequate data protection in a distributed environment becomes a challenge that must be addressed.

Data protection in Tanzu Mission Control (TMC) is provided by Velero, an open-source project. Velero backups typically include application and cluster data such as config maps, custom resource definitions, and secrets, which are then re-applied to a cluster during restoration. Resources that use a persistent volume are backed up using Restic.

In this post, I'll show how to back up and recover a stateful application running in a Tanzu Kubernetes cluster.

Using Custom S3 Storage (MinIO) with TMC Data Protection

Introduction

Data protection in TMC is provided by Velero, an open-source project that came to VMware with the Heptio acquisition.

When data protection is enabled on a Kubernetes cluster, backups are stored externally to TMC. TMC supports both AWS S3 and custom S3 storage locations for backups. Configuring the AWS S3 endpoint is pretty simple, as TMC provides a CloudFormation script that handles the backend tasks such as creating S3 buckets, assigning permissions, and so on.

AWS S3 might not be a suitable solution in every case. For instance, a customer may have already invested heavily in an S3-compatible solution (MinIO, Cloudian, etc.). TMC allows customers to bring their own self-provisioned AWS S3 bucket or S3-compatible on-prem storage locations for their Kubernetes clusters.

In this post, I will show how you can use on-prem S3 storage for storing Kubernetes backups taken with TMC Data Protection.
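
As a preview of the approach, the snippet below shows how an S3 client is pointed at an on-prem endpoint such as MinIO instead of AWS. It is a small sketch using boto3 (`pip install boto3`); the endpoint, credentials, and bucket name are placeholders for your own MinIO deployment.

```python
import boto3

# Placeholders -- replace with your MinIO endpoint, credentials, and bucket.
ENDPOINT_URL = "https://minio.example.com:9000"
ACCESS_KEY = "minio-access-key"
SECRET_KEY = "minio-secret-key"
BUCKET = "tmc-backups"

# The same S3 API calls work against MinIO; only endpoint_url differs from AWS.
s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT_URL,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)

# Create the bucket that the data-protection backups will be written to.
existing = [b["Name"] for b in s3.list_buckets().get("Buckets", [])]
if BUCKET not in existing:
    s3.create_bucket(Bucket=BUCKET)
    print(f"Created bucket '{BUCKET}'")
else:
    print(f"Bucket '{BUCKET}' already exists")
```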

Tanzu Kubernetes Grid 1.4 Installation in Internet-Restricted Environment

An air-gapped (aka internet-restricted) installation method is used when the TKG environment (bootstrap machine and cluster nodes) cannot connect to the internet to download installation binaries from the public VMware registry during TKG installation or upgrades.

Internet-restricted environments can use an internal private registry in place of the public VMware registry. Harbor is a commonly used registry solution.

This blog post covers how to install TKGm using a private registry configured with a self-signed certificate.

Prerequisites for an Internet-Restricted Environment

Before you can deploy TKG management and workload clusters in an internet-restricted environment, you must have the following (a quick prerequisite check is sketched after the list):

  • An Internet-connected Linux jumphost machine that has:
    • A minimum of 2 GB RAM, 2 vCPU, and 30 GB hard disk space.
    • Docker client installed.
    • Tanzu CLI installed. 
    • Carvel Tools installed.
    • yq version 4.9.2 or later installed.
  • An internet-restricted Linux machine with Harbor installed.
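If you want to quickly sanity-check the jumphost, the following is a minimal sketch that verifies Docker, the Tanzu CLI, and yq are on the PATH and that yq meets the minimum version; the version parsing assumes mikefarah yq's `--version` output format.

```python
import shutil
import subprocess

REQUIRED_TOOLS = ["docker", "tanzu", "yq"]
MIN_YQ_VERSION = (4, 9, 2)

def yq_version() -> tuple[int, ...]:
    """Parse the version from `yq --version`, e.g. '... version 4.9.2'."""
    out = subprocess.run(["yq", "--version"], capture_output=True, text=True).stdout
    digits = out.strip().split()[-1].lstrip("v")
    return tuple(int(part) for part in digits.split("."))

# Confirm each required tool is installed and reachable on the PATH.
for tool in REQUIRED_TOOLS:
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'NOT FOUND'}")

# Enforce the minimum yq version required by the TKG tooling.
if shutil.which("yq") and yq_version() < MIN_YQ_VERSION:
    print(f"yq is older than {'.'.join(map(str, MIN_YQ_VERSION))} -- please upgrade")
```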

Tanzu Kubernetes Grid multi-cloud Integration with VCD – Greenfield Installation

With the release of Container Service Extension 3.0.3, service providers can integrate Tanzu Kubernetes Grid multi-cloud (TKGm) with VCD to offer Kubernetes as a Service to their tenants. TKGm integration, in addition to the existing support for native Kubernetes and vSphere with Tanzu (TKGS), has truly transformed VCD into a developer-ready cloud.

With Tanzu Basic (TKGm & TKGS) on VCD, tenants have a choice of deploying Kubernetes in three different ways:

  • TKGS: Kubernetes deployment on vSphere 7, which requires the vSphere Pod Service.
  • TKGm: Multi-tenant Kubernetes deployments that do not need the vSphere Pod Service.
  • Native K8s: Community-supported Kubernetes on VCD with CSE.

By offering multi-tenant managed Kubernetes services with Tanzu Basic and VCD, cloud providers can attract developer workloads, starting with test/dev environments, to their clouds. Once developers have gained confidence in the Kubernetes solution, application owners can leverage VCD-powered clouds to quickly deploy test/dev Kubernetes clusters on-premises, accelerate their cloud-native application development, and transition to production environments.