Install Container Service Extension 4.2 in an Airgap Environment

Introduction

VMware Cloud Director Container Service Extension (CSE) enables cloud providers to offer Kubernetes as a service to their tenants. CSE helps tenants quickly deploy Tanzu Kubernetes Grid (TKG) clusters in their virtual data centers with just a few clicks, directly from the tenant portal. Tenants can manage their clusters using Tanzu products and services, such as Tanzu Mission Control, in conjunction with CSE.

Until CSE 4.0, deploying TKG clusters depended on internet connectivity to pull the necessary installation binaries from the VMware public image registry; airgap environments were not supported.

With CSE 4.1, VMware introduced support for deploying CSE in an airgap environment. Before diving into the nitty-gritty of configuring CSE, let’s look at the CSE airgap architecture.

CSE Airgap Architecture

The image below is from the CSE product documentation and depicts the high-level architecture and service provider workflow of CSE in an airgap setup.

In this architecture, VCD and CSE are deployed in an internet-restricted environment, along with a local image registry (e.g., Harbor) to store the artifacts required for TKG cluster deployment.

The service provider downloads the required artifacts on an internet-connected machine located outside the airgap environment; let’s call this machine the ‘bastion host’. Once the artifacts have been downloaded, transfer them to the airgap environment using your organization’s preferred file transfer method.

Inside the airgap environment, you can upload the artifacts either directly on the registry VM or on a dedicated machine (e.g., a transfer VM), and then push them into the registry in your environment. The machine where you stage the artifacts should have Docker and the Tanzu CLI (with its plugins) installed.

Note: This article demonstrates using Harbor as an image registry solution.

Configuration/Deployment Steps

Follow the steps provided below to configure the airgap environment.

Download Artifacts

You need to download the following artifacts on the bastion host.

  • TKG installation binaries.
  • Tanzu CLI plugin bundle.
  • CSE artifacts.

Depending on the version of TKG clusters that will be deployed, download the installation binaries by following the instructions provided in the TKG product documentation. This article provides steps for TKG 2.4.0.

Note: It is assumed here that you have already installed Docker Engine, the Tanzu CLI, and the Tanzu plugins on the bastion host.

1: Download the TKG installation binaries.
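For TKG 2.4, the documented approach uses the `isolated-cluster` Tanzu CLI plugin to pull every image required for cluster creation into local tar bundles. A minimal sketch (the source repo and version shown are examples; confirm them against the TKG 2.4 documentation):

```shell
# Pull all TKG v2.4.0 images from the VMware public registry into
# tar bundles in the current directory (needs the isolated-cluster plugin).
tanzu isolated-cluster download-bundle \
  --source-repo projects.registry.vmware.com/tkg \
  --tkg-version v2.4.0
```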

2: Download the Tanzu Plugin Bundle
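The Tanzu CLI can package its plugins into a single tar for offline use. A sketch, assuming the TKG 2.4 plugin group (adjust the group and version to match your deployment):

```shell
# Bundle the plugins from the vmware-tkg/default group into a tar archive.
tanzu plugin download-bundle \
  --group vmware-tkg/default:v2.4.0 \
  --to-tar plugin-bundle.tar.gz
```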

3: Download CSE Artifacts
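The CSE images can be pulled with Docker and saved into the `temp_container.tar` archive referenced later in this post. The image name below is a placeholder; take the exact image list and tags from the CSE 4.2 airgap documentation:

```shell
# Placeholder image name -- substitute the images listed in the CSE 4.2 docs.
docker pull projects.registry.vmware.com/vcdcse/<cse-image>:<tag>
docker save -o temp_container.tar projects.registry.vmware.com/vcdcse/<cse-image>:<tag>
```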

Transfer Artifacts

Transfer the downloaded TKG binaries, plugin-bundle.tar.gz, and temp_container.tar to the airgap environment and upload them to the registry/transfer VM.
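Because a partially copied tar surfaces only later as confusing push failures, it’s worth verifying the transfer with checksums. A small helper sketch (the file names are from this post; the manifest name `artifacts.sha256` is my own):

```shell
# verify_transfer: check local files against a pre-generated checksum manifest.
verify_transfer() {
  sha256sum --check "$1"
}
# On the bastion host:
#   sha256sum plugin-bundle.tar.gz temp_container.tar > artifacts.sha256
# On the registry/transfer VM, after copying everything across:
#   verify_transfer artifacts.sha256
```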

Configure Image Registry

1: Create the following projects in the Harbor registry:
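Projects can be created in the Harbor UI or scripted against Harbor’s v2.0 REST API. A hedged sketch (the FQDN, credentials, and project name are placeholders; use the exact project names required by the CSE 4.2 documentation):

```shell
# Create a public Harbor project via the v2.0 API. The -k flag is only
# needed when Harbor uses self-signed certificates, as in my lab.
curl -sk -u 'admin:<password>' -X POST \
  "https://<harbor-fqdn>/api/v2.0/projects" \
  -H 'Content-Type: application/json' \
  -d '{"project_name": "<project-name>", "public": true}'
```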

Important Note: Don’t create projects with any name other than what is shown above. CSE has a strict dependency on the path where the TKG images are uploaded. Refer to the CSE 4.2 documentation for more information.

2: Push artifacts into the Harbor registry.

Note: If you’ve uploaded the artifacts to the transfer VM and Harbor uses self-signed certificates, you’ll need the Harbor CA certificate on the transfer VM. In my lab, Harbor uses self-signed certificates.

2.1: Install Tanzu CLI and plugins on the transfer VM following the instructions outlined here.

2.2: Add the Harbor certificate to the Tanzu configuration.
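With a self-signed Harbor, the Tanzu CLI must trust the registry’s CA before it can pull or push. A sketch, assuming the CA file has been copied to the transfer VM as `ca.crt`:

```shell
# Register the Harbor CA certificate with the Tanzu CLI.
tanzu config cert add --host <harbor-fqdn> --ca-certificate ca.crt
```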

2.3: Upload Tanzu CLI plugin bundle to Harbor
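The plugin bundle downloaded on the bastion host can now be pushed into Harbor. A sketch (the target project path is a placeholder):

```shell
# Push the plugin inventory and plugin images from the tar into Harbor.
tanzu plugin upload-bundle \
  --tar plugin-bundle.tar.gz \
  --to-repo <harbor-fqdn>/<project>
```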

2.4: Update the Tanzu CLI to point to the new plugin source
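Point the CLI’s default plugin discovery source at the inventory image now hosted in Harbor. A sketch (the repo path should mirror whatever you used in the upload step):

```shell
# Switch plugin discovery from the VMware public registry to local Harbor.
tanzu plugin source update default \
  --uri <harbor-fqdn>/<project>/plugin-inventory:latest
```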

2.5: Verify that the plugins are discoverable:
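Both individual plugins and plugin groups should now resolve from the local source:

```shell
# List plugin groups and plugins discoverable from the configured source.
tanzu plugin group search
tanzu plugin search
```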

2.6: Install the CLI plugins:
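Plugins can be installed one by one or as a group; the group shown assumes TKG 2.4:

```shell
# Install every plugin in the TKG 2.4 plugin group from local Harbor.
tanzu plugin install --group vmware-tkg/default:v2.4.0
```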

2.7: Upload the TKG images.
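The tar bundles produced by `download-bundle` on the bastion host are pushed with the matching `upload-bundle` command. A sketch (the bundle directory, project path, and CA file are placeholders):

```shell
# Push the TKG images from the local tar bundles into Harbor.
tanzu isolated-cluster upload-bundle \
  --source-directory <bundle-directory> \
  --destination-repo <harbor-fqdn>/<project> \
  --ca-certificate ca.crt
```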

2.8: Upload CSE images.
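The CSE images saved into `temp_container.tar` are loaded, retagged for the local registry, and pushed. The image names below are placeholders; keep the repository path expected by CSE 4.2:

```shell
# Load the saved CSE images, retag them for local Harbor, and push.
docker load -i temp_container.tar
docker tag projects.registry.vmware.com/vcdcse/<cse-image>:<tag> \
  <harbor-fqdn>/<project>/<cse-image>:<tag>
docker push <harbor-fqdn>/<project>/<cse-image>:<tag>
```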

After you finish uploading the artifacts, log in to the Harbor registry and verify that the images are visible.

Configure CSE Server

If you are performing a greenfield deployment of CSE, refer to the product documentation for instructions. You can also refer to my blog posts on CSE deployment and configuration.

In a brownfield environment, you need to update the CSE server configuration to include your registry URL and certificate information.

To update the CSE server configuration, log in to VCD as a system administrator, navigate to More > Kubernetes Container Clusters > CSE Management > Server Details, and click Update Server.

Select the Update Configuration option and click Next.

In the Container Registry Settings section, provide the registry URL. If the private registry uses self-signed certificates, add them to the Bootstrap VM and Cluster Certificate sections. This enables cluster virtual machines, such as bootstrap (ephemeral) and node VMs, to trust the private registry.

Note: Configuring private registry settings applies to both greenfield and brownfield deployments.

Restart the existing CSE Server vApp to apply the updated configuration.

Here’s an example of a TKG cluster deployed in my lab using an airgap CSE configuration.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.
