Tanzu Kubernetes Grid 1.4 Installation in Internet-Restricted Environment

An air-gapped (aka internet-restricted) installation method is used when the TKG environment (bootstrap machine and cluster nodes) cannot connect to the internet to download the installation binaries from the public VMware registry during TKG installation or upgrades.

Internet-restricted environments can use an internal private registry in place of the public VMware registry. Harbor is an example of a commonly used registry solution.

This blog post covers how to install TKGm using a private registry configured with a self-signed certificate.

Prerequisites for an Internet-Restricted Environment

Before you can deploy TKG management and workload clusters in an Internet-restricted environment, you must have:

  • An Internet-connected Linux jumphost machine that has:
    • A minimum of 2 GB RAM, 2 vCPU, and 30 GB hard disk space.
    • Docker client installed.
    • Tanzu CLI installed. 
    • Carvel Tools installed.
    • yq version 4.9.2 or later installed (see the verification sketch after this list).
  • An internet-restricted Linux machine with Harbor installed.
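
Before starting the deployment, the jumphost prerequisites above can be sanity-checked with a small script. The sketch below is illustrative only and not part of the original post; it assumes the docker, tanzu, and yq binaries, if installed, are on the PATH, and that yq prints a semantic version in its --version output.

#!/usr/bin/env python3
"""Minimal sketch (not from the original post): sanity-check the
internet-connected jumphost prerequisites - docker, tanzu, and yq >= 4.9.2.
Assumes the binaries, if installed, are on PATH."""
import re
import shutil
import subprocess

def version_output(cmd):
    """Run a CLI version command and return its stdout, or '' on failure."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    except OSError:
        return ""

# Presence checks for the CLI tools mentioned in the prerequisites.
checks = {
    "docker": ["docker", "--version"],
    "tanzu": ["tanzu", "version"],
    "yq": ["yq", "--version"],
}
for tool, cmd in checks.items():
    if shutil.which(tool) is None:
        print(f"MISSING: {tool}")
    else:
        print(f"{tool}: {version_output(cmd)}")

# yq must be version 4.9.2 or later.
match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output(["yq", "--version"]))
if match and tuple(int(x) for x in match.groups()) >= (4, 9, 2):
    print("yq version satisfies the >= 4.9.2 requirement")
else:
    print("could not confirm yq >= 4.9.2")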
Read More

Resizing TKGm Cluster in VCD

This blog post explains how to resize (horizontally scale) a CSE-provisioned TKGm cluster in VCD.

In my lab, I deployed a TKGm cluster with one control plane and one worker node. 

To resize the cluster through the VCD UI, go to the Kubernetes Container Clusters page and select the TKGm cluster to resize. Click on the Resize option.

Select the number of worker nodes you want in your TKGm cluster and click the Resize button.
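
The same scale-out can also be driven declaratively instead of through the UI. The sketch below is illustrative only and not taken from this post: it assumes a CSE cluster specification YAML whose worker count lives at spec.workers.count (the exact field path depends on your CSE version) and bumps that count so the spec can be re-applied, for example with vcd cse cluster apply.

#!/usr/bin/env python3
"""Illustrative sketch (not from the original post): bump the worker count
in a CSE cluster spec YAML before re-applying it. The file name and the
field path spec.workers.count are assumptions; check the spec format of
your CSE version."""
import yaml  # PyYAML

SPEC_FILE = "tkgm-cluster-spec.yaml"   # hypothetical spec file name
NEW_WORKER_COUNT = 2                   # desired number of worker nodes

with open(SPEC_FILE) as f:
    spec = yaml.safe_load(f)

# Assumed field path; adjust to match your cluster spec schema.
spec["spec"]["workers"]["count"] = NEW_WORKER_COUNT

with open(SPEC_FILE, "w") as f:
    yaml.safe_dump(spec, f, sort_keys=False)

print(f"Updated {SPEC_FILE}: workers count = {NEW_WORKER_COUNT}")

Re-applying the updated spec should then reconcile the cluster to the new worker count, which is the CLI counterpart of the UI resize shown above.

Read More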

Error Deploying Container Service Extension 3.1.1 – No module named ‘_sqlite3’

Container Service Extension 3.1.1 was released a few days back with new enhancements. The release announcements were made here and here.

Although the deployment procedure hasn’t changed much, mine was not smooth and I faced a couple of hiccups. This blog post discusses the problem I experienced and how I resolved it.

After installing VCD-CLI using pip, I was unable to execute any vcd command; every command failed with the “No module named ‘_sqlite3’” error.
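
The full traceback is behind the Read More link, but the underlying cause of a missing _sqlite3 is usually a Python interpreter that was compiled without SQLite support (the SQLite development headers were absent at build time). A quick, illustrative check, not taken from the post, is a plain import of the sqlite3 module under the same interpreter that runs vcd-cli:

#!/usr/bin/env python3
"""Minimal sketch (not from the original post): confirm whether the Python
interpreter that vcd-cli runs under was built with SQLite support."""
try:
    import sqlite3
    print(f"sqlite3 is available (SQLite {sqlite3.sqlite_version})")
except ModuleNotFoundError as exc:
    # Mirrors the failure vcd-cli surfaces: No module named '_sqlite3'
    print(f"sqlite3 is NOT available: {exc}")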

Read More

Unable to delete TKGm clusters in VCD

I encountered an issue while playing with Container Service Extension 3.1.1 in my lab, where I was unable to create TKGm clusters. During troubleshooting, I discovered that the Rights Bundle “cse:nativeCluster Entitlement” was missing certain critical rights that were newly added in CSE 3.1.1.

When I attempted to delete the failed clusters, they got stuck in the “DELETE:IN_PROGRESS” state.

When I attempted to delete a failed cluster via vcd-cli, the operation failed with the error “RDE_ENTITY_NOT_RESOLVED”.
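
One way to dig into such failures is to inspect, and if your rights allow it, resolve, the cluster's backing Runtime Defined Entity (RDE) through the VCD CloudAPI. The sketch below is illustrative only and not the exact procedure from this post; VCD_HOST, TOKEN, ENTITY_ID, and the API version header are all assumptions to replace with values from your environment.

#!/usr/bin/env python3
"""Illustrative sketch (not the exact procedure from the post): inspect a
cluster's Runtime Defined Entity and ask VCD to resolve it via the CloudAPI.
VCD_HOST, TOKEN, ENTITY_ID, and the API version are assumptions."""
import requests

VCD_HOST = "vcd.example.com"                               # hypothetical FQDN
ENTITY_ID = "urn:vcloud:entity:cse:nativeCluster:<uuid>"   # cluster RDE id
TOKEN = "<bearer-token>"                                   # API/bearer token

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json;version=36.0",  # adjust to your VCD release
}

# Look at the entity's current state (e.g. RESOLVED, RESOLUTION_ERROR).
entity = requests.get(
    f"https://{VCD_HOST}/cloudapi/1.0.0/entities/{ENTITY_ID}",
    headers=headers, verify=False,  # lab with self-signed certificates
)
print("GET entity:", entity.status_code)
if entity.ok:
    print("state:", entity.json().get("state"))

# Ask VCD to (re)resolve the entity; once resolved, deletion can proceed.
resolve = requests.post(
    f"https://{VCD_HOST}/cloudapi/1.0.0/entities/{ENTITY_ID}/resolve",
    headers=headers, verify=False,
)
print("POST resolve:", resolve.status_code)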

Read More

NSX ALB Integration with VCD-Part 5: Load Balancing in Action

Welcome to the last post of this series. If you have been following along, you should by now be familiar with how NSX ALB integrates with VCD to provide “Load Balancing as a Service” (LBaaS).

In this post, I will demonstrate how tenants can leverage NSX ALB to create load balancer constructs (Virtual Services, Pools, etc.).

If you haven’t read the previous posts in this series, I recommend you do so using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

3: Implementing Dedicated Service Engine Groups Design

4: Implementing Shared Service Engine Groups Design

Tenant vStellar has deployed a couple of servers connected to a routed network “Prod-GW”, with IP addresses 192.168.40.5 and 192.168.40.6 respectively.

Both servers are running an HTTP web server and are accessible via their local IP.

The tenant wants to load balance these web servers by leveraging NSX ALB.
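
Before creating any load balancer constructs, it is worth confirming that both backend web servers answer over HTTP on their local IPs. The short sketch below is not from the post; it assumes the machine running it can reach the 192.168.40.0/24 network and that both servers listen on port 80.

#!/usr/bin/env python3
"""Small sketch (not from the post): confirm both backend web servers on the
Prod-GW network answer over HTTP before creating the virtual service."""
import requests

BACKENDS = ["192.168.40.5", "192.168.40.6"]

for ip in BACKENDS:
    try:
        resp = requests.get(f"http://{ip}", timeout=5)
        print(f"{ip}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{ip}: unreachable ({exc})")

Read More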

NSX ALB Integration with VCD-Part 4: Shared Service Engine Groups

Welcome to the 4th part of the NSX Advanced Load Balancer Integration with VMware Cloud Director series. The first post in this series covered Service Engine design topologies, while the second covered the process of enabling “Load Balancing as a Service” in VCD. The deployment of the Dedicated Service Engine design was demonstrated in the third post.

This post covers the implementation of the Shared Service Engine Groups design.

If you haven’t read the previous posts in this series, I recommend you do so using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

3: Implementing Dedicated Service Engine Groups Design

In the Shared Service Engine Group design, tenants’ Edge Gateways can leverage a common Service Engine Group for load balancer and virtual service placement. Since VCD tenants can have overlapping org networks in their respective orgs, data traffic segregation is achieved by implementing VRFs in NSX ALB.

Read More

NSX ALB Integration with VCD-Part 3: Dedicated Service Engine Groups

I discussed the supported designs for NSX ALB integration with VMware Cloud Director in the first post of this series. Part 2 of this series described how to enable “Load Balancing as a Service” in VCD.

If you missed any of the previous posts in this series, I recommend that you read them using the links provided below.

1: NSX ALB Integration with VCD – Supported Designs

2: NSX ALB Integration in VCD

This blog post is focused on implementing the Dedicated Service Engine Groups design.

The diagram below shows a high-level overview of the Dedicated SEG design in VCD.

In this design, the Service Engine management network (eth0) is attached to a tier-1 gateway that is dedicated to NSX ALB management and provisioned by the service provider. When a tenant creates a Virtual Service, a logical segment corresponding to the VIP network is automatically created and attached to the tenant’s tier-1 gateway.

Read More