Deploying Tanzu on VDS Networking-Part 3: Create Namespace & Deploy TKG Cluster

In the last post of this series, I demonstrated the deployment of the supervisor cluster, which acts as the base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Let’s get started.

Create Namespace

A namespace enables a vSphere administrator to control the resources that are available for developers to provision TKG clusters. Using namespaces, vSphere administrators can stop developers from consuming more resources than are assigned to them.

To create a namespace, navigate to Menu > Workload Management and click Create Namespace.

  • Select the cluster where the namespace will be created and provide a name for the namespace.  
Read More
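Once the namespace is ready, the next step is deploying a TKG cluster into it. Below is a minimal sketch of that step using the official Kubernetes Python client, assuming you have already logged in to the supervisor cluster (for example with the kubectl vsphere plugin) so that a valid kubeconfig context exists. The namespace name, VM classes, storage class, and Kubernetes version are placeholders that must be replaced with values from your own environment, and the field layout follows the v1alpha1 TanzuKubernetesCluster schema as I recall it, so double-check it with kubectl explain before applying.

```python
# Sketch: create a TanzuKubernetesCluster in a vSphere namespace using the
# Kubernetes Python client. Assumes a kubeconfig context for the supervisor
# cluster already exists (e.g. via 'kubectl vsphere login'). All names,
# classes, and versions below are lab placeholders.
from kubernetes import client, config

NAMESPACE = "demo-namespace"  # the vSphere namespace created earlier (placeholder)

tkc = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "tkg-cluster-01", "namespace": NAMESPACE},
    "spec": {
        "topology": {
            "controlPlane": {"count": 3, "class": "best-effort-small",
                             "storageClass": "vsan-default-storage-policy"},
            "workers": {"count": 3, "class": "best-effort-small",
                        "storageClass": "vsan-default-storage-policy"},
        },
        "distribution": {"version": "v1.18"},
    },
}

config.load_kube_config()  # picks up the supervisor cluster context
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="run.tanzu.vmware.com",
    version="v1alpha1",
    namespace=NAMESPACE,
    plural="tanzukubernetesclusters",
    body=tkc,
)
print("TanzuKubernetesCluster request submitted")
```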

Deploying Tanzu on VDS Networking-Part 2: Configure Workload Management

In the first post of this series, I talked about the vSphere with Tanzu architecture and explained the deployment and configuration of the HA Proxy appliance, which acts as a load balancer for TKG guest clusters and the supervisor cluster.

In this post, I will walk through the steps of enabling Workload Management, which deploys and configures the supervisor cluster.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Note: Before enabling Workload Management, make sure you have a vSphere with Tanzu license available. If you don’t have one, you can register for the product to get a 60-day evaluation license.

Prerequisites for Enabling Workload Management

  • A vSphere cluster with both DRS and HA enabled (a quick pyVmomi check for this is sketched below).
  • A vSphere Distributed Switch (VDS) with all ESXi hosts added to it.
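These prerequisites are easy to verify programmatically. Below is a rough sketch using pyVmomi that lists every cluster along with its DRS and HA state; the vCenter address and credentials are placeholders from my lab, and certificate verification is disabled purely for lab use.

```python
# Sketch: list DRS/HA state for every cluster in vCenter using pyVmomi.
# Hostname and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        cfg = cluster.configurationEx
        print(f"{cluster.name}: DRS={cfg.drsConfig.enabled}, HA={cfg.dasConfig.enabled}")
    view.Destroy()
finally:
    Disconnect(si)
```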
Read More

Deploying Tanzu on VDS Networking-Part 1: HA Proxy Configuration

VMware Tanzu is a portfolio of products and solutions that allows customers to build, run, and manage containerized (Kubernetes-orchestrated) applications. An earlier release of Tanzu was offered with VCF and leveraged NSX-T networking. VMware later decoupled Tanzu from VCF; the resulting solution, called vSphere with Tanzu, can be deployed directly on a vSphere Distributed Switch (VDS) without NSX-T.

vSphere with Tanzu on VDS has the following limitations: 

  • No support for vSphere Pods when NSX-T is not used.
  • No support for Harbor Image Registry.

When Tanzu (Workload Management) is enabled on vSphere, a supervisor cluster consisting of three nodes is deployed for high availability, so some form of load balancing is needed to distribute traffic coming to the supervisor cluster. VMware uses a customized version of HA Proxy as the software load balancer for the supervisor nodes.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Networking Requirements for HA Proxy

Before I jump into the lab and walk through the deployment and configuration steps, I want to discuss the networking prerequisites for HA Proxy. Read More
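One of those prerequisites is worth calling out right away: the load balancer (frontend) IP range handed to HA Proxy must not overlap with the addresses used by the supervisor and TKG cluster nodes on the workload network. A trivial sanity check with Python's ipaddress module, using placeholder ranges from my lab:

```python
# Sketch: verify that the HA Proxy load balancer IP range does not overlap
# with the workload node address range. Both ranges are lab placeholders.
import ipaddress

workload_node_range = ipaddress.ip_network("192.168.100.0/25")    # supervisor/TKG node IPs
lb_vip_range        = ipaddress.ip_network("192.168.100.128/25")  # range handed to HA Proxy

if workload_node_range.overlaps(lb_vip_range):
    print("Overlap detected: fix the ranges before deploying HA Proxy")
else:
    print(f"Ranges look good: nodes={workload_node_range}, load balancer VIPs={lb_vip_range}")
```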

Nested ESXi Gotchas with VCF

Nested ESXi is a great way to quickly spin up a test/demo environment and fiddle around with things in the lab. I have been doing so for quite a while now. VCF is very dear to my heart, and because VCF needs a whole lot of resources, I always test new versions and features in my nested lab.

Nested ESXi doesn’t always behave nicely and sometimes gives you a hard time, as I recently found out in one of my VCF deployments.

What was the problem and how did it start?

The problem was with the ESXi UUID, due to which the vSAN configuration was failing. I will talk more about this later in the post.

To save time, I created a nested ESXi template following this article. I deployed a few ESXi hosts and everything was working fine. One day I tweaked my template to inject some advanced parameters and booted the template VM. This generated a new UUID entry for ESXi in /etc/vmware/esx.conf. Read More
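One common way this bites you with cloned or templated nested hosts is duplicate /system/uuid values across ESXi hosts, which vSAN does not tolerate. Here is a small, hypothetical helper that would have saved me some time: copy /etc/vmware/esx.conf from each host (scp or similar, not shown here) and compare the UUIDs. The file names are placeholders.

```python
# Sketch: flag nested ESXi hosts that share the same /system/uuid value in
# their esx.conf, which breaks vSAN. File names are placeholders; the files
# are assumed to have been copied off each host beforehand.
import re
from collections import defaultdict

files = {
    "esxi01": "esxi01-esx.conf",
    "esxi02": "esxi02-esx.conf",
    "esxi03": "esxi03-esx.conf",
}

uuid_pattern = re.compile(r'^/system/uuid\s*=\s*"([^"]+)"', re.MULTILINE)
seen = defaultdict(list)

for host, path in files.items():
    with open(path) as f:
        match = uuid_pattern.search(f.read())
    if match:
        seen[match.group(1)].append(host)

for uuid, hosts in seen.items():
    if len(hosts) > 1:
        print(f"Duplicate UUID {uuid} shared by: {', '.join(hosts)}")
```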

NSX-T Routing With OSPF

Introduction

NSX-T 3.1.1 introduced support for the OSPFv2 routing protocol on Tier-0 gateways. This had been one of the most awaited features for quite some time, and the introduction of OSPF to NSX-T removes one of the major hindrances that was stopping customers from migrating to NSX-T.

There are lots of customers who are still running NSX-V in their environment with OSPF as the routing protocol in their infrastructure. Now that NSX-T supports OSPF, these customers can do a greenfield deployment of NSX-T and move workloads from NSX-V to NSX-T using an L2 bridge, without many changes to their physical network.

Since this feature is pretty new, it will be interesting to see how soon customers adopt this in their environment. 

Disclaimer: This post is inspired by an original blog post written by Peter Milchov.

Before jumping into the lab, let’s revisit some important facts associated with OSPF support.

  • NSX-T 3.1.1 supports OSPFv2 only.
Read More
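For reference, OSPF on a Tier-0 gateway can also be configured through the NSX-T Policy REST API instead of the UI. Below is a rough sketch using Python requests; the URL path and body fields are written from memory and should be verified against the NSX-T 3.1.1 API reference, and the manager address, credentials, and object IDs are lab placeholders.

```python
# Sketch: enable OSPF on a Tier-0 gateway via the NSX-T Policy REST API.
# The endpoint path and payload fields should be verified against the
# NSX-T 3.1.1 API reference; manager, credentials, and IDs are placeholders.
import requests

NSX_MANAGER = "nsxmgr.lab.local"
TIER0_ID = "T0-Gateway-01"
LOCALE_SERVICES_ID = "default"

url = (f"https://{NSX_MANAGER}/policy/api/v1/infra/tier-0s/"
       f"{TIER0_ID}/locale-services/{LOCALE_SERVICES_ID}/ospf")

body = {
    "enabled": True,
    "ecmp": True,
    "graceful_restart_mode": "HELPER_ONLY",
}

resp = requests.patch(url, json=body, auth=("admin", "VMware1!VMware1!"),
                      verify=False)  # lab only: skip certificate validation
resp.raise_for_status()
print("OSPF configuration patched on", TIER0_ID)
```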