Tanzu Mission Control Part 2: Manage Kubernetes Clusters From TMC

In the first post of this blog series, I talked about the Tanzu Mission Control solution and the benefits of using it. I also talked about the architecture and components of TMC. Now it’s time to see TMC in action. 

One of the core features of TMC is Kubernetes cluster lifecycle management, and in this post, I will walk through the steps of creating and managing Kubernetes clusters from the TMC portal. Let’s get started.

TMC Login

To use the TMC solution, you must have a subscription to the Tanzu Mission Control cloud service. You can access the TMC portal by logging into your VMware Cloud Services portal and clicking on the VMware Tanzu Mission Control service tile.

By default, you will land on the Clusters view, from where you can create new Kubernetes clusters or attach existing ones to the TMC portal.

Note: For the purpose of this demonstration, I will be talking only about TKGm and TKGS clusters in this blog post.

But before you can create or attach your Kubernetes clusters here, you need to have the following created in advance:

  • Cluster Groups
  • Workspaces
  • Policies
  • Provisioner

Attaching TKG Clusters

In a TKGm deployment, the first cluster that you deploy is the Management Cluster, followed by one or more Tanzu Kubernetes Clusters (workload clusters).

Registering TKG Management Cluster

Before a workload cluster can be registered in the TMC portal, we need to register the Management Cluster first. Once the management cluster is attached, all associated workload clusters can then be registered with TMC.

To register the management cluster, navigate to Administration > Management Clusters > Register Management Cluster > Tanzu Kubernetes Grid.

Specify the name for the cluster that will appear in the TMC portal and select the cluster group in which you want to place the management cluster. 

Optionally, you can provide labels and a description for your cluster.

Clicking Next generates a command that you need to execute against your Management Cluster.
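The generated command is essentially a kubectl apply of a registration manifest hosted by your TMC organization. The context name and URL below are placeholders, not real values; copy the actual command from the portal:

```shell
# Switch to the management cluster's admin context (name is a placeholder).
kubectl config use-context tkg-mgmt-admin@tkg-mgmt

# The real URL is generated by TMC; paste the command copied from the portal.
kubectl apply -f "https://<your-org>.tmc.cloud.vmware.com/installer?id=<token>&source=registration"
```

This requires network reachability from the management cluster to the TMC service endpoint.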

On executing the command, a namespace called ‘vmware-system-tmc’ is created in the management cluster, along with a ConfigMap and a Secret. Custom Resource Definitions (CRDs), RoleBindings, Services, Deployments, etc. are also created during the registration process. All of this is needed to enable communication between your TKG cluster and TMC.

A few pods are also created under the TMC namespace.
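If you want to inspect what the registration created, a quick sketch from the CLI (assuming your kubectl context points at the management cluster):

```shell
# The namespace created by the registration command.
kubectl get ns vmware-system-tmc

# The TMC agent pods running in that namespace.
kubectl get pods -n vmware-system-tmc

# CRDs installed by the TMC agents (exact names vary by agent version).
kubectl get crds | grep tmc.cloud.vmware.com
```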

Switch back to the TMC portal and click on Verify Connection; once the connection is established, your cluster will be registered in the portal.

Clicking the View Cluster button takes you to the cluster page, where you can see its details.

Note: It takes a while to fetch the health status of the components and the agent extensions. 

Once the management cluster is registered, you can proceed to register the workload cluster that you have deployed. 

Registering TKG Workload Cluster

The process of registering a workload cluster is pretty much the same as for the Management Cluster, and registration can be initiated by navigating to Clusters > Attach Cluster.

Provide a name for your cluster and place it in an appropriate cluster group. Optionally, specify labels that can help you identify the workload clusters that you own.

Copy the install agent command and execute it against your workload cluster.
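Note that the install agent command must run against the workload cluster, not the management cluster, so check your kubectl context first. The context name below is a placeholder:

```shell
# List available contexts and switch to the workload cluster (placeholder name).
kubectl config get-contexts
kubectl config use-context tkg-workload-01-admin@tkg-workload-01

# Then paste the install agent command copied from the Attach Cluster wizard, e.g.:
# kubectl apply -f "<URL copied from the TMC portal>"
```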

On successful registration, the workload cluster appears under the Clusters list. 

Clicking on the cluster takes you to the overview page, where various details about the cluster are displayed.

The Nodes tab provides information about the running Kubernetes nodes (control plane and worker). It also displays the kubelet version installed on each node.
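You can cross-check the same information from the CLI; the wide output of `kubectl get nodes` includes node roles and the kubelet version (this assumes access to the workload cluster):

```shell
# Node roles, kubelet version, internal IPs, OS image, and container runtime.
kubectl get nodes -o wide
```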

The Namespaces tab lists all namespaces in your workload cluster. You can also filter the list by hiding the Tanzu and System namespaces.

You can also create new namespaces for app deployments directly from the TMC portal.

The Workloads tab displays the applications that are currently deployed and running in the workload cluster.

I have deployed an application named Yelb in my environment, and the same is displayed in the screenshot below. 

Creating TKG Clusters

Once your TKG Management Cluster is registered with TMC, you can create additional workload clusters directly from the TMC portal. The cluster creation experience is straightforward and saves you the pain of hand-crafting and formatting YAML files and running CLI commands for workload cluster creation.

To deploy a new workload cluster, click on the Create Cluster button.

Select the management cluster that you have registered earlier. 

Select the provisioner that will be used to deploy the workload cluster. I will talk about provisioners in a later post in this series.

Provide a name for the workload cluster and associate it with a cluster group. Again, labels and a description are optional but good to have specified on your clusters.

On the configure page, specify the following:

  • Datacenter: Select the virtual datacenter where the workload cluster will be deployed. TMC auto-discovers the backend vSphere inventory when the TKG management cluster is registered with TMC. 
  • Kubernetes Version: The Kubernetes version to deploy; self-explanatory. 
  • Network: The vCenter port group to which the workload cluster's Kubernetes VMs will be attached. 
  • SSH Public Key: The public key that will be used to connect to the workload cluster nodes. For a TKGm deployment, this is the key that you generate on your bootstrap VM. 
  • Pod & Service CIDR: You can leave the default values or change them. Make sure the subnets you use here are not in use anywhere else in your infrastructure.
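A quick way to check that a proposed Pod or Service CIDR does not overlap an existing subnet is to compare the ranges with Python's ipaddress module from the shell. The values below are examples: 100.96.0.0/11 is the TKGm default Pod CIDR, and 10.20.0.0/16 stands in for an existing infrastructure subnet; substitute your own.

```shell
# Example values; replace with your proposed CIDR and your real subnets.
pod_cidr="100.96.0.0/11"
existing_subnet="10.20.0.0/16"

python3 -c "
import ipaddress
a = ipaddress.ip_network('$pod_cidr')
b = ipaddress.ip_network('$existing_subnet')
print('overlap' if a.overlaps(b) else 'no overlap')
"
# → no overlap
```

Repeat the check for the Service CIDR and for every routed subnet in your environment.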

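The SSH public key mentioned above can be generated on the bootstrap machine with ssh-keygen; the file path and comment below are just examples:

```shell
# Generate a 4096-bit RSA key pair with no passphrase (path/comment are examples).
ssh-keygen -t rsa -b 4096 -f ./tkg-ssh-key -N "" -C "tkg-admin"

# The contents of the .pub file are what you paste into the TMC wizard.
cat ./tkg-ssh-key.pub
```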
Select the Resource Pool, VM folder, and datastore where you want to place the workload cluster's Kubernetes VMs. As a best practice, it is recommended to segregate the management cluster from the workload clusters.

  • Select the number of control plane and worker nodes that will be deployed in the workload cluster. The Prod plan deploys multiple control plane nodes to provide high availability for the cluster. 
  • Specify the size of the control plane by selecting the appropriate instance type.
  • Specify the control plane endpoint IP for the workload cluster. 

The node pool settings relate to the worker nodes that will be deployed. You can customize them to suit your infrastructure.

Click on the Create Cluster button to start deploying your workload cluster. 

Cluster deployment takes some time. For me, it took roughly 20 minutes to deploy the cluster and fetch its health status.

Once the deployment is completed, you can view the details about the cluster. 

You can follow the same procedure to create or attach TKGS (vSphere with Tanzu) clusters in TMC.

That’s it for this post. In the next post of this series, I will be talking about lifecycle management of Kubernetes clusters using TMC. Stay tuned!

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.
