In the last post of this series, I demonstrated the deployment of the supervisor cluster, which acts as the base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.
This series of blogs will cover the following topics:
1: HA Proxy Deployment & Configuration
3: How to Deploy Applications in TKG Cluster
Let’s get started.
Create Namespace
A namespace enables a vSphere administrator to control the resources that are available for a developer to provision TKG clusters. Using namespaces, vSphere administrators prevent developers from consuming more resources than are assigned to them.
To create a namespace, navigate to the Menu > Workload Management and click on Create Namespace.
- Select the cluster where the namespace will be created and provide a name for the namespace.
- Also, select the workload network for your namespace.
Once the namespace is created, we need to assign resource limits/quotas to it.
First, we have to assign permissions for the namespace. If you have AD/LDAP integrated with vCenter and have created users/groups in AD/LDAP, then you can control which user/group will have access to this namespace.
Since this is a lab deployment, I provided access to the default SSO administrator.
To configure resource (compute & storage) limits, select the namespace, click on the Configure tab, and go to the Resource Limits page.
Click on the Edit button and specify the limits.
Download Kubernetes CLI Tools
The Kubernetes CLI Tools help you configure & manage TKG clusters. To download the CLI tools, connect to https://<control-plane-lb-ip>/.
This is the load balancer IP that the supervisor cluster gets post-deployment.
You can download the tools for Linux/Windows and Mac OS by altering the OS selection as shown in the below screenshot.
In my lab, I downloaded the tools for Windows and added the tools' bin directory to the PATH environment variable so that I can run the kubectl command from anywhere in the OS.
Validate Control Plane (Supervisor Cluster)
Before attempting to deploy TKG clusters, it’s important that we verify important details of the deployed control plane.
1: Connect to the namespace context.
kubectl vsphere login --server <control-plane-lb-ip> --insecure-skip-tls-verify
Note: When prompted for credentials, provide the credentials of the user who was assigned permission on the namespace.
2: Switch context to your namespace
kubectl config use-context <your-namespace-context>
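If you are unsure of the exact context name, kubectl config get-contexts lists every context in your kubeconfig. The names can be pulled out with a small helper like the one below — a sketch of my own (the function name list_contexts is hypothetical), assuming the default columnar output of kubectl config get-contexts:

```shell
# list_contexts: read `kubectl config get-contexts` output on stdin and print
# only the context names, skipping the header row and the CURRENT (*) marker.
list_contexts() {
  awk 'NR > 1 { if ($1 == "*") print $2; else print $1 }'
}
```

With a live supervisor cluster, kubectl config get-contexts | list_contexts would print one context name per line.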
3: Get Control Plane Info
Before deploying the TKG cluster, ensure that the control plane nodes are in a Ready state and that the storage policy we specified during workload enablement appears as a storage class.
kubectl get nodes
kubectl get storageclasses
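To avoid eyeballing the node list, the Ready check can be scripted. The helper below is a sketch of my own (the function name check_ready is hypothetical), assuming the STATUS column is the second field of the default kubectl get nodes output:

```shell
# check_ready: read `kubectl get nodes` output on stdin and exit non-zero
# if any node is not in the Ready state (STATUS is the second column).
check_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1; print "Not Ready: " $1 }
       END { exit bad }'
}
```

Against a live cluster, kubectl get nodes | check_ready would exit cleanly only when every control plane node is Ready.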
Also, ensure that virtual machine images that were imported in the subscribed content library are fully synchronized.
kubectl get virtualmachineimages
Deploy & Configure TKG Cluster
Deployment of the TKG cluster is done via a manifest file in YAML format. The manifest file contains the following information:
- The number of control plane nodes.
- The number of worker nodes.
- The size (VM class) of the control plane and worker nodes.
- Which storage class to use.
- VM image to be used for the nodes.
Once the manifest file is populated with the above info, we can invoke the kubectl command to deploy the TKG cluster.
In my lab, I deployed a sample TKG cluster with 3 control plane nodes and 3 worker nodes using the below manifest file:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-1
  namespace: mj-tkg-cl01
spec:
  distribution:
    version: 1.18.15
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
Command to deploy the TKG cluster: kubectl apply -f <manifest-file-name>
TKG deployment takes roughly 20-30 minutes, and during deployment you will see the control plane and worker nodes getting deployed & configured.
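As a side note, the same manifest shape scales down for smaller environments. The fragment below is a hypothetical single-node variant of my own, not part of the original deployment: the cluster name is invented, and best-effort-small is assumed to be available in your environment (the VM classes offered can be listed with kubectl get virtualmachineclasses).

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-dev-cluster                  # hypothetical name
  namespace: mj-tkg-cl01
spec:
  distribution:
    version: 1.18.15
  topology:
    controlPlane:
      count: 1                            # single control plane node for a lab
      class: best-effort-small            # assumed VM class
      storageClass: vsan-default-storage-policy
    workers:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```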
Monitor TKG Deployment
The below commands can be used to monitor a TKG deployment and verify that the cluster is up and ready to service workloads.
PS C:\Users> kubectl get cluster
NAME             PHASE
tkgs-cluster-1   Provisioned

PS C:\Users> kubectl get nodes
NAME                               STATUS   ROLES    AGE   VERSION
4208beadb68d9aa8bf76db1aa2ebdcf5   Ready    master   13d   v1.18.2-6+38ac483e736488
4208d6293b53ecbc3000ef077ec40392   Ready    master   13d   v1.18.2-6+38ac483e736488
4208e91269098fa896aa5ba1e4a2e2bc   Ready    master   13d   v1.18.2-6+38ac483e736488

PS C:\Users> kubectl get tanzukubernetescluster
NAME             CONTROL PLANE   WORKER   DISTRIBUTION                      AGE   PHASE
tkgs-cluster-1   3               3        v1.18.15+vmware.1-tkg.1.600e412   7d    running
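Rather than re-running kubectl get tanzukubernetescluster by hand, the PHASE column can be checked programmatically. The helper below is a sketch of my own (the function name cluster_running is hypothetical); it assumes the phase is the last column of the default output, as in the session above:

```shell
# cluster_running: read `kubectl get tanzukubernetescluster` output on stdin
# and succeed only when the named cluster's PHASE (last column) is "running".
cluster_running() {
  awk -v name="$1" 'NR > 1 && $1 == name && $NF == "running" { found = 1 }
                    END { exit !found }'
}
```

This could drive a simple wait loop, e.g. until kubectl get tanzukubernetescluster | cluster_running tkgs-cluster-1; do sleep 30; done.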
The kubectl describe tanzukubernetescluster command provides a lot of info about a TKG cluster, such as Node/VM status, Cluster API Endpoint, Pods CIDR block, etc.
And that’s it for this post. In future posts of this series, I will demonstrate how vSphere with Tanzu works with NSX-T and NSX Advanced Load Balancer.
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂