Welcome to the Tanzu Mission Control Self-Managed series. So far in this series, I have covered the installation prerequisites and how to configure them, and then demonstrated the TMC-SM installation procedure on the TKGm platform. If you are not following along, you can read the earlier posts of this series using the links below:
1: TMC Self-Managed – Introduction & Architecture
2: Configure DNS for TMC Self-Managed
3: Configure OIDC Compliant Identity Provider (Okta)
4: Install Cluster Issuer for TLS Certificates
6: Install Tanzu Mission Control Self-Managed on TKGm
The installation procedure for TMC Self-Managed on a vSphere with Tanzu (aka TKGS) Kubernetes platform is a bit different and this post is focused on covering the required steps. Let’s get started.
I have used the following bill of materials (BOM) in my lab:
| Software Components | Version |
| --- | --- |
| vSphere Namespace | 1.24.9 |
| VMware vSphere ESXi | 7.0 U3n |
| VMware vCenter (VCSA) | 7.0 U3n |
| VMware vSAN | 7.0 U3n |
| NSX ALB | 22.1.3 |
Make sure the following are already configured in your environment before attempting the installation:
1: DNS is configured.
2: OIDC-compliant IDP is configured.
3: Networking and NSX Advanced Load Balancer are configured as per the instructions outlined in the TKGS product documentation.
4: WCP is enabled as per the instructions outlined in the product documentation.
After WCP is enabled, download the kubectl utility by following the instructions outlined on the Download and Install the Kubernetes CLI Tools page.
Step 1: Use an External Harbor Registry with Tanzu Kubernetes Clusters
In an air-gapped environment, a private Harbor registry is needed to host the TMC Self-Managed installation binaries. If the Harbor registry uses a self-signed certificate, the Tanzu Kubernetes Cluster where you deploy TMC-SM must trust the registry’s self-signed certificate. This is configured by modifying the “tkgserviceconfigurations” resource in the Supervisor Cluster context.
1.1: Connect to the supervisor cluster using kubectl
# kubectl vsphere login --vsphere-username administrator@vsphere.local --server=<control plane vip> --insecure-skip-tls-verify=true
1.2: List tkgserviceconfigurations
# kubectl get tkgserviceconfigurations
NAME                        DEFAULT CNI
tkg-service-configuration   antrea
1.3: Obtain tkgserviceconfigurations yaml
# kubectl get tkgserviceconfigurations tkg-service-configuration -o yaml > tkgsvcconfig.yaml
Edit the yaml file you obtained and add a ‘trust’ section with the additionalTrustedCAs field. This lets you define any number of self-signed certificates that Tanzu Kubernetes Clusters should trust.
An example for inserting CA certificates is shown below:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  trust:
    additionalTrustedCAs:
      - name: first-cert-name
        data: base64-encoded string of a PEM encoded public cert 1
      - name: second-cert-name
        data: base64-encoded string of a PEM encoded public cert 2
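The data field expects the CA certificate as a single-line base64 string, not the raw PEM text. Assuming the registry CA is saved locally as ca.crt (a hypothetical filename), it can be encoded on a Linux jump host like this:

```shell
# Encode the PEM CA certificate as a single-line base64 string for the
# additionalTrustedCAs 'data' field.
# Note: -w 0 (disable line wrapping) is a GNU coreutils option; on macOS
# use 'base64 -i ca.crt' instead.
base64 -w 0 ca.crt > ca.b64
cat ca.b64
```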
1.4: Update tkgserviceconfiguration.
# kubectl apply -f tkgsvcconfig.yaml
Step 2: Create a vSphere Namespace for hosting Service TKC
As a best practice, it is advised to separate the service TKC from the workload TKCs. You can create a dedicated namespace for the TKC where TMC-SM will be installed. To create a vSphere Namespace, follow the instructions outlined in the Configuring and Managing vSphere Namespaces documentation.
Step 3: Deploy a new Service TKC
3.1: Create a yaml file for the TKC creation. An example yaml is shown below for reference.
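A minimal sketch of such a manifest is shown here, using the v1alpha2 TanzuKubernetesCluster API. The cluster name, namespace, vmClass, and storageClass values are hypothetical placeholders; adjust them to the VM classes and storage policies assigned to your vSphere Namespace. The replica counts and TKR name match the svctkc output shown in step 3.3.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: svctkc
  namespace: tmc-sm                                # placeholder: namespace created in Step 2
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large                   # placeholder: pick a VM class from your namespace
      storageClass: vsan-default-storage-policy    # placeholder: pick an assigned storage policy
      tkr:
        reference:
          name: v1.23.8---vmware.3-tkg.1
    nodePools:
      - name: workers
        replicas: 5
        vmClass: best-effort-large                 # placeholder
        storageClass: vsan-default-storage-policy  # placeholder
        tkr:
          reference:
            name: v1.23.8---vmware.3-tkg.1
```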
3.2: Create TKC
# kubectl apply -f svctkc.yaml
3.3: Validate that the TKC has been successfully created
# kubectl get tkc
NAME     CONTROL PLANE   WORKER   TKR NAME                   AGE   READY   TKR COMPATIBLE
svctkc   3               5        v1.23.8---vmware.3-tkg.1   45d   True    True
Step 4: Prepare SVC TKC to run Tanzu Packages
To install packages on any workload cluster created in vSphere with Tanzu, you first have to configure a package repository. In an air-gapped environment, the package repository is hosted in the internal Harbor registry, so you must relocate the Tanzu packages to your internal repository.
To relocate the Tanzu packages, follow the instructions outlined in the Preparing an Internet-Restricted Environment documentation.
4.1: Connect to the Service TKC
# kubectl vsphere login --vsphere-username=administrator@vsphere.local \
  --server=<supervisor vip> --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-name=<svc tkc name> \
  --tanzu-kubernetes-cluster-namespace <svc tkc namespace>
# kubectl config use-context <svc tkc name>
4.2: Create a cluster role binding
# kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated
4.3: Install Kapp-controller
4.3.1: Create a file tanzu-system-kapp-ctrl-restricted.yaml containing the Kapp Controller Pod Security Policy
4.3.2: Apply the tanzu-system-kapp-ctrl-restricted.yaml file to the svc tkc.
# kubectl apply -f tanzu-system-kapp-ctrl-restricted.yaml
4.3.3: Create a file kapp-controller.yaml containing the Kapp Controller Manifest
4.3.4: Apply the kapp-controller.yaml file to the svc tkc.
# kubectl apply -f kapp-controller.yaml
4.3.5: Modify kapp ConfigMap
By default, kapp-controller doesn’t trust self-signed certificates, which causes issues when installing Tanzu packages from an internal repository. To fix this, edit the kapp-controller ConfigMap and insert your image registry’s self-signed certificate.
Create a yaml as shown below and insert your registry certificate under the caCerts section.
# vim kapp-cm.yaml
apiVersion: v1
data:
  caCerts: |
    -----BEGIN CERTIFICATE-----
    Certificate content
    -----END CERTIFICATE-----
  dangerousSkipTLSVerify: ""
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
kind: ConfigMap
metadata:
  name: kapp-controller-config
  namespace: tkg-system
4.3.6: Apply the yaml
# kubectl apply -f kapp-cm.yaml
4.3.7: Restart kapp-controller pod
# kubectl delete pod <kapp ctlr pod name> -n tkg-system
4.4: Configure Package Repository
4.4.1: Add tanzu package repository
# tanzu package repository add REPOSITORY-NAME -n tanzu-package-repo-global --url PACKAGE-REGISTRY-ENDPOINT
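Under the hood, the tanzu CLI creates a Carvel PackageRepository custom resource, so the same repository can also be declared in yaml. A sketch is shown below; the repository name and the image URL are hypothetical placeholders for the bundle you relocated to your internal Harbor registry.

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-standard                    # placeholder repository name
  namespace: tanzu-package-repo-global
spec:
  fetch:
    imgpkgBundle:
      # placeholder: path and tag of the repo bundle in your internal registry
      image: harbor.example.com/tkg/packages/standard/repo:v1.6.0
```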
4.4.2: Verify that the repository was installed
# tanzu package repository list -n tanzu-package-repo-global
Step 5: Install and Configure cert-manager
Install cert-manager in the service TKC by following the instructions outlined in the Install cert-manager section of the product documentation.
Step 6: Configure cluster issuer for TLS certificates
This step is covered in the article Set up a cluster issuer for TLS certificates.
Step 7: Add TMC Self-Managed artifacts to the Harbor Registry
This is covered in Step 5 of the article Prepare Harbor Registry.
Step 8: Stage the TMC Self-Managed Package on the Workload Cluster
This is covered in Step 3 of the article Install TMC Self-Managed.
Step 9: Create TMC Self-Managed Configuration File
This is covered in Step 5 of the article Install TMC Self-Managed.
Important: The only difference is in the values.yaml file: there is no serviceAnnotation for contourEnvoy, as AKO doesn’t run as a pod in workload clusters created on the TKGS platform.
Step 10: Install TMC Self-Managed in Service TKC
This is already covered in Step 6 of the previous article, Deploy the Tanzu Mission Control Self-Managed stack.
And that’s it for this post. In the next post, I will demonstrate how to consume the TMC Self-managed stack for day-1/day-2 operations.
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.