Welcome to yet another troubleshooting post for TMC Self-Managed operation in VCD. In the last post, I discussed a TMC Self-Managed deployment issue and how I fixed it. In this post, I will discuss another issue that I encountered with the solution.
After successfully deploying TMC Self-Managed, you must publish the solution to tenants so that they can attach their TKG clusters to TMC. When the publishing operation is performed, the TMC Self-Managed Add-On solution creates a temporary VM known as the solution agent VM, which is destroyed once the task completes.
In my lab, the publishing task completed successfully for a couple of tenants, but when I later tried publishing the solution to another tenant, the task got stuck (VCD was acting cranky at that time).
This behavior is encountered when the solutions process in the VCD cell gets killed or there is a network interruption between the cell and the VCD public address while the operation is executing.
My previous blog post discussed the VCD Extension for Tanzu Mission Control and covered the end-to-end deployment steps. In this post, I will cover how to troubleshoot a stuck TMC Self-Managed deployment in VCD.
I was deploying TMC Self-Managed in a new environment, and during configuration I made a mistake by passing an incorrect value for the DNS zone, leading to a stuck deployment that did not terminate automatically. I waited a couple of hours for the task to fail, but it kept running, preventing me from reinstalling with the correct configuration.
The deployment was stalled in the Creating phase and did not fail.
When I checked the pods in the tmc-local namespace, many of them were stuck in either the "CreateContainerConfigError" or "CrashLoopBackOff" state.
root@jumpbox:~# kubectl get po -n tmc-local | grep CreateContainerConfigError
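To dig into why the pods are stuck, kubectl describe and the namespace events are usually the quickest route. The commands below are a minimal, illustrative sketch; the pod name is a placeholder, and the tmc-local namespace is taken from the output above.

# List every pod that is not Running or Completed
kubectl get pods -n tmc-local --field-selector=status.phase!=Running,status.phase!=Succeeded

# Describe a failing pod; CreateContainerConfigError usually points to a missing
# ConfigMap or Secret referenced by the pod spec
kubectl describe pod <pod-name> -n tmc-local

# Check recent events and the previous container logs for CrashLoopBackOff pods
kubectl get events -n tmc-local --sort-by=.lastTimestamp | tail -20
kubectl logs <pod-name> -n tmc-local --previous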
In VCD, when I checked the failed task "Execute global 'post-create' action", I found that the installer was complaining that the tmc package installation reconciliation had failed.
VMware Tanzu Mission Control is a centralized hub for simplified, multi-cloud, multi-cluster Kubernetes management. It helps platform teams take control of their Kubernetes clusters with visibility across environments by allowing users to group clusters and perform operations, such as applying policies, on these groupings.
VMware launched Tanzu Mission Control Self-Managed last year for customers running their Kubernetes (Tanzu) platform in an air-gapped environment. TMC Self-Managed is designed to support organizations that prefer to maintain complete control over their multi-cluster management hub for Kubernetes to take full advantage of advanced capabilities for cluster configuration, policy management, data protection, etc.
Welcome to the Tanzu Mission Control Self-Managed series. So far in this series, I have covered the installation prerequisites and how to configure them. After that, I demonstrated the TMC-SM installation procedure on the TKGm platform. If you are not following along, you can read the earlier posts in this series using the links below:
The installation procedure for TMC Self-Managed on a vSphere with Tanzu (aka TKGS) Kubernetes platform is a bit different, and this post focuses on covering the required steps. Let's get started.
I have used the following bill of materials (BOM) in my lab:
Software Components | Version
vSphere Namespace | 1.24.9
VMware vSphere ESXi | 7.0 U3n
VMware vCenter (VCSA) | 7.0 U3n
VMware vSAN | 7.0 U3n
NSX ALB | 22.1.3
Make sure the following are already configured in your environment before attempting the installation:
This is the sixth blog post of the TMC Self-Managed series. In the previous post of this series, I showed how to configure the final installation prerequisite (the Harbor registry). If you are following along with me, you are now ready for the installation.
If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:
This blog post is focused on installing TMC Self-Managed on Tanzu Kubernetes Grid multi-cloud (TKGm). I will cover the installation procedure for TKGS (vSphere with Tanzu) in a separate post.
I have used the following BOM in my lab:
Software Components | Version
Tanzu Kubernetes Grid | 2.1.0
VMware vSphere ESXi | 7.0 U3n
VMware vCenter (VCSA) | 7.0 U3n
VMware vSAN | 7.0 U3n
NSX Advanced LB | 22.1.3
Step 1: Connect to the workload cluster where TMC Self-Managed will be installed.
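As a quick illustration of this step on TKGm, you can pull the admin kubeconfig of the workload cluster with the Tanzu CLI and switch to its context. The cluster name below is only an example, and the context name follows the usual TKG admin-context convention.

# Retrieve the admin kubeconfig for the workload cluster (cluster name is an example)
tanzu cluster kubeconfig get tmc-sm-cluster --admin

# Switch to the cluster context and confirm the nodes are reachable
kubectl config use-context tmc-sm-cluster-admin@tmc-sm-cluster
kubectl get nodes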
Here we are at the fifth post of this blog series. In this post, I'll discuss how to download the TMC Self-Managed artifacts and prepare them for installation in a Harbor registry.
If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:
Tanzu/Kubernetes supports a wide variety of image registry solutions, including JFrog, Docker Hub, Amazon Elastic Container Registry, and VMware Harbor, for storing the application images that you deploy on the workload clusters. However, TMC Self-Managed only supports the Harbor image registry at the time of writing this post. The Harbor registry must meet the following requirements:
A minimum storage of 20 GB is recommended for Harbor.
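Once a Harbor project is ready, the TMC Self-Managed bundle is extracted and its images are pushed to the registry using the CLI shipped inside the bundle. The commands below are only an illustrative sketch: the bundle filename, Harbor FQDN, project name, and credentials are placeholders, and the exact flags should be verified against the release you downloaded.

# Extract the TMC Self-Managed bundle downloaded from VMware Customer Connect
# (filename is an example; use the version you actually downloaded)
mkdir tanzumc && tar -xf tmc-self-managed-1.0.0.tar -C ./tanzumc

# Push the bundled images to a Harbor project
# (FQDN, project name, and credentials are placeholders)
cd tanzumc
./tmc-sm push-images harbor --project harbor.example.com/tmc-sm --username admin --password 'MyHarborPassword'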
Welcome to Part 4 of the Tanzu Mission Control Self-Managed series. In this post, I'll show you how to use cert-manager and a ClusterIssuer for automatic certificate issuance.
If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:
TMC Self-Managed uses cert-manager for its certificates. In a lab or POC environment, you can use cert-manager with a self-signed ClusterIssuer to issue the installation certificates. In my lab, I have installed cert-manager as a Tanzu package on the workload cluster where TMC Self-Managed will be installed.
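For reference, a minimal self-signed ClusterIssuer for a lab setup can be created as shown below. The issuer name is only an example and should match whatever issuer name you reference later in the TMC Self-Managed configuration.

# Create a self-signed ClusterIssuer (the name "local-issuer" is an example)
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-issuer
spec:
  selfSigned: {}
EOF

# Verify that the issuer reports Ready
kubectl get clusterissuer local-issuer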
Welcome to Part 3 of the TMC Self-Managed series. Part 1 covered a general introduction to TMC Self-Managed, while Part 2 dived into the DNS configuration. If you missed the previous entries in this series, you can read them using the links below.
Tanzu Mission Control Self-Managed manages user authentication using Pinniped Supervisor as the identity broker and requires an existing OIDC-compliant identity provider (IDP). Examples of OIDC-compliant IDPs are Okta, Keycloak, VMware Workspace One, etc. The Pinniped Supervisor expects the Issuer URL, client ID, and client secret to integrate with your IDP.
Note: This post demonstrates configuring Okta as the IDP. Although Okta is a SaaS service and is reachable over the internet, the intent is to show how to configure an upstream IDP for authentication. In an air-gapped environment, you may use any IDP that doesn't require an internet connection.
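Before wiring the IDP details into the Pinniped Supervisor, it's worth confirming that the issuer URL you plan to use serves the standard OIDC discovery document; the Okta domain below is a placeholder.

# Query the OIDC discovery endpoint of the issuer (domain is a placeholder)
# The response should list the authorization, token, and JWKS endpoints
curl -s https://dev-12345678.okta.com/.well-known/openid-configuration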
I covered the basic introduction of TMC Self-Managed, the general architecture, and the requirements your environment needs to meet before installing TMC Self-Managed in the first post of this series. Before getting your hands dirty with the installation, I’ll cover configuring DNS Zones and records in this post.
To enable correct traffic flow and access to the various TMC endpoints, TMC Self-Managed needs a DNS zone to hold its DNS records. To ensure name resolution between the objects deployed during the TMC Self-Managed installation, create the following A records in your DNS domain. You can create a new DNS zone or leverage an existing one.
alertmanager.<my-tmc-dns-zone>
auth.<my-tmc-dns-zone>
blob.<my-tmc-dns-zone>
console.s3.<my-tmc-dns-zone>
gts-rest.<my-tmc-dns-zone>
gts.<my-tmc-dns-zone>
landing.<my-tmc-dns-zone>
pinniped-supervisor.<my-tmc-dns-zone>
prometheus.<my-tmc-dns-zone>
s3.<my-tmc-dns-zone>
tmc-local.s3.<my-tmc-dns-zone>
tmc.<my-tmc-dns-zone>
To simplify the installation procedure, you can also use a wildcard DNS entry as shown below (for POCs only).
Record Type | Record Name | Value
A | *.<my-tmc-dns-zone> | load balancer IP
A | <my-tmc-dns-zone> | load balancer IP
The IP address for the above records must point to the external IP of the contour-envoy service that is deployed during the installation.
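Because that external IP only exists once the installation has created the load balancer service, a common approach is to look it up after the contour-envoy service comes up and then confirm the records resolve to it. This is a rough sketch: the service name is taken from the note above, the tmc-local namespace is assumed, and the zone name is a placeholder.

# Find the external IP assigned to the Envoy load balancer service
# (namespace and exact service name may differ in your release)
kubectl get svc -n tmc-local | grep -i envoy

# Confirm the records resolve to that IP (zone name is a placeholder)
nslookup tmc.tmcsm.example.com
nslookup pinniped-supervisor.tmcsm.example.com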
VMware Tanzu Mission Control is a SaaS offering available through VMware Cloud Services and provides:
A centralized platform to deploy and manage Kubernetes clusters across multiple clouds.
The ability to attach existing Kubernetes clusters to the TMC portal for centralized operations and management.
A policy engine that automates access control and security policies across a fleet of clusters.
Security management across multiple clusters.
Centralized authentication and authorization, with federated identity from multiple sources.
TMC SaaS cannot be used in certain environments because of compliance or data governance requirements. Industries such as banking, healthcare, and defence often run workloads in an air-gapped environment (dark site). Imagine running a large number of Kubernetes clusters without any single pane of glass to manage day-1 & day-2 operations across them. VMware understood this pain and introduced Tanzu Mission Control Self-Managed (TMC-SM) as an installable product that you can deploy in your environment.
TMC Self-Managed can be installed in data centers, sovereign clouds, and service-provider environments.