VCD Container Service Extension Series-Part 2: CSE Server Installation

In the first post of this series, I talked about the high-level architecture of the CSE infrastructure and discussed the various components that make up the CSE platform. In this post, I will walk through the steps of installing & configuring the CSE server.

CSE Installation Prerequisites

Before starting with the CSE server installation, make sure the following requirements are met:

1: VCD installed & configured: For a lab/POC environment, a single-node VCD installation is sufficient. For a production environment, three or more nodes (configured behind a load balancer) are recommended.

2: Organization & Catalog for CSE: A dedicated org should be created in VCD for CSE consumption. This org should have a routed org network with outbound connectivity to the internet, and a catalog created in advance. This catalog holds the Kubernetes-ready vApp templates and will be shared with tenants for consumption.

3: AMQP broker configured in VCD: To extend the VCD public API, an AMQP broker needs to be configured beforehand. Read More
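Before running the CSE installer, it is worth confirming that the AMQP broker registered in VCD is reachable from the machine that will host the CSE server. Below is a minimal connectivity check using the Python pika library; the broker hostname, credentials, and exchange name are placeholders for illustration and should match whatever you plan to put in the CSE config file.

    # Quick AMQP connectivity check; host, credentials and exchange are placeholders
    import pika

    credentials = pika.PlainCredentials("cse-amqp-user", "changeme")
    params = pika.ConnectionParameters(
        host="amqp.corp.local",    # AMQP broker registered in VCD
        port=5672,
        virtual_host="/",
        credentials=credentials,
    )

    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    # Declare (or verify) the exchange that the VCD API extension will publish to
    channel.exchange_declare(exchange="cse-exchange", exchange_type="topic", durable=True)
    print("AMQP broker reachable and exchange declared")
    connection.close()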

VCD (10.5) Service Crashing Continuously in CSE Environment

After updating my lab’s Container Service Extension to version 4.2.0, I observed that the VMware VCD service was frequently crashing. Restarting the cell service did not help much, as the VCD user interface (UI) died again after five minutes. The cell.log was throwing the exception below.

You will find similar log entries in the cell-runtime.log file.

Read More

HCX Service Mesh Operations via API

Before deploying a Service Mesh, we need to create Compute & Network Profiles in both the source & destination HCX environments. The order of Service Mesh deployment is as follows:

  1. Create Network Profiles in source & destination HCX.
  2. Create Compute Profiles in source & destination HCX.
  3. Trigger Service Mesh deployment from on-prem (source) site.

In this post I will demonstrate HCX Service Mesh operations via API.

Some of the most common operations associated with a service mesh are:

  1. Create profiles (Network & Compute) and deploy service mesh.
  2. Update Network & Compute profiles to include/remove additional features.
  3. Delete Network & Compute profiles.
  4. Update Service Mesh to include/remove additional services.
  5. Delete Service Mesh. 

Let’s jump into the lab and look at these operations in action.
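Every call in the examples below first needs an authenticated session with the HCX manager. The snippet below is a rough sketch in Python using the requests library; the manager FQDN and credentials are placeholders, and the /hybridity/api/sessions endpoint and x-hm-authorization response header are based on my lab, so verify them against the HCX API reference for your version.

    # Obtain an HCX API session token (FQDN and credentials are placeholders)
    import requests

    HCX_MGR = "https://hcx-enterprise.corp.local"   # on-prem (source) HCX manager

    resp = requests.post(
        f"{HCX_MGR}/hybridity/api/sessions",
        json={"username": "administrator@vsphere.local", "password": "changeme"},
        verify=False,   # lab only; use trusted certificates in production
    )
    resp.raise_for_status()

    # Subsequent calls reuse this token via the x-hm-authorization header
    headers = {
        "x-hm-authorization": resp.headers["x-hm-authorization"],
        "Accept": "application/json",
    }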

Network Profile API

1: List Network Profiles: This API call lists all the network profiles that you have created for use in a service mesh.

Note: Here, objectId is the ID of the individual networks participating in the network profile.
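As a rough illustration, the listing call can be made with the session headers from the earlier sketch. The networkProfiles path and the response fields shown here (an items list with name and objectId) are assumptions from my lab and may differ between HCX releases, so treat them as placeholders.

    # List network profiles (path and response fields are assumptions; confirm in your HCX API reference)
    profiles = requests.get(
        f"{HCX_MGR}/hybridity/api/networkProfiles",
        headers=headers,
        verify=False,
    ).json()

    for profile in profiles.get("items", []):
        # objectId identifies the underlying network backing this profile
        print(profile.get("name"), profile.get("objectId"))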

2: List Specific Network Profile: This API call lists a specific network profile. Read More

vRealize Automation- Creating a Service

The self-service catalog is a new way of managing catalog items. Service categories organize catalog items into related offerings, enabling end users to browse the catalog items they need more easily and conveniently.

In vRA, blueprints are published, which enables them to be assigned to users and groups through the catalog management components of the vRA Service Catalog. In earlier versions of vCAC, blueprints were assigned to groups within the blueprint itself.

To make blueprints available in the catalog, we first need a service that we can publish them to. Services are the containers that hold the actual catalog items that can be requested.

We must have at least one service in the environment to publish our catalog items to.

If you have missed earlier posts of this series then you can access the same by clicking on below links:

1: Introduction to vCAC(vRA)

2: Installing and Configuring vRA Identity Appliance

3: Installing and Configuring vRA Appliance

4: Installing and configuring IaaS Components

5: Creating Tenants

6: Adding vSphere Endpoints

7: Creating and Configuring Fabric Groups

8: Creating Business Groups and Reservation

9: Creating and Publishing Blueprints

In this post, we will create a new service for the catalog. Read More

Install & Configure VMware Cloud Director Extension for VMware Data Solutions 1.3

What is VCD Data Solution?

The VMware Cloud Director Extension for VMware Data Solutions is a plug-in for VCD that allows cloud providers to offer on-demand caching, messaging, and database software services at scale and thus expand their multi-tenant cloud infrastructure platform. The VCD Data Solutions include services such as VMware SQL with MySQL, VMware SQL with PostgreSQL, and RabbitMQ.

These services are deployed on top of the Kubernetes clusters deployed using Container Service Extension. Tenants can install Grafana and Prometheus in their Kubernetes clusters to perform data analytics, monitor a service’s health, and take action if an issue occurs.

In this post, I will walk through the steps of installing & configuring the VCD Data Solutions Extension version 1.3.

How does the Data Solutions Extension work?

The Data Solutions Extension works in conjunction with Container Service Extension 4.0 or later. It enables providers to publish data and messaging services to their tenants, who can then use them to build new or update current applications. Read More

Simplify Your Application Deployments with VCD Content Hub

Introduction

Over the last few years, VCD has evolved into a true developer-ready cloud. To start with, VCD enabled service providers to offer multi-tenant/multi-cluster Kubernetes-as-a-Service through Container Service Extension, and more recently enabled integration with Tanzu Mission Control to simplify Kubernetes management and visibility across environments through a single pane of glass.

Software as a Service (SaaS) has emerged as a game-changer, offering a flexible and scalable approach to software delivery that aligns perfectly with the demands of modern businesses. To cater to this need, VCD integrates with the App Launchpad service that offers a self-service portal to tenants to deploy and manage their applications easily. It allows users to deploy and manage applications on top of the infrastructure provisioned through the VCD portal and provides a user-friendly interface for application provisioning. 

The main challenge with App Launchpad was the need for administrators to handle catalog items individually, resulting in increased overhead. Read More

TKG Cluster Deployment Gotchas with Node Health Check in CSE 4.2

Recently, I upgraded Container Service Extension to 4.2.0 in my lab and was trying to deploy a TKG 2.4.0 cluster with node health check enabled. The deployment got stuck after deploying one control plane node and one worker node, and the cluster went into an error state.

Clicking on the Events tab showed the following error:

I checked the CSE log file and the capvcd logs on the ephemeral VM (before it got deleted) and found no error that made sense to me.

I contacted CSE Engineering to discuss this issue and opened a bug for further analysis of the logs.

Root Cause

CSE Engineering debugged the logs and found that it was a bug in this version of the product. Here is the summary of the analysis done by Engineering.

Read More

How to Integrate TMC Self-Managed 1.0 with VCD

Introduction

VMware Tanzu Mission Control is a centralized hub for simplified, multi-cloud, multi-cluster Kubernetes management. It helps platform teams take control of their Kubernetes clusters with visibility across environments by allowing users to group clusters and perform operations, such as applying policies, on these groupings.

VMware launched Tanzu Mission Control Self-Managed last year for customers running their Kubernetes (Tanzu) platform in an air-gapped environment. TMC Self-Managed is designed to support organizations that prefer to maintain complete control over their multi-cluster management hub for Kubernetes to take full advantage of advanced capabilities for cluster configuration, policy management, data protection, etc.

The first couple of releases of TMC Self-Managed only supported TKG clusters that were running on vSphere. Last month, VMware announced the release of the VMware Cloud Director Extension for Tanzu Mission Control, which allows installing TMC Self-Managed in a VCD environment to manage TKG clusters deployed through the VCD Container Service Extension (CSE). Read More

Unable to delete TKGm clusters in VCD

I encountered an issue while playing with Container Service Extension 3.1.1 in my lab where I was unable to create TKGm clusters. During troubleshooting, I discovered that the Rights Bundle “cse:nativeCluster Entitlement” was missing certain critical rights that were newly added in CSE 3.1.1.

On attempting to delete the failed clusters, the clusters got stuck in the state “DELETE:IN_PROGRESS”.

On attempting to delete the failed cluster via vcd-cli, the operation failed with the error “RDE_ENTITY_NOT_RESOLVED”.
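For reference, the state of the cluster’s backing runtime defined entity (RDE) can be inspected directly through the VCD CloudAPI; this is what the RDE_ENTITY_NOT_RESOLVED error refers to. The sketch below is illustrative only: the VCD FQDN, bearer token, and entity ID are placeholders, and it assumes the /cloudapi/1.0.0/entities/{id} endpoint available in recent VCD versions.

    # Inspect the runtime defined entity backing a stuck cluster (all values are placeholders)
    import requests

    VCD = "https://vcd.corp.local"
    TOKEN = "<bearer-token>"   # obtained from the VCD sessions API
    ENTITY_ID = "urn:vcloud:entity:cse:nativeCluster:<uuid>"   # the cluster's RDE id

    rde = requests.get(
        f"{VCD}/cloudapi/1.0.0/entities/{ENTITY_ID}",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
        verify=False,
    ).json()

    # A cluster RDE is normally in the RESOLVED state; a PRE_CREATED or RESOLUTION_ERROR
    # state lines up with the RDE_ENTITY_NOT_RESOLVED failure seen above
    print(rde.get("name"), rde.get("state"))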

Read More

Tanzu Kubernetes Grid multi-cloud Integration with VCD – Greenfield Installation

With the release of Container Service Extension 3.0.3, service providers can integrate Tanzu Kubernetes Grid multi-cloud (TKGm) with VCD to offer Kubernetes as a Service to their tenants. TKGm integration, in addition to the existing support for native K8s and vSphere with Tanzu (TKGS), has truly transformed VCD into a developer-ready cloud.

With Tanzu Basic (TKGm & TKGS) on VCD, tenants have a choice of deploying K8s in three different ways:

  • TKGS: K8s deployment on vSphere 7, which requires the vSphere Pod Service
  • TKGm: Multi-tenant K8s deployments that do not need the vSphere Pod Service
  • Native K8s: Community-supported Kubernetes on VCD with CSE

By offering multi-tenant managed Kubernetes services with Tanzu Basic and VCD, cloud providers can attract developer workloads, starting with test/dev environments, to their cloud. Once developers have grown confident in the K8s solution, application owners can leverage VCD-powered clouds to quickly deploy test/dev K8s clusters on-premises and accelerate their cloud-native app development and transition to production environments. Read More