Native Kubernetes in VCD using Container Service Extension 3.0

Introduction

VMware Container Service Extension (CSE) is an extension to Cloud Director that enables VCD cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE integration with VCD allows CSPs to provide a true developer-ready cloud offering, and tenants can deploy Kubernetes clusters in just a few clicks directly from the VCD portal.

Cloud Providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Once a cluster is available, developers can use their native Kubernetes tooling to interact with it.

To know more about the architecture and interaction of CSE components, please see my previous blog on this topic.

Container Service Extension 3.x went GA earlier this year and brought several new features and enhancements. One of them is support for Tanzu Kubernetes Grid multi-cloud (TKGm) deployments, unlocking the full potential of consistent upstream Kubernetes in VCD-powered clouds.

To know more about what’s new in CSE 3.x, please see the VMware official announcement blog.

In this post, I will only demonstrate the steps for enabling native Kubernetes in VCD. I plan to publish a separate blog on TKGm integration in VCD.

Deploy CSE Server

CSE Server can be deployed on any Linux OS with the CSE Python module and VCD CLI installed on it.

I have deployed a CentOS 7 VM with 1 vCPU, 6 GB RAM, and 100 GB storage for CSE Server installation in my environment. 

Step 1.1: Install Kubectl

To install kubectl on CentOS 7, add the upstream Kubernetes yum repository and install the kubectl package.
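The commands below are a sketch; the repository definition follows the upstream Kubernetes documentation at the time of writing, and the URLs may change over time:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# yum install -y kubectl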

Step 1.2: Install Python & VCD CLI
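CSE 3.0 requires Python 3.7.3 or later, while the python3 package that ships with CentOS 7 is older, so you will typically need to install a newer Python first (built from source or from a third-party repository). With that in place, a rough sketch of the remaining steps is shown below (pip3 is assumed to belong to the newer Python build; --user installs into /root/.local/bin):

# yum install -y gcc libffi-devel openssl-devel
# pip3 install --user --upgrade pip
# pip3 install --user vcd-cli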

To run the vcd-cli command from any directory on the CSE server, add the vcd-cli path to .bash_profile.

PATH=$PATH:$HOME/bin:/root/.local/bin

export PATH

# source .bash_profile

Verify that vcd-cli has been installed.

[root@cse ~]# vcd version
vcd-cli, VMware vCloud Director Command Line Interface, 24.0.1

Step 1.3: Install Container Service Extension

# pip3 install container-service-extension

Pip3 installs the CSE-associated dependencies automatically during the installation. In case you run into issues with the installation of any dependent package, uninstall the problematic package and install the correct version. In my environment, pip was complaining about the humanfriendly package.

ERROR: container-service-extension 3.0.0 has requirement humanfriendly <5.0, >=4.8, but you’ll have humanfriendly 8.2 which is incompatible.

To fix the issue, I had to re-install the package with the correct version.

# pip uninstall humanfriendly

# pip install -Iv humanfriendly==4.18

Verify that CSE has been installed.
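For example, the cse binary should now report the installed version:

[root@cse ~]# cse version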

Enable CSE Client

After installation of the CSE server, if you try running vcd cse commands, you’ll encounter an error as shown below:

[root@cse ~]# vcd cse
Error: No such command ‘cse’.

This error means that the CSE client is not enabled in vcd-cli. To enable the CSE client in vcd-cli, edit the ~/.vcd-cli/profiles.yaml file to include the following (at the end of the file):
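This is the extension module path documented for the CSE client; the block goes at the top level of the file:

extensions:
- container_service_extension.client.cse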

In the full profiles.yaml, this extensions block sits at the top level, after the existing profiles section.



Note: If the profiles.yaml file is not present on your system, then run the following command to generate it first.

[root@cse ~]# vcd login vcd.manish.lab <Org-Name> <Org-User> -i -w

Verify that the cse client has been installed.
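For example, the CSE client should now respond through vcd-cli:

[root@cse ~]# vcd cse version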

Create CSE Service Account

To facilitate the CSE Server interaction with VCD, create a user with the CSE Service Role. The role has all the VCD rights that CSE needs to function.

To create this role, run the following command:
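A sketch of the command I used; you are prompted for VCD system administrator credentials, and you should check cse create-service-role --help for the exact options in your CSE build:

[root@cse ~]# cse create-service-role vcd.manish.lab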

Note: The create-service-role command is only supported with VCD 10.2 and later. For older versions of VCD, manually create a user with system admin rights and use this user for CSE management. 

Log in to the VCD portal and verify that the role has been created.

Prepare VCD for CSE Installation

Before enabling CSE on VCD, ensure that you have configured the following items in VCD:

  • A dedicated organization and organization VDC for CSE. 
  • An Org VDC network connected to an external network (with internet connectivity).
  • Sufficient storage in the Org VDC to create vApps and publish them as templates.
  • Good network connectivity between the CSE Server and VCD to avoid intermittent failures in K8 templates upload/download operations.

You can create the above items directly from the VCD portal, or leverage the commands below to do so:
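The vcd-cli commands below are illustrative only; the org and catalog names are from my lab, and the org VDC and org network are often easier to create from the provider portal:

# log in as a system administrator
[root@cse ~]# vcd login vcd.manish.lab system administrator -i -w

# create a dedicated org and switch to it
[root@cse ~]# vcd org create cse_org 'CSE Organization'
[root@cse ~]# vcd org use cse_org

# create the catalog that will hold the Kubernetes templates
[root@cse ~]# vcd catalog create cse_catalog

# the org VDC and the routed/direct org network can be created from the
# provider portal, or see: vcd vdc create --help / vcd network --help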

Note: Ensure that the org network has a Static IP Pool or DHCP configured. This is very important, as the CSE server deploys VMs on this network and installs Kubernetes binaries from the internet. VCD should be able to assign an IP address to each VM during this process.

To be on the safe side, you can manually deploy a test VM on the org network and verify that you can reach the internet from it.

Enable CSE on VCD

Step 5.1: Generate CSE configuration file

The CSE server is controlled by a YAML configuration file. You can generate a sample file by running the following command:

[root@cse ~]# cse sample -o config.yaml

This command generates a config.yaml file which needs to be filled in with parameters specific to your environment. 

Important Note: From VCD 10.1 onwards, AMQP is no longer a mandatory configuration, as VCD now includes an inbuilt MQTT message bus. If you have already deployed RabbitMQ or a similar AMQP broker to work with VCD, you can use it in your configuration. To use MQTT with VCD, the minimum required VCD API version is 35.0. During the CSE installation phase, CSE will set up the MQTT exchange.

In my environment, I am not using AMQP and am relying on MQTT instead.

A sample filled-out config.yaml file is shown below:
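The values below are illustrative only; hostnames, credentials, catalog, network, and template names will differ in your environment, and the structure follows the CSE 3.0 sample file (the amqp section is omitted because MQTT is used):

mqtt:
  verify_ssl: false

vcd:
  host: vcd.manish.lab
  username: administrator
  password: <password>
  port: 443
  log: true
  verify: false

vcs:
- name: vcenter.manish.lab
  username: administrator@vsphere.local
  password: <password>
  verify: false

service:
  enforce_authorization: false
  log_wire: false
  telemetry:
    enable: true

broker:
  catalog: cse_catalog
  default_template_name: ubuntu-16.04_k8-1.18_weave-2.6.5
  default_template_revision: 1
  ip_allocation_mode: pool
  network: cse_org_network
  org: cse_org
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: '*'
  vdc: cse_vdc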

The config file has five mandatory sections (amqp/mqtt, vcd, vcs, service, and broker). To learn about the parameters in each section, please check the CSE official documentation.

Step 5.2: Create CSE encrypted config file

Starting with CSE 2.6.0, CSE server commands accept only encrypted configuration files by default. Run the command below to generate the encrypted config file:
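The encryption command prompts for a password of your choice and writes the encrypted copy alongside the plain-text file (the file names here are the ones used in the rest of this post):

[root@cse ~]# cse encrypt config.yaml --output encrypted-config.yaml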

When an encrypted configuration file is used with CSE server CLI commands, CSE prompts for the password to decrypt it. You can set the CSE_CONFIG_PASSWORD environment variable to disable the password prompt, which is really helpful when automating CSE installation via a script.

[root@cse ~]# CSE_CONFIG_PASSWORD=<Your Password>

[root@cse ~]# export CSE_CONFIG_PASSWORD

You should also restrict the encrypted config file so that it is readable and writable only by the root user.

[root@cse ~]# chmod 600 encrypted-config.yaml

Step 5.3: Install CSE in VCD

Run the following command to install CSE in VCD.

[root@cse ~]# cse install -c encrypted-config.yaml

During the installation process, the following events take place:

  1. CSE downloads the Kubernetes template OVAs from the URL specified in the encrypted configuration file.
  2. A VM is created from each downloaded OVA in the CSE Org.
  3. CSE customizes the deployed VMs by installing the specified K8s version, Docker, etc.
  4. The VMs are then exported as templates to the catalog defined in the CSE configuration file.

 

CSE installation takes roughly 45 minutes to complete. 

Validate the CSE installation by running the cse check command.
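For example, the following run validates the config file and, with --check-install, also verifies that CSE is properly installed and registered in VCD (check cse check --help for the exact flags in your build):

[root@cse ~]# cse check encrypted-config.yaml --check-install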

At the end of the installation process, you should see the templates below in the catalog created in the CSE Org.

Step 5.4: Run CSE Service

Once the K8s templates are installed in VCD, we can run the CSE server service by invoking the command:

[root@cse ~]# cse run -c encrypted-config.yaml

Controlling CSE service with systemctl

Create a script file to launch the CSE service and make it executable; a sample script is shown after the commands below.

[root@cse ~]# vim cse.sh

[root@cse ~]# chmod +x cse.sh
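A minimal cse.sh sketch; the config file path is the one used in this post, and the CSE_CONFIG_PASSWORD value is a placeholder:

#!/usr/bin/env bash
# launch the CSE server with the encrypted configuration file
export CSE_CONFIG_PASSWORD=<Your Password>
cse run -c /root/encrypted-config.yaml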

Create a cse.service unit file so that systemd can control it.

[root@cse ~]# vim /etc/systemd/system/cse.service
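A minimal unit file sketch; the script path assumes cse.sh was created in /root as above:

[Unit]
Description=Container Service Extension for VMware Cloud Director
After=network.target

[Service]
Type=simple
ExecStart=/root/cse.sh
Restart=always

[Install]
WantedBy=multi-user.target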

[root@cse ~]# systemctl enable cse

Created symlink from /etc/systemd/system/multi-user.target.wants/cse.service to /etc/systemd/system/cse.service.

[root@cse ~]# systemctl start cse

Now if we run the systemctl status cse command, we will see that the service is running, and since we have enabled it, it will persist across reboots.

Step 5.5: Register VCD Extension with CSE

Run the following commands to register VCD extension with CSE. 
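As a quick check, you can log in to VCD as a system administrator with vcd-cli and list the registered API extensions; the cse extension should appear in the output (a sketch, assuming the MQTT-based setup where cse install creates the extension automatically):

[root@cse ~]# vcd login vcd.manish.lab system administrator -i -w
[root@cse ~]# vcd system extension list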

Tenant Onboarding

To enable tenants to deploy Kubernetes clusters, the service provider has to complete the following tasks:

Step 6.1: Enable Tenant VDC for Native K8 Deployment

By default, when CSE 3.0.x is integrated with VCD 10.2, Kubernetes templates are restricted and not available for tenant use.

You can verify this by running the vcd cse ovdc list command and inspecting the K8s Runtime column; at this point, it shows empty.

The provider has to explicitly enable the tenant VDC for native K8s deployments by running the following commands:
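A sketch of the enablement, using placeholder tenant names (check vcd cse ovdc enable --help for the exact flag that selects the native runtime in your CSE build):

[root@cse ~]# vcd cse ovdc enable <tenant-ovdc-name> -o <tenant-org-name> -n
[root@cse ~]# vcd cse ovdc list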

The vcd cse ovdc enable command publishes the native placement policy onto the chosen ovdc.

Step 6.2: Grant CSE Rights to the Tenant Users

Unlike CSE 2.6.x, native Kubernetes cluster operations are no longer gated by CSE API extension-specific rights in CSE 3.0.x.

Now a new right bundle, cse:nativeCluster Entitlement, gets created in Cloud Director during CSE server installation, and this right bundle guards the native Kubernetes cluster operations.

The provider needs to grant the native Kubernetes cluster right bundle to the desired organizations and then grant the CSE rights to the Tenant Administrator role. The tenant administrator can then grant the cluster management rights to the desired tenant users.

Select the CSE Rights Bundle, click the Publish button, and select the tenants to which the CSE rights will be granted.

Step 6.3: Publish Container UI Plugin to Tenants

Starting with VCD 10.2, the Container UI Plugin is available out of the box. The provider can publish it to the desired tenants to offer Kubernetes services.

To publish the plugin, log in to VCD as a system admin user, navigate to More > Customize Portal, select the Container UI Plugin, and click the Publish button.

Select the Tenants to which the plugin will be published.

The tenants can now see the Kubernetes Container Clusters option in the tenant portal. 

And now they can create a Native Kubernetes cluster in their VDC. 

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.
