Introduction
VMware Container Service Extension (CSE) is an extension to Cloud Director that enables VCD cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE integration with VCD allows CSPs to provide a true developer-ready cloud offering to VCD tenants. Tenants can quickly deploy Kubernetes clusters in just a few clicks directly from the VCD portal.
Cloud providers upload customized Kubernetes templates to public catalogs, which tenants leverage to deploy K8s clusters in self-contained vApps. Once a K8s cluster is available, developers can use their native Kubernetes tooling to interact with it.
To know more about the architecture and interaction of CSE components, please see my previous blog on this topic.
Container Service Extension 3.x went GA earlier this year and brought several new features and enhancements. One of them is support for Tanzu Kubernetes Grid multi-cloud (TKGm) for K8s deployments, which unlocks the full potential of consistent upstream Kubernetes in VCD-powered clouds.
To know more about what’s new in CSE 3.x, please see the VMware official announcement blog.
In this post, I will only demonstrate the steps for enabling native Kubernetes in VCD. I plan to publish a separate blog on TKGm integration with VCD.
Deploy CSE Server
The CSE server can be deployed on any Linux OS with the CSE Python module and VCD CLI installed on it.
I have deployed a CentOS 7 VM with 1 vCPU, 6 GB RAM, and 100 GB storage for CSE Server installation in my environment.
Step 1.1: Install Kubectl
To install kubectl in CentOS 7, run the following commands:
[root@cse ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

[root@cse ~]# yum install -y kubectl
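You can quickly confirm that kubectl installed correctly by checking the client version (the reported version will simply be whatever the Kubernetes yum repo currently provides):

[root@cse ~]# kubectl version --client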
Step 1.2: Install Python & VCD CLI
[root@cse ~]# yum install -y yum-utils
[root@cse ~]# yum groupinstall -y development
[root@cse ~]# yum -y install python38 python38-pip python38-devel
[root@cse ~]# easy_install-3.8 pip
[root@cse ~]# pip3 install --user vcd-cli
To run the vcd-cli command from any directory on the CSE server, add the vcd-cli path to .bash_profile and then source it.
PATH=$PATH:$HOME/bin:/root/.local/bin
export PATH
# source .bash_profile
Verify that vcd-cli has been installed.
[root@cse ~]# vcd version
vcd-cli, VMware vCloud Director Command Line Interface, 24.0.1
Step 1.3: Install Container Service Extension
# pip3 install container-service-extension
Pip3 automatically installs the dependencies associated with CSE during the installation. If you run into issues with the installation of any dependent package, uninstall the problematic package and install the correct version. In my environment, pip complained about the humanfriendly package.
ERROR: container-service-extension 3.0.0 has requirement humanfriendly <5.0, >=4.8, but you’ll have humanfriendly 8.2 which is incompatible.
To fix the issue, I re-installed the package with the correct version.
# pip uninstall humanfriendly
# pip install -Iv humanfriendly==4.18
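After reinstalling, you can ask pip to re-validate the installed packages and confirm which version of the package is now present; pip3 check reports any remaining broken requirements:

[root@cse ~]# pip3 check
[root@cse ~]# pip3 show humanfriendly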
Verify that CSE has been installed.
[root@cse ~]# cse version
CSE, Container Service Extension for VMware vCloud Director, version 3.0.3
Enable CSE Client
After installation of the CSE server, if you try running vcd cse commands, you’ll encounter an error as shown below:
[root@cse ~]# vcd cse
Error: No such command ‘cse’.
This error means that the CSE client is not enabled in vcd-cli. To enable the CSE client, edit ~/.vcd-cli/profiles.yaml and add the following at the end of the file:
extensions:
- container_service_extension.client.cse
A sample profiles.yaml file is shown below:
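This is only an illustrative, trimmed-down sketch; the profile entries, tokens, and field values in your file will reflect your own environment and login session. The important part is the extensions block at the very end:

active: default
profiles:
- name: default
  host: vcd.manish.lab
  org: System
  user: admin
  token: <your-session-token>
extensions:
- container_service_extension.client.cse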
Note: If the profiles.yaml file is not present on your system, then run the following command to generate it first.
[root@cse ~]# vcd login vcd.manish.lab <Org-Name> <Org-User> -i -w
Verify that the CSE client has been enabled.
[root@cse ~]# vcd cse version
CSE, Container Service Extension for VMware vCloud Director, version 3.0.3
Create CSE Service Account
To facilitate the CSE server's interaction with VCD, create a user with the CSE Service Role. This role has all the VCD rights that CSE needs to function.
To create this role, run the following command:
[root@cse ~]# cse create-service-role vcd.manish.lab -s
Username for System Administrator: admin
Password for admin:
Connecting to vCD: vcd.manish.lab
Connected to vCD as system administrator: admin
Creating CSE Service Role...
Successfully created CSE Service Role
Note: The create-service-role command is only supported with VCD 10.2 and later. For older versions of VCD, manually create a user with system administrator rights and use that user for CSE management.
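For reference, one way to create such a user from vcd-cli could look like the following; the username here is just an example, and you should check vcd user create --help for the exact options available in your vcd-cli version:

[root@cse ~]# vcd login vcd.manish.lab System admin -i -w
[root@cse ~]# vcd user create cse-admin '<Password>' 'System Administrator'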
Login to the VCD portal and verify that the role has been created.
Prepare VCD for CSE Installation
Before enabling CSE on VCD, ensure that you have configured the following items in VCD:
- A dedicated organization and organization VDC for CSE.
- An Org VDC network connected to an external network (with internet connectivity).
- Sufficient storage in the Org VDC to create vApps and publish them as templates.
- Good network connectivity between the CSE server and VCD to avoid intermittent failures during K8s template upload/download operations.
You can create the above items directly from the VCD portal, or leverage the commands below:
Create the CSE Org:

[root@cse ~]# vcd org create --enabled <CSE-Org-Name>

Switch to the org and create a VDC under it:

[root@cse ~]# vcd org use <CSE-Org-Name>
[root@cse ~]# vcd vdc create <CSE-VDC-Name> --provider-vdc <PVDC-Name> --allocation-model 'AllocationVApp' --storage-profile <Storage-Profile-Name>

Switch to the newly created VDC and add an org network for outbound connectivity. It's assumed that an external network already exists in VCD:

[root@cse ~]# vcd vdc use <CSE-VDC-Name>
[root@cse ~]# vcd network direct create <Org-N/W-Name> --parent <External-N/W-Name> --shared
Note: Ensure that the org network has a static IP pool or DHCP configured. This is very important because the CSE server deploys VMs on this network and installs Kubernetes binaries from the internet, so VCD must be able to assign an IP address to each VM during this process.
To be on the safe side, you can manually deploy a test VM on the org network and verify that you can reach the internet from it.
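For example, from a test VM attached to the org network, a quick check against the template repository referenced later in the CSE configuration could look like this (any public endpoint will do; the URL below is simply the one used in my config.yaml):

# ping -c 3 raw.githubusercontent.com
# curl -I https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml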
Enable CSE on VCD
Step 5.1: Generate CSE configuration file
The CSE server is controlled by a YAML configuration file. You can generate a sample file by running the following command:
[root@cse ~]# cse sample -o config.yaml
This command generates a config.yaml file which needs to be filled in with parameters specific to your environment.
Important Note: From VCD 10.1 onwards, AMQP is no longer a mandatory configuration because VCD now includes a built-in MQTT message bus. If you have already deployed RabbitMQ or a similar AMQP broker to work with VCD, you can use that in your configuration. To use MQTT with VCD, the minimum VCD API version required is 35.0. During the CSE installation phase, CSE will set up the MQTT exchange.
In my environment, I am not using AMQP and relying on MQTT.
A sample filled-out config.yaml is shown below:
# Only one of the amqp or mqtt sections should be present.
#amqp:
#  exchange: cse-ext
#  host: amqp.vmware.com
#  password: guest
#  port: 5672
#  prefix: vcd
#  routing_key: cse
#  username: guest
#  vhost: /

mqtt:
  verify_ssl: false

vcd:
  api_version: '35.0'
  host: vcd.manish.lab
  log: true
  password: VMware1!
  port: 443
  username: admin
  verify: false

vcs:
- name: compute-vc01.manish.lab
  password: VMware1!
  username: administrator@vsphere.local
  verify: false

service:
  enforce_authorization: false
  log_wire: false
  processors: 15
  telemetry:
    enable: true

broker:
  catalog: CSE-Catalog
  default_template_name: photon-v2_k8-1.14_weave-2.5.2
  default_template_revision: 3
  ip_allocation_mode: pool
  network: CSE-Prod
  org: CSE
  remote_template_cookbook_url: http://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: 'vSAN-Default'
  vdc: CSE-VDC
The config file has five mandatory sections (amqp/mqtt, vcd, vcs, service, and broker). To learn more about the parameters in each section, please check the CSE official documentation.
Step 5.2: Create CSE encrypted config file
Starting with CSE 2.6.0, CSE server commands accept only encrypted configuration files by default. Run the command below to generate the encrypted config file:
[root@cse ~]# cse encrypt config.yaml --output encrypted-config.yaml
Required Python version: >= 3.7.3
Installed Python version: 3.8.6 (default, Jan 29 2021, 17:38:16)
[GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
Password for config file encryption:
Encryption successful.
When an encrypted configuration file is used with CSE server CLI commands, CSE prompts the user for the password to decrypt it. You can set the CSE_CONFIG_PASSWORD environment variable to disable the password prompt, which is really helpful when automating CSE installation via a script.
[root@cse ~]# CSE_CONFIG_PASSWORD=<Your Password>
[root@cse ~]# export CSE_CONFIG_PASSWORD
You also have to restrict access to the encrypted config file so that only the root user can read it.
[root@cse ~]# chmod 600 encrypted-config.yaml
Step 5.3: Install CSE in VCD
Run the following command to install CSE in VCD.
[root@cse ~]# cse install -c encrypted-config.yaml
During the installation process, the following events take place:
- CSE will download the Kubernetes templates from the URL specified in the encrypted configuration file.
- VMs are created in the CSE Org from the downloaded OVAs.
- CSE customizes the deployed VMs by installing the specific K8s version, Docker, and other dependencies.
- The VMs are then exported as templates into the catalog defined in the CSE configuration file.
CSE installation takes roughly 45 minutes to complete.
Validate the CSE installation by running the cse check command.
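In my environment that looks like the command below; the --check-install flag additionally verifies the extension registration and templates, and cse check --help lists the exact options supported by your CSE version:

[root@cse ~]# cse check encrypted-config.yaml --check-install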
At the end of the installation process, you should see the Kubernetes templates in the catalog created in the CSE Org.
Step 5.4: Run CSE Service
Once the K8s templates are installed in VCD, we can run the CSE server service by invoking the command:
[root@cse ~]# cse run -c encrypted-config.yaml
--- snipped output ---
Validating CSE installation according to config file
AMQP exchange 'cse-exchange' exists
CSE on vCD is currently enabled
Found catalog 'CSE'
CSE installation is valid
Started thread 'MessageConsumer-0 (140157509945088)'
Started thread 'MessageConsumer-1 (140157501552384)'
Started thread 'MessageConsumer-2 (140157493159680)'
Started thread 'MessageConsumer-3 (140157484766976)'
Started thread 'MessageConsumer-4 (140157476374272)'
Container Service Extension for vCloud Director
Server running using config file: /root/encrypted-config.yaml
Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
waiting for requests (ctrl+c to close)
Controlling CSE service with systemctl
Create a script file as shown below:
[root@cse ~]# vim cse.sh
#!/usr/bin/env bash

export CSE_CONFIG_PASSWORD='VMware1!'
/root/.local/bin/cse run -c /root/encrypted-config.yaml
[root@cse ~]# chmod +x cse.sh
Create a cse.service file so that systemd can control the service.
[root@cse ~]# vim /etc/systemd/system/cse.service
[Service]
ExecStart=/bin/sh /root/cse.sh
Type=simple
User=root
WorkingDirectory=/root
Restart=always

[Install]
WantedBy=multi-user.target

After saving the unit file, reload systemd so it picks up the new service:

[root@cse ~]# systemctl daemon-reload
[root@cse ~]# systemctl enable cse
Created symlink from /etc/systemd/system/multi-user.target.wants/cse.service to /etc/systemd/system/cse.service.
[root@cse ~]# systemctl start cse
Now if we run the systemctl status cse command, we will see that the service is running, and since we have enabled it, it will persist across reboots.
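In addition to the CSE log files mentioned earlier, the standard systemd tooling can be used to check on the service and follow its output:

[root@cse ~]# systemctl status cse
[root@cse ~]# journalctl -u cse -f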
Step 5.5: Register VCD Extension with CSE
Run the following commands to register the CSE extension with VCD and verify the registration.
[root@cse ~]# vcd system extension create cse cse cse vcdext '/api/cse, /api/cse/.*, /api/cse/.*/.*'
Extension registered.

[root@cse ~]# vcd system extension list
enabled    exchange    isAuthorizationEnabled    name    namespace    priority    routingKey
---------  ----------  ------------------------  ------  -----------  ----------  ------------
true       vcdext      false                     cse     cse          0           cse

[root@cse ~]# vcd system extension info cse
property                value
----------------------  ------------------------------------
enabled                 true
exchange                vcdext
filter_1                /api/cse/.*
filter_2                /api/cse/.*/.*
filter_3                /api/cse
id                      9ebfdb46-bddd-4536-b6f8-31c9a2919984
isAuthorizationEnabled  false
name                    cse
namespace               cse
priority                0
routingKey              cse
Tenant Onboarding
To enable tenants to deploy Kubernetes clusters, the service provider has to complete the following tasks:
Step 6.1: Enable Tenant VDC for Native K8 Deployment
By default, when CSE 3.0.x is integrated with VCD 10.2, Kubernetes templates are restricted from use by tenants.
You can verify this by running the vcd cse ovdc list command and inspecting the K8s Runtime column. At this point, the column is empty.
# vcd cse ovdc list
Name      ID                                    K8s Runtime
--------  ------------------------------------  -------------
MJ-VDC01  d7e583bc-6161-4857-ae7b-2db1f5717fff  []
CSE-VDC   c9b8c95f-de4c-4141-85e4-fc19455e4121  []
The provider has to explicitly enable the tenant VDC for native K8s deployments by running the following commands:
# vcd cse ovdc enable -o vStellar MJ-VDC01 -n
OVDC Update: Updating OVDC placement policies
task: 5c88d4f3-94f1-4e26-9221-635c891ed2d3, Operation success, result: success
The vcd cse ovdc enable command publishes the native placement policy onto the chosen ovdc.
# vcd cse ovdc list
Name      ID                                    K8s Runtime
--------  ------------------------------------  -------------
CSE-VDC   dced2924-559f-4924-82bd-b5154b5c625d  []
MJ-VDC01  9cb0215c-f27d-4079-b403-c881854aa045  ['native']
Step 6.2: Grant CSE Rights to the Tenant Users
Unlike CSE 2.6.x, native Kubernetes cluster operations are no longer gated by CSE API extension-specific rights in CSE 3.0.x.
Instead, a new rights bundle named cse:nativeCluster entitlement is created in Cloud Director during CSE server installation, and this rights bundle guards the native Kubernetes cluster operations.
The provider needs to publish the native Kubernetes cluster rights bundle to the desired organizations and then grant the CSE rights to the Tenant Administrator role. The tenant administrator can then grant the cluster management rights to the desired tenant users.
Select the CSE rights bundle, click the Publish button, and select the tenants to which the CSE rights will be granted.
Step 6.3: Publish Container UI Plugin to Tenants
Starting with VCD 10.2, the Container UI Plugin is available out of the box. The provider can publish it to the desired tenants to offer Kubernetes services.
To publish the plugin, log in to VCD as a system administrator and navigate to More > Customize Portal. Select the Container UI Plugin and click the Publish button.
Select the Tenants to which the plugin will be published.
The tenants can now see the Kubernetes Container Clusters option in the tenant portal.
And now they can create a Native Kubernetes cluster in their VDC.
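Once a cluster is up, tenant users can also work with it from vcd-cli and kubectl. As a rough sketch (the cluster name here is just an example, and vcd cse cluster --help shows the full command set for your CSE version), the kubeconfig can be pulled down and used like this:

# vcd cse cluster config demo-cluster > demo-cluster.kubeconfig
# kubectl --kubeconfig demo-cluster.kubeconfig get nodes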
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.