VCD Object Storage Extension-Part 1: Introduction & Architecture

Recently, I got the chance to work on setting up Cloudian Object Storage for VMware Cloud Director and to explore some use cases for object storage in conjunction with VCD. This blog series is aimed at jotting down the learnings and mistakes I encountered during the setup.

In the first part of this series, let's understand what the VCD Object Storage Extension is and how it works.

What is VCD Object Storage Extension?

VCD has evolved tremendously over the last couple of years, adding features such as the Container Service Extension, data protection integrations (Veeam and Rubrik), and more.

The newest addition to this portfolio is Object Storage, which has become one of the key pillars of a modern cloud platform. Object storage can now coexist with a typical block storage or vSAN implementation in VCD. Tenants can use object storage to store cold data such as vApp templates, media files, and DB backups.
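Because the extension exposes an S3-compatible API, tenants can push this cold data to their buckets with any standard S3 client. Below is a minimal sketch using the AWS CLI; the endpoint URL, bucket name, and credentials are illustrative placeholders, not values from my setup.

```bash
# Point the AWS CLI at the tenant's S3-compatible endpoint exposed by the extension
# (endpoint, bucket, and file names below are illustrative placeholders)
aws configure set aws_access_key_id <TENANT_ACCESS_KEY>
aws configure set aws_secret_access_key <TENANT_SECRET_KEY>

# Create a bucket for cold data
aws s3 mb s3://vapp-templates --endpoint-url https://ose.cloud.example.com

# Upload a vApp template export (OVA) as cold data
aws s3 cp ./app-template.ova s3://vapp-templates/ \
    --endpoint-url https://ose.cloud.example.com
```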

Install VCD Data Solutions Extension in an Airgap Environment

In my last post of the VCD series, I discussed the installation and configuration of the VCD Data Solutions extension. In this post, I will walk through configuring the same in an air-gapped environment.

In an air-gapped environment, artifacts are stored in an internal registry such as Harbor or JFrog Artifactory. To install the Data Solutions extension in an air-gapped environment, you must first relocate the artifacts from the VMware public registry to your internal registry.

Step 1: Relocate Artifacts

To relocate the artifacts, prepare a Linux machine with the imgpkg and docker utilities installed.

Run the following commands to relocate artifacts:
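The exact commands sit behind the excerpt, so the following is only a sketch of the typical imgpkg relocation flow; the bundle path and the internal registry name (harbor.lab.local) are placeholder assumptions.

```bash
# Log in to both the source (VMware public) and destination (internal) registries
docker login projects.registry.vmware.com
docker login harbor.lab.local   # placeholder internal registry

# Copy the extension bundle and all images it references to the internal registry
# (the bundle path below is illustrative; use the one documented for your release)
imgpkg copy \
  -b projects.registry.vmware.com/<data-solutions-bundle>:<version> \
  --to-repo harbor.lab.local/data-solutions/bundle
```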


How to Delete MQTT Enabled App Launchpad in VCD

Starting with VCD 10.2 and App Launchpad 2.0.0.1, it is possible to deploy App Launchpad using MQTT for communication with VCD.

VCD 10.5 introduced a new feature called Content Hub as a replacement for App Launchpad. Service providers running VCD 10.5.x are encouraged to provide container/VM applications to tenants by integrating Content Hub with VMware Marketplace and Helm repositories.

In this post, I will demonstrate how you can delete an MQTT-enabled App Launchpad extension from VCD.

Step 1: List Installed Extensions

This step's GET call returns a JSON response listing all installed extensions along with their IDs. From that list, filter out the ID of the App Launchpad extension.
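The request itself isn't shown in this excerpt. As a hedged illustration, listing extension services against the legacy admin API looks roughly like this; the /api/admin/extension/service path, the API version in the Accept header, and the hostname are assumptions to verify against your VCD release.

```bash
# Obtain a provider session token (hostname, credentials, and API version are placeholders)
curl -sk -X POST \
  -u 'administrator@system:<password>' \
  -H 'Accept: application/json;version=38.0' \
  -D - \
  https://vcd.example.com/cloudapi/1.0.0/sessions/provider
# Copy the X-VMWARE-VCLOUD-ACCESS-TOKEN header from the response, then:

# List registered extension services (assumed legacy endpoint) and note the
# ID of the App Launchpad entry
curl -sk \
  -H 'Accept: application/*+json;version=38.0' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  https://vcd.example.com/api/admin/extension/service
```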

Troubleshooting TMC Self-Managed Stuck Deployment in VCD

My previous blog post discussed the VCD Extension for Tanzu Mission Control and covered the end-to-end deployment steps. In this post, I will cover how to troubleshoot a stuck TMC self-managed deployment in VCD.

I was deploying TMC Self-Managed in a new environment, and during configuration I made a mistake by passing an incorrect value for the DNS zone, which led to a stuck deployment that did not terminate automatically. I waited a couple of hours for the task to fail, but it kept running, preventing me from redeploying with the correct configuration.

The deployment was stalled in the Creating phase and did not fail.

On checking the pods in the tmc-local namespace, many of them were stuck in either the “CreateContainerConfigError” or “CrashLoopBackOff” state.
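A few standard kubectl commands against the tmc-local namespace are usually enough to identify the unhealthy pods and the reason they are stuck (the pod name below is a placeholder):

```bash
# List all pods in the TMC Self-Managed namespace and spot the unhealthy ones
kubectl get pods -n tmc-local

# Inspect events and container status for a stuck pod
kubectl describe pod <stuck-pod-name> -n tmc-local

# Check container logs, including the previous restart for CrashLoopBackOff pods
kubectl logs <stuck-pod-name> -n tmc-local --previous
```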

In VCD, when I checked the failed task “Execute global ‘post-create’ action,” I found the installer complaining that the tmc package installation reconciliation had failed.

Securing TKG Workloads with Antrea and NSX-Part 2: Enable Antrea Integration with NSX

In the first part of this series of blog posts, I talked about how VMware Container Networking with Antrea addresses current business problems associated with a Kubernetes CNI deployment. I also discussed the benefits that VMware NSX offers when Antrea is integrated with NSX.

In this post, I will discuss how to enable the integration between Antrea and NSX. 

Antrea-NSX Integration Architecture

The diagram below is taken from the VMware blogs and shows the high-level architecture of the Antrea and NSX integration.

The following excerpt from the VMware blogs summarizes the architecture well.

Antrea NSX Adapter is a new component introduced to the standard Antrea cluster to make the integration possible. This component communicates with K8s API and Antrea Controller and connects to the NSX-T APIs. When a NSX-T admin defines a new policy via NSX APIs or UI, the policies are replicated to all the clusters as applicable. These policies will be received by the adapter which in turn will create appropriate CRDs using K8s APIs.
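Once the integration is active, a quick way to confirm that NSX-defined policies are reaching the cluster is to look for the Antrea ClusterNetworkPolicy CRDs the adapter creates. A rough check, assuming your Antrea version exposes the acnp short name:

```bash
# Antrea ClusterNetworkPolicies created from NSX-defined policies
kubectl get acnp

# Inspect a specific policy replicated by the NSX adapter (name is a placeholder)
kubectl describe acnp <policy-name>
```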


Securing TKG Workloads with Antrea and NSX-Part 1: Introduction

What is a Container Network Interface

Container Network Interface (CNI) is a framework for dynamically configuring networking resources in a Kubernetes cluster. CNI can integrate smoothly with the kubelet to enable the use of an overlay or underlay network to automatically configure the network between pods. Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.

There exists a wide variety of CNIs (Antrea, Calico, etc.) that can be used in a Kubernetes cluster. For more information on the supported CNIs, please read this article.

Business Challenges with Current K8s Networking Solutions

The top business challenges associated with current CNI solutions can be categorized as below:

  • Community support lacks predefined SLAs: Enterprises benefit from collaborative engineering and receive the latest innovations from open-source projects. However, it is a challenge for any enterprise to rely solely on community support to run its operations, because community support is best-effort and cannot provide a predefined service-level agreement (SLA).

TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging the Avi Kubernetes Operator (AKO), which delivers L4+L7 load balancing to the Kubernetes API endpoint and the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.

The Global Server Load Balancing (GSLB) function of NSX ALB enables load-balancing for globally distributed applications/workloads (usually, different data centers and public clouds). GSLB offers efficient traffic distribution across widely scattered application servers. This enables an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in data centers, organizations are deploying these workloads across multi-cluster/multi-site environments, creating the need for a way to load-balance applications globally.

To meet this requirement, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters.
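AMKO is typically installed with Helm into the avi-system namespace of each member cluster. The sketch below assumes the public AKO chart repository and the ako/amko chart name; verify both against the release documentation for your AMKO version.

```bash
# Add the chart repository that hosts the AKO/AMKO charts
# (repository URL and chart name are assumptions; check your AMKO release notes)
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update

# Install AMKO into the avi-system namespace with a prepared values file
helm install amko ako/amko \
  --namespace avi-system --create-namespace \
  -f amko-values.yaml
```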

Layer 7 Ingress in vSphere with Tanzu using NSX ALB

Introduction

vSphere with Tanzu currently doesn't provide the AKO orchestration feature out of the box. In other words, you can't automate the deployment of AKO pods based on cluster labels: no AkoDeploymentConfig is created when you enable workload management on a vSphere cluster, so nothing runs in your supervisor cluster to watch cluster labels and trigger automated AKO installation in the workload clusters.

However, this does not preclude you from using NSX ALB to provide layer 7 ingress for your workload clusters. In a vSphere with Tanzu environment, AKO is installed via Helm charts and is a completely self-managed solution; you are responsible for maintaining the AKO life cycle.
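As a rough sketch of what that Helm-based installation looks like (the chart repository URL, version, and values file are assumptions based on the AKO documentation, not steps lifted from this post):

```bash
# Add the AKO chart repository (URL is an assumption; check the AKO docs for your version)
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update

# Install AKO 1.6.2 into the avi-system namespace of the workload cluster,
# supplying controller details and cluster settings in values.yaml
kubectl create namespace avi-system
helm install ako ako/ako --version 1.6.2 \
  --namespace avi-system \
  -f values.yaml
```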

My Lab Setup

My lab’s bill of materials is shown below.

Component              Version
NSX ALB (Enterprise)   20.1.7
AKO                    1.6.2
vSphere                7.0 U3c
Helm                   3.7.4

The current setup of the NSX ALB is shown in the table below.

Configuring L7 Ingress with NSX Advanced Load Balancer

NSX Advanced Load Balancer provides L4+L7 load balancing through a Kubernetes operator (AKO) that integrates with the Kubernetes API to manage the lifecycle of load-balancing and ingress resources for workloads. AKO runs as a pod in Tanzu Kubernetes clusters and provides ingress controller and load-balancing functionality. It stays in sync with the required Kubernetes objects and calls the NSX ALB Controller APIs to deploy the Ingresses and Services and place them on the Service Engines.

In this post, I will discuss implementing ingress control for a sample application and see NSX ALB in action.

What is Kubernetes Ingress?

As per Kubernetes documentation:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

How do I implement NSX ALB as an ingress controller?

If you have deployed AKO via Helm, the ingress behavior is controlled by parameters in its values.yaml file.
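The excerpt cuts off before the values.yaml parameters, so here instead is a minimal sketch of consuming AKO from the application side; the hostname, backend service, and the avi-lb IngressClass name are assumptions for illustration.

```bash
# Apply a standard Kubernetes Ingress that AKO picks up and programs on the
# Avi Service Engines (hostname, service name, and IngressClass are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: avi-lb
  rules:
  - host: web.apps.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
EOF

# AKO creates a virtual service on NSX ALB; the allocated VIP shows up here
kubectl get ingress web-ingress
```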

Backing Up Stateful Applications using TMC Data Protection

Introduction

Kubernetes is frequently thought of as a platform for stateless workloads because the majority of its resources are ephemeral. However, as Kubernetes grows in popularity, enterprises are deploying more and more stateful apps. Because stateful workloads require persistent storage for application data, you can no longer simply reload them in the event of a disaster.

As businesses invest extensively in Kubernetes and deploy more and more containerized applications across multi-clouds, providing adequate data protection in a distributed environment becomes a challenge that must be addressed.

Data protection in Tanzu Mission Control (TMC) is provided by Velero, an open-source project. Velero backups typically include application and cluster data such as config maps, custom resource definitions, and secrets, which are then re-applied to a cluster during restoration. Resources that use a persistent volume are backed up using Restic.
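TMC drives these backups from its console, but Velero does the work underneath. As a hedged illustration of the equivalent CLI operation (the namespace and backup names are placeholders, and --default-volumes-to-restic applies to the Restic-based volume backup described above):

```bash
# Back up a namespace, including persistent volume data via Restic
# (names are placeholders; TMC normally triggers this through its Data Protection UI)
velero backup create guestbook-backup \
  --include-namespaces guestbook \
  --default-volumes-to-restic

# Check backup status and, after a disaster, restore from it
velero backup describe guestbook-backup
velero restore create --from-backup guestbook-backup
```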

In this post, I'll show how to back up and recover a stateful application running in a Tanzu Kubernetes cluster.