Securing TKG Workloads with Antrea and NSX - Part 2: Enable Antrea Integration with NSX

In the first part of this series of blog posts, I talked about how VMware Container Networking with Antrea addresses current business problems associated with a Kubernetes CNI deployment. I also discussed the benefits that VMware NSX offers when Antrea is integrated with NSX.

In this post, I will discuss how to enable the integration between Antrea and NSX. 

Antrea-NSX Integration Architecture

The diagram below, taken from VMware blogs, shows the high-level architecture of the Antrea-NSX integration.

The following excerpt from VMware blogs summarizes the architecture well.

Antrea NSX Adapter is a new component introduced to the standard Antrea cluster to make the integration possible. This component communicates with the K8s API and Antrea Controller and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via NSX APIs or UI, the policies are replicated to all the clusters as applicable. These policies will be received by the adapter, which in turn will create appropriate CRDs using K8s APIs.
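
To sanity-check the integration, you can confirm that policies defined in NSX land in the cluster as Antrea-native CRDs. Here is a minimal sketch using the Kubernetes Python client; the CRD group and version (crd.antrea.io/v1alpha1) are assumptions based on recent Antrea releases and may differ in your build.

```python
# Sketch: list Antrea ClusterNetworkPolicy objects, which the NSX adapter
# creates when NSX-defined policies are replicated to the cluster.
# Assumption: the CRDs are served at crd.antrea.io/v1alpha1.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
crd_api = client.CustomObjectsApi()

# ClusterNetworkPolicies are cluster-scoped, so list at cluster scope.
policies = crd_api.list_cluster_custom_object(
    group="crd.antrea.io",
    version="v1alpha1",
    plural="clusternetworkpolicies",
)

for item in policies.get("items", []):
    print(item["metadata"]["name"])
```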


Securing TKG Workloads with Antrea and NSX - Part 1: Introduction

What is a Container Network Interface?

Container Network Interface (CNI) is a framework for dynamically configuring networking resources in a Kubernetes cluster. A CNI plugin integrates with the kubelet to automatically configure networking between pods over an overlay or underlay network. Kubernetes uses CNI as the interface between network providers and Kubernetes pod networking.

A wide variety of CNIs (Antrea, Calico, etc.) can be used in a Kubernetes cluster. For more information on the supported CNIs, please read this article.
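
To make the interface concrete, here is an illustrative sketch of the kind of network configuration a CNI-aware runtime reads from /etc/cni/net.d and hands to plugin binaries. The reference "bridge" plugin and the values shown are generic examples, not Antrea's actual configuration.

```python
# Sketch: build and print a generic CNI network configuration list.
# The kubelet (via the container runtime) reads files like this from
# /etc/cni/net.d and invokes the named plugins to wire up each pod.
import json

cni_conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",          # reference CNI plugin binary
            "bridge": "cni0",          # Linux bridge pods attach to
            "isGateway": True,
            "ipMasq": True,
            "ipam": {                  # IP address management
                "type": "host-local",
                "subnet": "10.244.0.0/24",
            },
        }
    ],
}

# A runtime would read this from e.g. /etc/cni/net.d/10-example.conflist
print(json.dumps(cni_conflist, indent=2))
```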

Business Challenges with Current K8s Networking Solutions

The top business challenges associated with current CNI solutions can be categorized as follows:

  • Community support lacks predefined SLAs: Enterprises benefit from collaborative engineering and receive the latest innovations from open-source projects. However, it is a challenge for any enterprise to rely solely on community support to run its operations, because community support is best effort and comes with no predefined service-level agreement (SLA).

Deploying TKG 2.0 Clusters in TKGS on vSphere 8 to Trust Private CA Certificates

In a vSphere with Tanzu environment, when you enable Workload Management, the Supervisor cluster that gets deployed operates as the management cluster. Once the Supervisor cluster is deployed, you can provision two types of workload clusters:

  • Tanzu Kubernetes clusters.
  • Clusters based on a ClusterClass (aka Classy Cluster). 

TKG on vSphere 8 provides two sets of APIs for deploying workload clusters: the v1alpha3 API for Tanzu Kubernetes clusters (TKCs) and the v1beta1 API for Classy clusters.

When you deploy a cluster using the v1beta1 API, you get a Classy cluster (also known as a TKG 2.0 cluster), which is based on the default ClusterClass definition.
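
As a rough illustration of the v1beta1 path, the sketch below creates a Classy cluster through the Cluster API using the Kubernetes Python client. The namespace, TKR version, vmClass, and storageClass values are placeholders, and the ClusterClass name "tanzukubernetescluster" reflects the typical TKGS default; verify all of them against your Supervisor before use.

```python
# Sketch: create a Classy (TKG 2.0) cluster via the v1beta1 Cluster API.
# All names and versions below are placeholders; adjust to your environment.
from kubernetes import client, config

config.load_kube_config()  # context must point at the Supervisor cluster
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "demo-classy", "namespace": "demo-ns"},
    "spec": {
        "topology": {
            "class": "tanzukubernetescluster",    # assumed default ClusterClass
            "version": "v1.24.9+vmware.1-tkg.1",  # placeholder TKR version
            "controlPlane": {"replicas": 1},
            "workers": {
                "machineDeployments": [
                    {"class": "node-pool", "name": "np-1", "replicas": 2}
                ]
            },
            "variables": [
                {"name": "vmClass", "value": "best-effort-small"},
                {"name": "storageClass", "value": "demo-storage-policy"},
            ],
        }
    },
}

api.create_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",
    namespace="demo-ns",
    plural="clusters",
    body=cluster,
)
```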

By default, workload clusters don't trust any self-signed certificates. Prior to TKG 2.0 clusters, the easiest way to make Tanzu Kubernetes clusters (TKCs) trust a self-signed CA certificate was to edit tkgserviceconfigurations and define your trusted CAs there. This setting was then enforced on any newly deployed TKC.
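
For reference, a sketch of that pre-TKG 2.0 approach is shown below, patching the TkgServiceConfiguration on the Supervisor cluster with the Kubernetes Python client. The object name "tkg-service-configuration", the v1alpha1 API version, and the spec.trust.additionalTrustedCAs layout are assumptions drawn from typical TKGS setups; confirm them with kubectl get tkgserviceconfigurations before applying.

```python
# Sketch: add a trusted CA to the TkgServiceConfiguration so that newly
# deployed TKCs trust it. Names and schema below are assumptions; verify
# them against your Supervisor cluster.
import base64
from kubernetes import client, config

config.load_kube_config()  # context must point at the Supervisor cluster
api = client.CustomObjectsApi()

# The CA certificate is supplied base64-encoded.
with open("company-internal-ca.pem", "rb") as f:
    ca_b64 = base64.b64encode(f.read()).decode()

patch = {
    "spec": {
        "trust": {
            "additionalTrustedCAs": [
                {"name": "CompanyInternalCA", "data": ca_b64}
            ]
        }
    }
}

api.patch_cluster_custom_object(
    group="run.tanzu.vmware.com",
    version="v1alpha1",                # assumed API version
    plural="tkgserviceconfigurations",
    name="tkg-service-configuration",  # assumed singleton object name
    body=patch,
)
```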

For TKG 2.0 clusters, the process has changed a bit, and in this post I will walk through configuring it.