Securing TKG Workloads with Antrea and NSX, Part 2: Enable Antrea Integration with NSX

In the first part of this blog series, I talked about how VMware Container Networking with Antrea addresses the business problems commonly associated with a Kubernetes CNI deployment. I also discussed the benefits that VMware NSX brings when Antrea is integrated with NSX.

In this post, I will discuss how to enable the integration between Antrea and NSX. 

Antrea-NSX Integration Architecture

The diagram below, taken from the VMware blog, shows the high-level architecture of the Antrea-NSX integration.

The following excerpt from the VMware blog summarizes the architecture well.

Antrea NSX Adapter is a new component introduced to the standard Antrea cluster to make the integration possible. This component communicates with the K8s API and the Antrea Controller and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via the NSX APIs or UI, the policy is replicated to all the applicable clusters. These policies are received by the adapter, which in turn creates the appropriate CRDs using the K8s APIs.

The Antrea Controller, which watches these policies, runs the relevant computation and sends the results to the individual Antrea Agents for enforcement. As these policies are enforced, statistics are reported back to the Antrea Controller for aggregation. Because the cluster is integrated with NSX-T, these aggregated values are reported to the NSX-T adapter, which in turn reports them back to NSX-T, where they are made available to the admin.

Integration Prerequisites

  1. TKG management and workload clusters are deployed and running.
  2. Tanzu CLI is installed on the machine from which you manage the TKG clusters.
  3. NSX is licensed with NSX Data Center Advanced or Enterprise Plus.

Bill of Materials

The table below lists the software versions I am running in my lab.

Component                                  Version
vCenter Server                             7.0 U3n
ESXi                                       7.0 U3n
NSX                                        3.2.3
TKGm                                       2.3.0
Antrea Controller                          1.11.1
VMware Container Networking with Antrea    1.7.0

My TKG environment has a standalone management cluster and one workload cluster with multiple control plane and worker nodes for high availability.

TKG was deployed with default settings, so it runs the Antrea Controller and Agents out of the box.
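You can confirm this with a quick check (plain kubectl; the app=antrea label comes from the standard Antrea manifests):

    kubectl get pods -n kube-system -l app=antrea

The output should list the antrea-controller pod and one antrea-agent pod per node.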

Integrate Antrea with NSX

Step 1: Create a Self-Signed Certificate for the Antrea Container Cluster

A self-signed certificate is required to create a principal identity user account in NSX, which facilitates the integration between Antrea and NSX.
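The certificate can be generated with standard openssl; a minimal sketch is shown below. The cluster name tkg-wld01 is a hypothetical placeholder, and the CN must match your Antrea container cluster name (see the note in Step 2):

    # CN must equal the Antrea container cluster name (tkg-wld01 is a placeholder)
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout tkg-wld01.key -out tkg-wld01.crt \
      -subj "/CN=tkg-wld01"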

Keep the .key and .crt files safe, as you will need them in the next steps.

Step 2: Create a Principal Identity User

Antrea adapters communicate with NSX using a Principal Identity (PI) user. Principal Identity users are certificate-based accounts in NSX. Each Antrea container cluster requires its own PI user.

Important: The cluster name must be unique in NSX. The certificate common name (CN) and the PI user name must be the same as the cluster name. NSX does not support sharing certificates or PI users between clusters.
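Because the CN must match the cluster name, it is worth verifying it before uploading the certificate. A minimal check with standard openssl (the file name follows the hypothetical example from Step 1):

    openssl x509 -in tkg-wld01.crt -noout -subject   # should print the CN you set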

2.1: In the NSX UI, navigate to System > User Management > User Role Assignment, click Add, and select Principal Identity with Role.

2.2: For the user/group name and node ID, use the same name as your Antrea container cluster. Select the Enterprise Admin role and paste the contents of the .crt file (generated in Step 1) into the Certificate PEM field.

Step 3: Download and Deploy Antrea Interworking Adapter Pods

Important: Before downloading the adapter image and manifests, check the Antrea Controller version running in your Kubernetes cluster and consult the VMware Container Networking with Antrea release notes to find the matching version of the image file.

For example, VMware Container Networking with Antrea v1.7.0 is based on the Antrea v1.11.1 open-source release.

To check the Antrea Controller version, you can run a command like the one shown below:
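A minimal check with standard kubectl, assuming the TKG default of Antrea running in the kube-system namespace:

    # prints the controller image; the tag at the end reflects the Antrea version
    kubectl get deployment antrea-controller -n kube-system \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

In my case, the image tag ends in v1.11.1, matching the release-notes mapping above.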

Log in to the VMware Customer Connect portal and download the interworking adapter image and manifest files under the section "Antrea – NSX Interworking Adapter Image and Deployment Manifests".

On extracting the downloaded zip file, you get the following files:

  • interworking.yaml: YAML deployment manifest that registers an Antrea container cluster with NSX and deploys the interworking pods.
  • bootstrap-config.yaml: Contains the cluster name, the NSX Manager details, and the certificate data (a ConfigMap and a Secret) that the interworking pods need to connect to NSX Manager.
  • deregisterjob.yaml: YAML manifest to deregister an Antrea container cluster from NSX.
  • interworking-<version>.tar: Archive file containing the container images of the Management Plane Adapter and Central Control Plane Adapter.

3.1: Edit the bootstrap-config.yaml file and specify the following:

  • clusterName: Name of the Antrea cluster. This name is visible in the NSX UI/API and identifies the Antrea cluster.
  • NSXManagers: The IP addresses or FQDNs of the NSX Managers of an NSX instance. An NSX instance can have a single NSX Manager or a cluster of three NSX Managers.
  • tls.crt: One-line base64-encoded certificate data, generated with: cat tls.crt | base64 -w 0
  • tls.key: One-line base64-encoded key data, generated with: cat tls.key | base64 -w 0

Note: You already generated the .crt and .key files in Step 1.

A sample config file is shown below for reference:
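This is an illustrative sketch reconstructed from the fields described above, not the exact file from my lab. The cluster name, NSX Manager addresses, and the ConfigMap/Secret names (bootstrap-config, nsx-cert) are assumptions/placeholders; substitute your own values and keep the base64 data on a single line.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: vmware-system-antrea
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bootstrap-config           # name assumed for illustration
      namespace: vmware-system-antrea
    data:
      bootstrap.conf: |
        # clusterName must match the certificate CN and the PI user name
        clusterName: tkg-wld01
        # one NSX Manager, or a cluster of three
        NSXManagers: [172.16.10.11, 172.16.10.12, 172.16.10.13]
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: nsx-cert                   # name assumed for illustration
      namespace: vmware-system-antrea
    type: kubernetes.io/tls
    data:
      tls.crt: <output of: cat tls.crt | base64 -w 0>
      tls.key: <output of: cat tls.key | base64 -w 0>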

3.2: Edit interworking.yaml to point to the right images.

In my example, I have pointed all image references to projects.registry.vmware.com/antreainterworking/interworking-photon:0.11.1.

Note that there are five references to the image field; make sure you update each of them.
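A quick way to locate all of them (plain grep; not from the original post):

    grep -n 'image:' interworking.yaml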

3.3: Apply the YAML files to initiate the Antrea integration with NSX

Command: kubectl apply -f bootstrap-config.yaml -f interworking.yaml

The above YAMLs create the vmware-system-antrea namespace and deploy the register and interworking pods in that namespace.

3.4: Check the status of the pods
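A standard check with kubectl (the namespace comes from the manifests applied above):

    kubectl get pods -n vmware-system-antrea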

Step 4: Validate Antrea Integration in NSX

In the NSX Manager UI, navigate to Inventory > Containers > Clusters and verify that your Antrea cluster is listed there.

On the Namespaces page, you should also see the namespaces from the Kubernetes cluster reported back to NSX Manager with the CNI type Antrea.

Nodes can be seen in a similar way on the System > Fabric > Nodes > Container Clusters page. This confirms that NSX Manager is successfully polling inventory through the Antrea NSX adapter installed in the Kubernetes cluster.
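If the cluster does not appear in the NSX inventory, the interworking pod logs are the first place to look. The deployment and container names below are assumptions based on the interworking manifest, so verify them first:

    kubectl get all -n vmware-system-antrea
    kubectl logs -n vmware-system-antrea deploy/interworking -c mp-adapter   # names assumed; adjust to your manifest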

And that’s it for this post. In the next part of this blog series, I will showcase how to use the advanced features of Antrea and how to enforce network policies.

I hope you enjoyed reading this post. Feel free to share it on social media if you found it worthwhile.
