In the first part of this series of blog posts, I talked about how VMware Container Networking with Antrea addresses current business problems associated with a Kubernetes CNI deployment. I also discussed the benefits that VMware NSX offers when Antrea is integrated with NSX.
In this post, I will discuss how to enable the integration between Antrea and NSX.
Antrea-NSX Integration Architecture
The diagram below, taken from the VMware blogs, shows the high-level architecture of the Antrea and NSX integration.
The following excerpt from the VMware blogs summarizes the architecture well.
Antrea NSX Adapter is a new component introduced to the standard Antrea cluster to make the integration possible. This component communicates with the K8s API and the Antrea Controller and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via the NSX APIs or UI, the policy is replicated to all the clusters as applicable. These policies are received by the adapter, which in turn creates the appropriate CRDs using the K8s APIs.
The Antrea Controller, which watches these policies, runs the relevant computation and sends the results to the individual Antrea Agents for enforcement. As these policies are enforced, statistics are reported back to the Antrea Controller for aggregation. Because the cluster is integrated with NSX-T, the aggregated values are reported to the NSX-T adapter, which in turn reports them back to NSX-T, where they are made available to the admin.
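To make that flow concrete, the CRDs the adapter creates are Antrea-native policy objects. Below is a hypothetical example of the kind of resource that could result from an NSX distributed firewall rule: an Antrea ClusterNetworkPolicy (crd.antrea.io/v1alpha1 in the Antrea v1.11 line) that allows only web pods to reach db pods. The names, labels, tier, and port here are illustrative, not what NSX actually generates.

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-web-to-db        # illustrative; NSX derives real names from its policy/rule IDs
spec:
  priority: 5
  tier: securityops
  appliedTo:
    - podSelector:
        matchLabels:
          app: db
  ingress:
    - action: Allow
      from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 3306
    - action: Drop             # everything else destined to the db pods is dropped
      from:
        - podSelector: {}
```

The Antrea Controller computes the spans for such a policy and pushes the relevant rules to each agent, exactly as described in the excerpt above.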
Integration Prerequisites
- TKG management and workload clusters are deployed and are in a running state.
- Tanzu CLI is installed on the machine from where you are managing the TKG clusters.
- NSX is licensed with NSX Data Center Advanced or Enterprise Plus.
Bill of Materials
The table below lists the software versions running in my lab.
Component | Version |
vCenter Server | 7.0 u3n |
ESXi | 7.0 u3n |
NSX | 3.2.3 |
TKGm | 2.3.0 |
Antrea Controller | 1.11.1 |
VMware Container Networking with Antrea | 1.7.0 |
In my TKG environment, I have a standalone management cluster and one workload cluster, each with multiple control plane and worker nodes for high availability.
[root@bootstrap ~]# tanzu mc get
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN  TKR
  tkg-mgmt  tkg-system  running  3/3           3/3      v1.26.5+vmware.2  management  prod  v1.26.5---vmware.2-tkg.1

[root@bootstrap ~]# tanzu cluster list
  NAME    NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN  TKR
  svctkc  default    running  3/3           3/3      v1.26.5+vmware.2  <none>  prod  v1.26.5---vmware.2-tkg.1
TKG was deployed using default settings, so it’s running Antrea Controller and Agents by default.
[root@bootstrap ~]# kubectl get po -A | grep antrea
kube-system   antrea-agent-564vr                  2/2   Running
kube-system   antrea-agent-bnmnl                  2/2   Running
kube-system   antrea-agent-dsrtg                  2/2   Running
kube-system   antrea-agent-vwbl8                  2/2   Running
kube-system   antrea-agent-xlgp6                  2/2   Running
kube-system   antrea-agent-xln7f                  2/2   Running
kube-system   antrea-controller-9bb9bb6b5-dc58h   1/1   Running
Integrate Antrea with NSX
Step 1: Create a Self-Signed Certificate for the Antrea Container Cluster
A self-signed certificate is required to create a principal identity user account in NSX which facilitates the integration between Antrea and NSX.
# openssl genrsa -out svctkc.key 2048
# openssl req -new -key svctkc.key -out svctkc.csr -subj "/C=IN/ST=KA/L=Bengaluru/O=VMware/OU=Tanzu-Solution-Engineering/CN=svctkc"
# openssl x509 -req -days 3650 -sha256 -in svctkc.csr -signkey svctkc.key -out svctkc.crt
Keep the .key and .crt files safe as you need them in the next steps.
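Before uploading the certificate, it can be worth sanity-checking its subject and validity window. This is a minimal sketch using openssl; it re-creates the Step 1 files with a shortened subject only if they are missing, so skip that part if you already have them.

```shell
# Re-create the Step 1 key/cert if they are not already present
# (shortened -subj here; use your full DN as in Step 1).
if [ ! -f svctkc.crt ]; then
    openssl genrsa -out svctkc.key 2048
    openssl req -new -key svctkc.key -out svctkc.csr -subj "/CN=svctkc"
    openssl x509 -req -days 3650 -sha256 -in svctkc.csr -signkey svctkc.key -out svctkc.crt
fi

# Print the subject (must match the cluster/PI user name) and the expiry date.
openssl x509 -in svctkc.crt -noout -subject -enddate
```

The subject's CN must match the cluster name you will use for the PI user in the next step.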
Step 2: Create a Principal Identity User
Antrea adapters communicate with NSX using a Principal Identity (PI) user. Principal Identity users are certificate-based accounts in NSX. Each Antrea container cluster requires a separate PI user.
Important: The cluster name must be unique in NSX. The certificate common name and the PI user name must be the same as the cluster name. NSX does not support sharing certificates and PI users between clusters.
2.1: In the NSX UI, navigate to the System > User Management > User Role Assignment tab, click the Add button, and select the Principal Identity with Role option.
2.2: For the user/group name and node ID, use the same name as your Antrea container cluster. Select the Enterprise Admin role and paste the content of the .crt file (generated in Step 1) into the Certificate PEM field.
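If you prefer automating this over clicking through the UI, NSX also exposes PI user creation through its REST API. The sketch below builds the JSON payload from the Step 1 certificate (a placeholder cert is fabricated only if the file is missing, purely so the snippet runs standalone). The endpoint and field names are taken from the NSX-T trust-management API; verify them against the API guide for your NSX version before relying on them.

```shell
# Use the Step 1 certificate; fabricate a placeholder only if it is missing,
# so this sketch is self-contained.
[ -f svctkc.crt ] || printf -- '-----BEGIN CERTIFICATE-----\nMIIBplaceholder\n-----END CERTIFICATE-----\n' > svctkc.crt

# Flatten the PEM into a single JSON-safe line (literal \n escapes).
CERT_PEM=$(awk 'NF {sub(/\r/,""); printf "%s\\n", $0}' svctkc.crt)

# name and node_id must both match the Antrea container cluster name.
cat > pi-user.json <<EOF
{
  "name": "svctkc",
  "node_id": "svctkc",
  "role": "enterprise_admin",
  "certificate_pem": "${CERT_PEM}"
}
EOF

# POST it to the NSX Manager (uncomment and substitute your NSX FQDN):
# curl -k -u admin -H 'Content-Type: application/json' -d @pi-user.json \
#   "https://<nsx-manager-fqdn>/api/v1/trust-management/principal-identities/with-certificate"
```

Either way, the result in NSX must be a PI user whose name, node ID, and certificate CN all equal the cluster name.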
Step 3: Download and Deploy Antrea Interworking Adapter Pods
Important: Before downloading the adapter image and manifests, please check the Antrea controller version running in your Kubernetes cluster and check the VMware Container Networking with Antrea release notes to find the right version of the image file to be used.
For example: VMware Container Networking with Antrea v1.7.0 is based on the Antrea v1.11.1 open-source release.
To check the Antrea Controller version, you can run the commands shown below:
# kubectl get pod -n kube-system -l component=antrea-controller
NAME                                 READY   STATUS    RESTARTS   AGE
antrea-controller-64b8c8656b-c84l2   1/1     Running   0          4d2h

# kubectl exec -it antrea-controller-64b8c8656b-c84l2 -n kube-system -- antctl version
antctlVersion: v1.11.1-4776f66
controllerVersion: v1.11.1-4776f66
Log in to the VMware Customer Connect portal and download the interworking adapter image and manifest files under the section “Antrea – NSX Interworking Adapter Image and Deployment Manifests”.
On extracting the downloaded zip file, you get the following files:
- interworking.yaml: YAML deployment manifest that registers an Antrea container cluster to NSX; it contains the image references and other deployment details for the interworking pods.
- bootstrap-config.yaml: Contains a ConfigMap with the cluster name and NSX Manager details, and a Secret with the certificate data that the interworking pods need to connect to NSX Manager.
- deregisterjob.yaml: YAML manifest file to deregister an Antrea container cluster from NSX.
- interworking-version.tar: Archive file for the container images of the Management Plane Adapter and Central Control Plane Adapter.
3.1: Edit the bootstrap-config.yaml file and specify the following:
- clusterName: Name of the Antrea cluster. This name is visible in the NSX UI/API and identifies the Antrea cluster.
- NSXManagers: The IP addresses or FQDNs of the NSX Managers of an NSX instance. An NSX instance can have one NSX Manager or a cluster of three NSX Managers.
- tls.crt: # One line base64 encoded data. This can be generated by the command: cat tls.crt | base64 -w 0
- tls.key: # One line base64 encoded data. This can be generated by the command: cat tls.key | base64 -w 0
Note: You have generated the .crt and .key files in Step 1 already.
A sample config file is shown below for reference:
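Here is a sketch of what the edited bootstrap-config.yaml could look like, using my lab's cluster name and a placeholder NSX Manager FQDN. Treat it as illustrative: the exact layout should be taken from the file shipped in the download, and the base64 strings below are truncated placeholders.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vmware-system-antrea
  labels:
    app: antrea-interworking
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: bootstrap-config
  namespace: vmware-system-antrea
data:
  bootstrap.conf: |
    # must match the PI user name and certificate CN created earlier
    clusterName: svctkc
    # one NSX Manager, or a cluster of three
    NSXManagers: [nsx01.example.com]
---
apiVersion: v1
kind: Secret
metadata:
  name: nsx-cert
  namespace: vmware-system-antrea
type: kubernetes.io/tls
data:
  # one-line base64 of the files generated in Step 1 (truncated placeholders)
  tls.crt: LS0tLS1CRUdJTi...
  tls.key: LS0tLS1CRUdJTi...
```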
3.2: Edit the interworking.yaml file to point to the right images.
In my example, I have pointed all image references to projects.registry.vmware.com/antreainterworking/interworking-photon:0.11.1
Note that there are five references to the image field; you have to update each one of them.
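Rather than editing the five references by hand, a sed one-liner can rewrite them all. This is a sketch that assumes the stock image lines all contain the string "interworking" (check the manifest first); to keep it runnable as-is it fabricates a two-line stand-in file, so point it at your real interworking.yaml instead.

```shell
# Stand-in for the downloaded manifest so the sketch is runnable; the real
# interworking.yaml has five image: references, not two.
printf 'image: old-registry/interworking:0.0.1\nimage: old-registry/interworking:0.0.1\n' > interworking.yaml

NEW_IMAGE="projects.registry.vmware.com/antreainterworking/interworking-photon:0.11.1"

# Rewrite every image reference that mentions "interworking".
sed -i -E "s|image:.*interworking.*|image: ${NEW_IMAGE}|" interworking.yaml

# The count printed here should equal the number of image fields in the file.
grep -c "image: ${NEW_IMAGE}" interworking.yaml
```

Always diff or grep the result afterwards; a pattern that is too broad could touch unrelated image fields.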
3.3: Apply the yaml files to initiate the Antrea integration with NSX
Command: kubectl apply -f bootstrap-config.yaml -f interworking.yaml
The above YAMLs create the vmware-system-antrea namespace and deploy the register and interworking pods in that namespace.
3.4: Check the status of the pods
# kubectl -n vmware-system-antrea get pods
NAME                            READY   STATUS      RESTARTS   AGE
interworking-5cfb9d8868-6dbxx   4/4     Running     0          24s
register-7b6j6                  0/1     Completed   0          24s

# kubectl -n vmware-system-antrea get jobs
NAME       COMPLETIONS   DURATION   AGE
register   1/1           6s         30s
Step 4: Validate Antrea Integration in NSX
In the NSX Manager UI navigate to the Inventory > Containers > Clusters page and verify that you see your Antrea cluster listed there.
Under the Namespaces page, you should see the namespaces with CNI type Antrea being reported back to NSX Manager from the Kubernetes cluster.
Nodes can be seen in a similar way under the System > Fabric > Nodes > Container Clusters page. This confirms successful inventory polling by NSX Manager via the Antrea NSX adapter installed in the Kubernetes cluster.
And that’s it for this post. In the next part of this blog series, I will showcase how to use the advanced features of Antrea and how to enforce network policies.
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.