Since its introduction, NSX-T has gained a lot of momentum in just a couple of years and can easily be considered VMware’s next-generation product for multi-hypervisor environments, container deployments, and native workloads running in public cloud environments. NSX-T truly provides a scalable network virtualization and micro-segmentation platform.
This blog series is focused more on the implementation of NSX-T than on theoretical concepts. If you are new to NSX-T, I would highly recommend reading VMware’s official documentation first.
The first post of this series is focused on deploying the NSX-T Managers, which form the management and control planes, so it’s a good idea to have an understanding of the NSX-T architecture before going ahead.
NSX-T Manager can be deployed in the following form factors: Small (suitable for lab/PoC deployments), Medium, and Large, each differing in vCPU and memory allocation.
Note: The current version of NSX-T is 3.0.1 and can be downloaded from here.
In my lab I have a 4-node vSAN cluster running vSphere 7. All my hosts are equipped with 2 x 10 GbE physical NICs. I have also configured a distributed switch with the necessary port groups created on it.
NSX-T Manager Deployment
NSX-T Manager is available in OVA format and the deployment is pretty straightforward, like any other VMware product, so I am not covering the OVA deployment steps here.
Since NSX-T 2.4, the control plane has been merged with the management plane, so when you deploy the NSX-T Managers, you are forming both the management plane and the control plane.
Once the first NSX Manager has been deployed and booted up, connect to the appliance by browsing to https://<nsx-t-fqdn>/ and sign in with the admin credentials.
After accepting the EULA, you will see an informative message about deploying additional manager nodes to form a 3-node cluster. For lab/PoC environments, a 1-node cluster works just fine, but in production environments a 3-node cluster is recommended.
In my lab I am running a 3-node cluster, as I try to keep my lab identical (as much as I can) to my production environment.
We can deploy the additional nodes from the NSX UI, or deploy them directly from the OVA (like the first manager node). If you go the OVA route, you have to manually join the additional nodes to the first NSX Manager node via the CLI (a rough sketch of this is shown below).
When we deploy from the UI, NSX Manager takes care of this automatically.
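For reference, the manual join looks roughly like the commands below. The IP address, cluster ID, password, and thumbprint are placeholders for values you would gather from your own first manager node, and the exact syntax can vary slightly between NSX-T versions.

On the first manager node, collect the cluster ID and API certificate thumbprint:

    get cluster config
    get certificate api thumbprint

Then, on each additional manager node, join it to the cluster:

    join <first-manager-ip> cluster-id <cluster-id> username admin password <admin-password> thumbprint <api-thumbprint>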
Before deploying the additional manager nodes, we need to register a compute manager. To add a compute manager, navigate to Home > System > Fabric > Compute Managers and click Add.
Provide a name for the compute manager and punch in your vCenter FQDN and credentials to register vCenter with NSX Manager.
Make sure the Enable Trust option is turned on.
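If you prefer automation, the same registration can be done through the NSX-T REST API. The sketch below is only illustrative; the FQDNs, credentials, and vCenter SHA-256 thumbprint are placeholders you would substitute with your own values, and the Enable Trust toggle has its own API fields that I have left out here.

    curl -k -u 'admin:<nsx-admin-password>' \
      -H 'Content-Type: application/json' \
      -X POST 'https://<nsx-t-fqdn>/api/v1/fabric/compute-managers' \
      -d '{
            "server": "<vcenter-fqdn>",
            "origin_type": "vCenter",
            "credential": {
              "credential_type": "UsernamePasswordLoginCredential",
              "username": "administrator@vsphere.local",
              "password": "<vcenter-password>",
              "thumbprint": "<vcenter-sha256-thumbprint>"
            }
          }'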
Once the compute manager is registered, proceed with deploying the additional nodes by navigating to Home > System > Appliances and clicking the ‘Add NSX Appliance’ option.
Provide the hostname/IP address etc. of the second appliance and choose the appropriate form factor for your infrastructure.
Select cluster/network/datastore for the new appliance node and hit next.
Punch in passwords for the admin, root & audit users, and optionally enable SSH access to the appliance.
Repeat the process for the 3rd node, then wait for the deployment to complete and the cluster to stabilize.
The next task is to set a VIP for the NSX-T Manager cluster. Click on the Set Virtual IP option to specify the VIP address.
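If you want to script this step instead, the cluster VIP can also be set with a single API call against any of the manager nodes; the FQDN and VIP below are placeholders:

    curl -k -u 'admin:<nsx-admin-password>' \
      -X POST 'https://<nsx-manager-fqdn>/api/v1/cluster/api-virtual-ip?action=SET_VIRTUAL_IP&ip_address=<vip-ip-address>'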
Log out of the first NSX Manager and connect to NSX via its VIP to ensure it’s working.
In my lab, I have a DNS record sddc-nsxt mapped to the IP address I used for the VIP configuration, and as you can see in the screenshot below, I have connected to the VIP address instead of connecting directly to one of the NSX-T Manager nodes.
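A quick sanity check from a workstation would look something like the commands below. The FQDN is just my lab’s DNS record, so substitute your own; the node version endpoint is simply one convenient authenticated call to prove the VIP is answering API requests.

    nslookup sddc-nsxt.<your-domain>
    curl -k -u admin 'https://sddc-nsxt.<your-domain>/api/v1/node/version'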
Verifying NSX-T Cluster Health via CLI
Log in to any of the NSX-T Manager nodes via SSH and run the command: get managers
This command lists all NSX-T Manager nodes that are part of the cluster.
    sddc-nsxt01> get managers
    - 172.16.10.33   Connected (NSX-RPC)
    - 172.16.10.31   Standby   (NSX-RPC)
    - 172.16.10.32   Connected (NSX-RPC) *
The command “get cluster status” shows the overall cluster status as well as the status of the individual nodes.
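The same health information is exposed over the REST API, which is handy for monitoring scripts. A sketch pointed at the VIP (placeholders again) would be the call below; the JSON response should include the management cluster status along with per-node details, so you can alert on anything other than a stable state.

    curl -k -u 'admin:<nsx-admin-password>' 'https://<nsx-vip-fqdn>/api/v1/cluster/status'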
And that’s it for this post.
In the next post of this series, we will learn about setting up profiles, transport zones, etc.
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂