NSX 4.2 Multitenancy Series – Part 2: Design Models

Welcome to Part 2 of the NSX multi-tenancy series. While Part 1 focused on the NSX multi-tenancy solution and its architecture, Part 2 walks you through the multi-tenancy design models.

After a project is created and its RBAC policies have been applied, the project admin creates the tier-1 gateways and segments for their workloads. Since tier-0 gateways and edge clusters cannot be created inside a project, the Enterprise Admin is responsible for sharing these objects from the default space with the projects.
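As a rough sketch of what that sharing looks like at the API level, the snippet below builds a request body for the NSX Policy Project endpoint (`PUT /policy/api/v1/orgs/default/projects/<project-id>`), where the `tier_0s` and `site_infos` fields reference default-space objects. The manager FQDN, project name, tier-0 name, and edge cluster id are placeholders I made up for illustration; check the NSX 4.2 API guide for the exact schema before using it.

```python
import json

# Hypothetical names for illustration only: an NSX Manager FQDN, a project
# "tenant-a", a default-space tier-0 "T0-Gateway-01", and an edge cluster
# "edge-cluster-01". Substitute your own values.
NSX_MANAGER = "nsx-manager.lab.local"
PROJECT_ID = "tenant-a"

# Body for PUT /policy/api/v1/orgs/default/projects/<project-id>.
# The Enterprise Admin shares a tier-0 gateway and an edge cluster from
# the default space with the project via tier_0s and site_infos.
project_body = {
    "id": PROJECT_ID,
    "display_name": "Tenant-A",
    "tier_0s": ["/infra/tier-0s/T0-Gateway-01"],
    "site_infos": [
        {
            "edge_cluster_paths": [
                "/infra/sites/default/enforcement-points/default/"
                "edge-clusters/edge-cluster-01"
            ]
        }
    ],
}

url = f"https://{NSX_MANAGER}/policy/api/v1/orgs/default/projects/{PROJECT_ID}"
print(url)
print(json.dumps(project_body, indent=2))
```

Once the project exists with these references, the project admin can attach their tier-1 gateways to the shared tier-0 and place them on the shared edge cluster.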

Multi-tenancy Design Models

Two design models are available today for implementing multi-tenancy:

  1. Multi-tenancy with shared Provider (T0 / VRF) gateway.
  2. Multi-tenancy with dedicated Provider (T0 / VRF) gateway.

In both designs, the data plane is shared. Multi-tenancy with an isolated data plane is not yet supported. 

Depending on the design model, tenants leverage shared or dedicated edge clusters for the networking constructs created in a project. Let’s examine each design model in detail.

Multi-tenancy with Shared Provider Gateway

As the name implies, this model shares the edge cluster and the provider gateway across NSX projects. The VMs deployed inside a project run on transport nodes that are shared across projects. The transport nodes are part of the same overlay transport zone, and hence there is no data plane isolation between projects.

Since all transport nodes share the same transport zone, logical segments created inside any project are realized on all ESXi hosts in the NSX-prepared compute cluster. The segments are not, however, visible across projects at the management plane: each tenant sees only the segments created inside its own project. All tenant traffic is routed through the same shared provider edge nodes and physical fabric for north-south (N-S) connectivity.

Multi-tenancy with Dedicated Provider Gateway

In this model, each NSX project has a dedicated provider tier-0 or VRF gateway. Each tier-0 gateway is instantiated on a dedicated edge cluster, so this model consumes additional compute resources. In return, it allows tenant traffic to egress through a specific physical fabric (external networks/firewall).
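To give a feel for what a per-tenant gateway looks like in the Policy API, the sketch below builds the bodies for a tenant tier-0 and its locale-services child, which is the object that pins the gateway to an edge cluster. All names ("T0-Tenant-A", "edge-cluster-tenant-a") are hypothetical, and the field set is a minimal assumption, not the full schema.

```python
import json

# Hypothetical names for illustration: a per-tenant tier-0 "T0-Tenant-A"
# pinned to a dedicated edge cluster "edge-cluster-tenant-a".
# Body for PATCH /policy/api/v1/infra/tier-0s/T0-Tenant-A
tier0_body = {
    "id": "T0-Tenant-A",
    "display_name": "T0-Tenant-A",
    "ha_mode": "ACTIVE_STANDBY",
}

# The locale-services child object places the gateway on an edge cluster:
# PATCH /policy/api/v1/infra/tier-0s/T0-Tenant-A/locale-services/default
locale_services_body = {
    "id": "default",
    "edge_cluster_path": (
        "/infra/sites/default/enforcement-points/default/"
        "edge-clusters/edge-cluster-tenant-a"
    ),
}

print(json.dumps(tier0_body, indent=2))
print(json.dumps(locale_services_body, indent=2))
```

The Enterprise Admin would then share only this tier-0 (and its edge cluster) with the matching project, so that tenant's traffic egresses through its own gateway.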

The diagram below shows the conceptual topology of the dedicated-gateway design.

In this design, tenant workloads can run on a dedicated compute cluster or on a shared cluster that is part of the same default overlay transport zone. A design where compute clusters are prepared for different transport zones is not yet supported.

Tenant traffic egresses through a dedicated edge cluster for N-S connectivity. The edge clusters can establish BGP peering with the same ToRs (over separate uplink VLANs) or with ToRs hosted in a different physical zone. To control egress traffic further, the Enterprise Admin can implement per-tenant route filtering to restrict which subnets are advertised to the northbound routers.
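One way that per-tenant filtering could be expressed in the Policy API is a prefix list on the tenant tier-0, referenced as an outbound route filter on the BGP neighbor. The sketch below builds both bodies; the gateway name, neighbor address, AS number, and subnet are invented for illustration, and the field names are my best-effort reading of the API schema, so verify them against the NSX API documentation.

```python
import json

# Hypothetical objects for illustration: a prefix list that permits only
# Tenant-A's summary subnet and denies everything else.
# PATCH /policy/api/v1/infra/tier-0s/T0-Tenant-A/prefix-lists/tenant-a-advertised
prefix_list_body = {
    "id": "tenant-a-advertised",
    "prefixes": [
        {"network": "10.10.0.0/16", "action": "PERMIT"},
        {"network": "ANY", "action": "DENY"},
    ],
}

# Attach the prefix list outbound on the tenant's BGP neighbor:
# PATCH .../tier-0s/T0-Tenant-A/locale-services/default/bgp/neighbors/tor-a
bgp_neighbor_body = {
    "id": "tor-a",
    "neighbor_address": "172.16.1.1",
    "remote_as_num": "65001",
    "route_filtering": [
        {
            "address_family": "IPV4",
            "out_route_filters": [
                "/infra/tier-0s/T0-Tenant-A/prefix-lists/tenant-a-advertised"
            ],
        }
    ],
}

print(json.dumps(prefix_list_body, indent=2))
print(json.dumps(bgp_neighbor_body, indent=2))
```

With a filter like this in place, the ToR only learns the tenant's summary prefix, regardless of what segments the project admin creates behind the gateway.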

And that’s it for this post. In the next post of this series, I will walk through the steps of creating projects and networking constructs within projects. Stay tuned!!!

References

This post is inspired by the original post by a good friend of mine. You can visit his blog vxplanet.com for stellar NSX content.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.