NSX 4.2 Multitenancy Series – Part 4: NSX Projects Distributed Security

In the last post of this series, I discussed how to create NSX Projects and apply RBAC and Quota policies to them. I also briefly touched on the project security (DFW) aspect.

In this post, I will discuss the project’s security in greater detail. 

If you are not following along, you can read the earlier parts of this series from the links below:

1: NSX Multitenancy Introduction

2: Multitenancy Design Models

3: Creating NSX Projects

One of the key purposes of the Project feature is to allow security policy management to be delegated to the project admin, which avoids the risk of rules being applied to the wrong virtual machines.

When an NSX project is created by the Enterprise Admin, the system generates default Distributed & Gateway firewall rules to regulate the default behavior of east-west and north-south traffic for the VMs in the NSX project. The firewall rules in a project apply only to the VMs created in that project and don't impact VMs created in other projects or the default space.
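To see what the system created, you can read a project's firewall policies through the project-scoped policy API. The sketch below is illustrative only: the manager FQDN and project ID are placeholders, and the exact path may differ across NSX versions.

    # List the DFW security policies scoped to a project (all names are placeholders)
    curl -k -u admin \
      'https://nsx-mgr.lab.local/policy/api/v1/orgs/default/projects/project-finance/infra/domains/default/security-policies'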

NSX 4.2 Multitenancy Series – Part 3: NSX Projects

Welcome to part 3 of the NSX Multi-tenancy series. Parts 1 & 2 of this series were theoretical, focusing mainly on multi-tenancy architecture and design models. This part is hands-on, covering topics such as creating NSX Projects, applying RBAC policies, and creating networking constructs within the project.

If you are not following along, you can read the earlier parts of this series from the links below:

1: NSX Multitenancy Introduction

2: Multitenancy Design Models

Environment Details

This is a fully deployed NSX environment based on the design documented here, so I am not covering the implementation details again.

The lab is deployed per the BOM below:

Component         Version
VMware ESXi       8.0 Update 3d
vCenter Server    8.0 Update 3d
vSAN              8.0 Update 3d
VMware NSX        4.2.1.2
Backbone Router   VyOS 1.5
AD + DNS Server   Windows Server 2022

I am using the multi-tenancy with shared provider tier-0 gateway design for this post, where tenants will share the tier-0 gateway and the edge clusters from the provider.

NSX 4.2 Multitenancy Series – Part 2: Design Models

Welcome to part 2 of the NSX Multi-tenancy series. While part 1 focused on the NSX Multi-tenancy solution and its architecture, part 2 will walk you through the multi-tenancy design models.

After the Project is created and the RBAC policies have been applied, the project admin creates the tier-1 gateways and segments for their workloads. Since the tier-0 gateway and edge clusters cannot be created inside a project, the Enterprise Admin is responsible for sharing these objects from the default space with the projects.
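For illustration, here is a hedged sketch of how an Enterprise Admin might share a default-space Tier-0 with a project using the policy Shares API. All IDs, paths, and the manager FQDN are placeholders, and the exact payload shape may differ in your NSX version.

    # Create a share targeted at the project (all names are placeholders)
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/shares/share-for-project-finance' \
      -d '{ "shared_with": [ "/orgs/default/projects/project-finance" ] }'

    # Add the Tier-0 gateway as a shared resource under that share
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/shares/share-for-project-finance/resources/t0-resource' \
      -d '{ "resource_object_paths": [ "/infra/tier-0s/T0-GW" ] }'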

Multi-tenancy Design Models

As of today, there are two design models for implementing multi-tenancy:

  1. Multi-tenancy with shared Provider (T0 / VRF) gateway.
  2. Multi-tenancy with dedicated Provider (T0 / VRF) gateway.

In both designs, the data plane is shared. Multi-tenancy with an isolated data plane is not yet supported. 

Based on the design models discussed above, tenants can leverage shared or dedicated edge clusters to create networking constructs in a project.

NSX 4.2 Multitenancy Series – Part 1: Introduction

What is Multi-tenancy?

Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers, each called a tenant. A multi-tenant architecture allows several tenants to operate in a shared environment; each tenant is physically integrated but logically independent. This means that a single instance of the software runs on one server and serves multiple tenants.

Multi-tenancy in VMware NSX

Although multi-tenancy was first introduced in NSX 4.0.1, it has been available at the data plane layer for several years. VMware NSX uses a multi-tiered routing model with logical separation between the different gateways (tier-0 & tier-1) to provide network connectivity, and these gateways, when deployed per a specific design, isolate user applications from each other.

For example, VRF allows a separate routing domain per tenant, and deploying a Tier-1 gateway per application/environment (dev/test/prod) achieves segmentation.

How to Force Delete a Stale Logical Segment in NSX 3.x

I recently ran into a problem when disabling NSX in my lab: I couldn't remove a logical segment. This logical segment was previously attached to NSX Edge virtual machines, and it still had a stale port even after the Edge virtual machines were deleted.

Any attempt to delete the segment through the UI/API resulted in the error “Segment has 1 VMs or VIFs attached. Disconnect all VMs and VIFs before deleting a segment.”

GUI Error

API Error

So how do you delete the stale segment in this case? The answer is through the API, but before deleting the segment, you must delete the stale VIFs.

Follow the procedure given below to delete stale VIFs.

1: List Logical Ports associated with the Stale Segment

This API call will list all segments. You can pass the segment UUID in the above command to list a specific (stale) segment.
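For reference, a minimal sketch of what these calls could look like with curl against the policy API (the manager FQDN and segment/port IDs are placeholders; adjust paths for your version):

    # List all segments, or append a segment ID to fetch only the stale one
    curl -k -u admin 'https://nsx-mgr.lab.local/policy/api/v1/infra/segments'

    # List the logical ports (VIFs) attached to the stale segment
    curl -k -u admin 'https://nsx-mgr.lab.local/policy/api/v1/infra/segments/stale-seg/ports'

    # Delete the stale port, then delete the segment itself
    curl -k -u admin -X DELETE 'https://nsx-mgr.lab.local/policy/api/v1/infra/segments/stale-seg/ports/<port-id>'
    curl -k -u admin -X DELETE 'https://nsx-mgr.lab.local/policy/api/v1/infra/segments/stale-seg'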

Quick Tip: Configure NSX Manager GUI Idle Timeout

The NSX Manager web UI has a default timeout of 1800 seconds, i.e., the NSX Manager UI will time out after 30 minutes of inactivity. This timeout is reasonable for a production deployment, but since security is not a concern in a lab, you might want to set it a little higher. On top of that, it is annoying to get kicked out of the UI after 30 minutes of an idle session.

In this post, I will demonstrate how you can change the timeout value.

Run the command get service http to see the currently configured value:

Run the command set service http session-timeout 0 to fully remove the session timeout.
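Putting the two commands together, a minimal sketch of the session from the NSX Manager CLI (the prompt is a placeholder; the timeout is in seconds, so 7200 below is an assumed 2-hour example, and 0 disables the timeout entirely):

    nsx-mgr> get service http
    nsx-mgr> set service http session-timeout 7200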

NSX-T Routing With OSPF

Introduction

NSX-T 3.1.1 introduced support for the OSPFv2 routing protocol on Tier-0 gateways. This was one of the most awaited features for some time, and it removes one of the major hindrances that was stopping customers from migrating to NSX-T.

Many customers are still running NSX-V with OSPF as the routing protocol in their infrastructure. Now that NSX-T supports OSPF, customers can do a greenfield deployment of NSX-T and switch workloads from NSX-V to NSX-T using the L2 bridge, without many changes to their physical network.

Since this feature is pretty new, it will be interesting to see how soon customers adopt it in their environments.

Disclaimer: This post is inspired by an original blog post written by Peter Milchov.

Before jumping into the lab, let’s revisit some important facts associated with OSPF support.

  • NSX-T 3.1.1 supports OSPFv2 only.
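For a feel of the configuration, here is a hedged sketch of enabling OSPF on a Tier-0 through the policy API. The gateway ID, locale-services ID, manager FQDN, and field values are placeholders and may differ in your environment.

    # Enable OSPF on the Tier-0's locale-services (all IDs are placeholders)
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/ospf' \
      -d '{ "enabled": true, "ecmp": true, "graceful_restart_mode": "HELPER_ONLY" }'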

Layer 2 Bridging With NSX-T

In this post, I will be talking about the Layer 2 bridging functionality of NSX-T and discussing use cases and design considerations when planning to implement this feature. Let's get started.

NSX-T supports L2 bridging between overlay logical segments and VLAN-backed networks. When workloads connected to an NSX-T overlay segment require L2 connectivity to VLAN-connected workloads or need to reach a physical device (such as a physical gateway, load balancer, or firewall), an NSX-T Layer 2 bridge can be leveraged.
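As a rough illustration, bridging is configured by attaching an edge bridge profile to an overlay segment. The sketch below assumes a pre-created bridge profile and VLAN transport zone; all paths, IDs, and the VLAN number are placeholders, and the required fields may vary by release.

    # Bridge the overlay segment 'app-seg' to VLAN 100 (all paths are placeholders)
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/segments/app-seg' \
      -d '{ "bridge_profiles": [ {
              "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/bridge-profile-1",
              "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/vlan-tz",
              "vlan_ids": [ "100" ] } ] }'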

Use Cases of Layer 2 Bridging

A few of the use cases that come to mind are:

1: Migrating workloads connected to VLAN port groups to NSX-T overlay segments: Customers looking to migrate workloads from legacy vSphere infrastructure to the SDDC can leverage Layer 2 bridging to move their workloads seamlessly.

Some of the challenges associated with migrations are re-IPing workloads, migrating firewall rules, NAT rules, etc.

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and bridged the gap in VCD and NSX-T integration. This release of VCD adds the following NSX-T enhancements:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so they can talk to the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: multiple tenant edge gateways can use the same external network.
  • Dedicated: a one-to-one relationship between the external network and an NSX-T edge gateway; no other edge gateway can connect to that external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as route advertisement management and BGP configuration.
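For context on the NSX-T side, a VRF gateway is carved out of a parent Tier-0. Below is a hedged sketch of creating one via the policy API; the gateway names and the manager FQDN are placeholders.

    # Create a VRF gateway linked to the parent Tier-0 (names are placeholders)
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/tier-0s/tenant1-vrf' \
      -d '{ "display_name": "tenant1-vrf",
            "vrf_config": { "tier0_path": "/infra/tier-0s/provider-t0" } }'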

Getting Started With NSX ALB: Part-3-NSX-T Integration

In the previous post of this series, I discussed Avi Controller deployment and basic configuration. It's time to integrate NSX-T with NSX ALB. The high-level steps of NSX-T integration can be summarized as follows:

  • Create a Content Library in vCenter.
  • Deploy a Tier-1 gateway for Avi management.
  • Create logical segments in NSX-T for Avi SE VMs.
  • Create credentials for NSX-T and vCenter in Avi.
  • Register NSX-T with the Avi Controller.
  • Create an IPAM profile.

Let’s get started.

Create a Content Library in vCenter

Deployment of Avi Service Engine VMs is done automatically by the Avi Controller when we create a virtual service. For this to work, an empty content library needs to be created in the vCenter Server, as the controller pushes the Avi SE OVA into the content library and then deploys the SE VMs.
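If you prefer automating this step, here is a hedged sketch using the vSphere Automation REST API; the vCenter FQDN, credentials, library name, and datastore ID are placeholders, and the endpoint shape may differ across vSphere versions.

    # Obtain an API session token
    curl -k -u 'administrator@vsphere.local' -X POST 'https://vcenter.lab.local/api/session'

    # Create an empty local content library for the Avi SE OVA (IDs are placeholders)
    curl -k -X POST -H 'vmware-api-session-id: <token>' -H 'Content-Type: application/json' \
      'https://vcenter.lab.local/api/content/local-library' \
      -d '{ "name": "avi-se-library",
            "storage_backings": [ { "type": "DATASTORE", "datastore_id": "datastore-10" } ] }'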

Deploy Tier-1 gateway for Avi Management

You can use an existing Tier-1 gateway or deploy a new (dedicated) one for Avi management.
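If you go the dedicated route, a minimal sketch of creating the Tier-1 via the policy API (the gateway IDs and manager FQDN are placeholders):

    # Create a dedicated Tier-1 for Avi management, linked to an existing Tier-0
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      'https://nsx-mgr.lab.local/policy/api/v1/infra/tier-1s/avi-mgmt-t1' \
      -d '{ "display_name": "avi-mgmt-t1", "tier0_path": "/infra/tier-0s/T0-GW" }'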