Marking User Created Transport Zone as Default TZ in NSX-T

In a freshly deployed NSX-T environment, you will find two default transport zones already created:

  • nsx-overlay-transportzone
  • nsx-vlan-transportzone

These are system-created TZs and are therefore marked as default.

You can consume these TZs or create new ones to suit your infrastructure. Default TZs can’t be deleted.

A newly created transport zone doesn’t show up as default. Also, when creating a new TZ via the UI, we don’t get any option to enable this flag; as of now, this is possible only via the API.

Creating a new transport zone with the “is_default” flag set to true works as intended: the flag is removed from the system-created TZ and the newly created TZ is marked as the default.

Note: We can also modify a TZ and mark it as default after creation.

Let’s have a look at the properties of a system-created transport zone.

To set a manually created TZ as the default, we have to remove the “is_default” flag from the system-created TZ and then create a new TZ (or modify an existing one) with this flag, as sketched in the example below.
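As an illustration of the API call, a new TZ carrying the flag can be created against the NSX-T Manager API roughly as follows. This is a hedged sketch: the manager address, credentials, and display name are placeholders, the exact payload fields can vary between NSX-T versions, and the flag on the system-created TZ has to be cleared first.

# curl -k -u 'admin:<password>' -X POST https://<nsx-manager>/api/v1/transport-zones \
    -H 'Content-Type: application/json' \
    -d '{
          "display_name": "custom-overlay-tz",
          "transport_type": "OVERLAY",
          "is_default": true
        }'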

AVN Based Bringup Without BGP Support in VCF

Application Virtual Networks (AVNs) were first introduced in VCF 3.9.1. AVNs are software-defined overlay networks that span across clusters and traverse NSX-T Edge gateways for their north-south (ingress and egress) traffic.

One of the requirements for an AVN-enabled SDDC bringup was to configure BGP on the NSX-T edges. In a production environment, BGP routing is not an issue, but there are situations (lab/POC) where you don’t have BGP support available, and that can be a hindrance to implementing and testing AVN.

In this post I propose a workaround that you can implement in your lab to test this feature: to perform an AVN-based SDDC bringup, we can leverage static routes instead of BGP. Below are the high-level steps for doing so.

Step 1: Download the VCF configuration workbook and fill in all the details. In the Deploy Parameters tab of the spreadsheet, fill the BGP-specific fields with some dummy data.
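To give an idea of the static-route side of the workaround, a default route can be added on the Tier-0 gateway through the Policy API roughly as shown below. This is a hedged sketch: the gateway ID, route ID, and next-hop IP are placeholders, and matching return routes for the AVN subnets would also be needed on the upstream physical router.

# curl -k -u 'admin:<password>' -X PATCH \
    https://<nsx-manager>/policy/api/v1/infra/tier-0s/<t0-gateway-id>/static-routes/default-route \
    -H 'Content-Type: application/json' \
    -d '{
          "network": "0.0.0.0/0",
          "next_hops": [ { "ip_address": "<upstream-router-ip>", "admin_distance": 1 } ]
        }'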

TKG Series-Part 4: Deploy Sample Workload/Application

In the last post I completed the workload cluster deployment, and the deployed cluster is now ready to be consumed. In this post I will show how we can deploy a sample application/workload in the newly provisioned Kubernetes cluster.

If you have missed the earlier posts of this series, you can read them from the links below:

1: TKG Infrastructure Setup

2: TKG Management Cluster Deployment

3: TKG Workload Cluster Deployment

To deploy any application in the Kubernetes cluster, we first have to connect to the workload cluster context.

The command below shows that I am currently connected to the cluster “mj-tkgc01”, which is my workload cluster.

Note: We can use the command kubectl config use-context <cluster-context-name> to switch between clusters, as sketched below.
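As a minimal sketch of what this looks like: the context name follows the usual TKG <cluster>-admin@<cluster> pattern, and the nginx workload here is purely illustrative, not necessarily the application deployed in the post.

# kubectl config current-context
# kubectl config use-context mj-tkgc01-admin@mj-tkgc01
# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get pods,svc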

TKG Series-Part 3: Deploy TKG Workload (Kubernetes) Cluster

In the last post of this series, I covered the TKG management cluster setup. In this post we will learn how to deploy a TKG workload/compute/Kubernetes cluster.

If you have missed the earlier posts of this series, you can read them from the links below:

1: TKG Infrastructure Setup

2: TKG Management Cluster Deployment

Before attempting to deploy a TKG workload cluster, ensure you are connected to the management cluster context. If you are not already connected, you can use the command kubectl config use-context <mgmt-cluster-context-name> to switch to it.

Step 1: Create a new namespace

An example JSON file located on the official Kubernetes website is what I used to create a test namespace. You can create your own JSON if you want.

# kubectl create -f https://kubernetes.io/examples/admin/namespace-dev.json
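For reference, the linked file defines a namespace named development; a minimal equivalent manifest looks roughly like this (a sketch, not a verbatim copy of the file):

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development",
    "labels": { "name": "development" }
  }
}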

Step 2: Create a yaml file for the workload cluster deployment.

# tkg config cluster mj-tkgc01 --plan dev --controlplane-machine-count 1 --worker-machine-count 3 --namespace development --controlplane-size small --worker-size small > dev.yaml
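What comes after generating dev.yaml depends on the TKG release, but as a hedged sketch of one typical workflow: confirm the namespace from Step 1 exists, review the generated manifest, apply it while connected to the management cluster context, and then watch the Cluster API cluster object come up.

# kubectl get namespace development
# kubectl apply -f dev.yaml
# kubectl get clusters -n development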

TKG Series-Part 2: Deploy TKG Management Cluster

In the first post of this series, I discussed the prerequisites that you should meet before attempting to install TKG on vSphere. In this post I will demonstrate the TKG management cluster deployment. We can create the TKG management cluster using either the UI or the CLI; since I am a fan of the CLI method, that is what I used in my deployment.

Step 1: Prepare config.yaml file

When the TKG CLI is installed and the tkg command is invoked for the first time, it creates a hidden folder .tkg under the user’s home directory. This folder contains the config.yaml file which TKG leverages to deploy the management cluster.

The default yaml file doesn’t have infrastructure details such as the vCenter address, vCenter credentials, IP addresses, etc. to be used in the TKG deployment. We have to populate the infrastructure details in the config.yaml file manually.

Below is the yaml file which I used in my environment.

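As an illustration only (not the author's actual file), the vSphere-related entries in config.yaml typically look something like the following; variable names can differ slightly between TKG releases, and every value below is a placeholder.

VSPHERE_SERVER: <vcenter-fqdn>
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: <password>
VSPHERE_DATACENTER: /<datacenter-name>
VSPHERE_DATASTORE: /<datacenter-name>/datastore/<datastore-name>
VSPHERE_NETWORK: <dhcp-enabled-portgroup>
VSPHERE_RESOURCE_POOL: /<datacenter-name>/host/<cluster-name>/Resources/<tkg-resource-pool>
VSPHERE_FOLDER: /<datacenter-name>/vm/<tkg-vm-folder>
VSPHERE_SSH_AUTHORIZED_KEY: <public-ssh-key-of-bootstrap-user>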

TKG Series-Part 1: Lab Setup

This blog series will cover how to deploy Tanzu Kubernetes Grid on vSphere and get your management and workload clusters provisioned. In part 1 of this series, I will cover the prerequisites that need to be met before attempting to install & configure TKG on vSphere.

Before you start with TKG deployment, make yourself familiar with the components that make up the TKG cluster. 

Hardware & Software Requirements

  • A vSphere environment with vSphere 6.7 U3 or 7.0 installed.
  • A dedicated resource pool to accommodate TKG Management & Workload cluster components.
  • A VM folder where TKG VMs will be provisioned.
  • One DHCP-enabled network segment. TKG VMs get their IPs from the DHCP pool.
  • TKG OVAs & the TKG CLI downloaded from here
  • A Linux VM (Ubuntu preferred) created in vSphere with Docker & kubectl installed. This VM acts as the bootstrap VM on which we will install the TKG CLI and other dependencies (a quick sanity check is sketched after this list).
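A quick sanity check on the bootstrap VM before proceeding might look like this (a sketch; the tkg binary is only present once the TKG CLI has been installed):

# docker --version
# kubectl version --client
# tkg version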

My Lab Walkthrough

  • I have a 4-node vSphere cluster with vSphere 6.7 U3 installed, licensed with an Enterprise Plus license.
  • Build numbers used for ESXi & vCenter are 16316930 & 16616668 respectively.

NSX-T Multi-Tier North-South Packet Walk

In my last post, I explained the egress/ingress packet flow in a single-tier routing topology where logical segments are attached directly to the T0 gateway.

In this article I will explain the same for a multi-tier routing topology in NSX-T.

Here is the topology which I have used in my lab.

NS-Multi-Tier-Routing-Topology

Egress to Physical Network

Scenario: VM 1 with IP 192.168.10.2 is connected to the logical segment App-LS and wants to communicate with a VM with IP 10.196.88.2 which sits out on the physical network.

Step 1: VM 1 sends the packet to its default gateway (192.168.10.1), which is a LIF IP on the T1-DR.

NS-Packet-Walk09

Step 2: The T1 DR checks its forwarding table to make a routing decision. Since a route to network 10.196.88.x doesn’t exist in the forwarding table, the T1-DR sends the packet to its default gateway (100.64.0.0), which is the DR instance of the Tier-0 on the same hypervisor.

T1-DR-FWD-Table
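For reference, the forwarding table that the T1 DR consults in Step 2 can be inspected from the transport node's NSX CLI; a hedged sketch follows (the UUID is a placeholder and exact syntax can vary between NSX-T releases):

# nsxcli
> get logical-routers
> get logical-router <t1-dr-uuid> forwarding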

Step 3: The packet is sent to the T0 DR instance over the internal segment (Router-Link).

NSX-T Single-Tier North-South Packet Walk

In the last post of the NSX-T series, I demonstrated East-West packet flow and discussed various cases around it. In this post I will explain how packets are forwarded for northbound/southbound traffic.

Before you start reading this article, please ensure you have a fair understanding of the NSX-T routing architecture and how the SR & DR components of a logical router work together. Knowledge of TEP/MAC/ARP table formation is also handy when tracing packet flow in a lab or production environment.

Here is the lab topology that I am going to use to demonstrate the N-S packet walk.

Note: The topology below is a single-tier routing topology.

N-S Topology

Egress to Physical Network

Here is how a packet traverses when VM 1, which is on the App-LS logical segment, tries to communicate with VM 2, which sits out on the physical network.

Step 1: VM 1 sends a layer 2 frame to its default gateway (192.168.10.1), which is a LIF on the DR component on the hypervisor node. A quick guest-side check is sketched below.
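To see Step 1 from the guest's point of view, a quick check on a Linux VM would look roughly like this (a sketch; the MAC returned for the gateway is normally the NSX-T distributed router virtual MAC, e.g. 02:50:56:56:44:52, unless it has been customized):

# ip route show
# ip neigh show 192.168.10.1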

NSX-T East-West Routing Packet Walk-Part 1

In my last post of the NSX-T series, I explained how the VTEP, MAC & ARP tables are constructed. This knowledge is needed to understand packet flow.

In this post, I will demonstrate how packet forwarding is performed for East-West traffic.  

NSX-T has an inbuilt tool called Traceflow, which is very handy when analyzing packet flow within/across segments. This tool is located under Plan & Troubleshoot > Troubleshooting Tools > Traceflow in the NSX-T UI.

This tool is very easy to use: you just need to select the source VM & destination VM and click Trace to start the packet flow analysis.

EW01

There are two deployment models available for T1 gateways: we can instantiate the T1 gateway on an edge cluster, or we can choose not to associate it with any edge cluster. If we need stateful services on the T1 gateway, we go with the first deployment model.

In part 1 of this post, I will demonstrate the packet walk when the T1 is associated with an edge cluster.

TEP, MAC & ARP Table Formation in NSX-T

In this post I will explain how NSX-T creates and maintains the various tables which form the building blocks of logical switching. Basically, I will discuss the formation of the below tables:

  • VTEP Table
  • MAC Table
  • ARP Table

These tables are continuously updated and modified as we provision new workloads and create new segments. 

VTEP Table

This table holds the VNI-to-TEP-IP mapping. A couple of points before we start:

  • Each segment has a unique identifier called VNI. 
  • Each transport node in that TZ will have a TEP IP. 

Let’s understand TEP table creation with the help of the below diagram.

Step 1: As soon as a segment is created in a TZ, every transport node of that TZ updates its local TEP table and registers the VNI of the created segment against its TEP IP. Each transport node then sends this info to the Local Control Plane (LCP).

VNI_TEP-01

Note: The VTEP table can be viewed by logging into the ESXi host and running the command: get logical-switch <ls-uuid> vtep-table

TEP Table

Step 2: Each transport node then sends its VNI-TEP entry from its LCP to the CCP (running on the NSX-T Manager).
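Along the same lines as the vtep-table command noted earlier, the companion MAC and ARP tables can also be checked from the transport node's NSX CLI; a hedged sketch (the UUID is a placeholder and subcommand names can vary slightly between NSX-T releases):

# nsxcli
> get logical-switches
> get logical-switch <ls-uuid> mac-table
> get logical-switch <ls-uuid> arp-table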