VMware Hybrid Cloud Extension (HCX) delivers secure and seamless app mobility and infrastructure hybridity across vSphere 5.0+ versions, on-premises and in the cloud. HCX can be considered a migration tool that abstracts both on-premises and cloud resources and presents them as a single set of resources that applications can consume. Using HCX, VMs can be migrated bi-directionally from one location to another with (almost) total disregard for vSphere versions, virtual networks, connectivity, etc.
One of the coolest features of HCX is the layer-2 extension capability, and that is what this post is about.
HCX Network Extension
HCX’s Network Extension service provides secure layer-2 extension (VLAN, VXLAN, and Geneve) for vSphere or third-party distributed switches, allowing virtual machines to retain their IP and MAC addresses during migration. This capability is delivered through the HCX Network Extension appliance, which is deployed during Service Mesh creation. … Read More
In the last post, I covered the steps for configuring VRF gateways and attaching a Tier-1 gateway to a VRF. In this post, I am going to test my configuration to ensure things are working as expected.
The following configuration was done in vSphere prior to VRF validation:
A Tenant-A VM is deployed, connected to segment ‘Tenant-A-App-LS’, and has IP 172.16.70.2
A Tenant-B VM is deployed, connected to segment ‘Tenant-B-App-LS’, and has IP 172.16.80.2
Connectivity Test
To test connectivity, I first picked the Tenant-A VM and performed the following tests:
A: Pinged the default gateway and got a response.
B: Pinged the default gateway of the Tenant-B segment and got a response.
C: Pinged the Tenant-B VM and got a response.
D: Pinged a server on the physical network and got a response.
I performed the same set of tests from the Tenant-B VM, and all of them passed. A minimal sketch of these checks is shown below.
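For illustration, the checks from the Tenant-A VM boil down to a few pings. The gateway addresses and the physical-network server IP below are assumptions for a lab like this one, not values taken from the post:

ping -c 4 172.16.70.1     # A: default gateway of the Tenant-A segment (assumed to be .1)
ping -c 4 172.16.80.1     # B: default gateway of the Tenant-B segment (assumed to be .1)
ping -c 4 172.16.80.2     # C: Tenant-B VM
ping -c 4 192.168.10.10   # D: example server on the physical network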
Traceflow
Traceflow is another way of testing connectivity between VMs. Below are my Traceflow results for the two VMs:
Here is the topology diagram created by NSX-T showing the path taken by a packet from the Tenant-A-App01 VM to the Tenant-B-App01 VM. … Read More
NSX-T provides true multi-tenancy capabilities to an SDDC/cloud infrastructure, and there are various ways of achieving it depending on the use case. In the simplest deployment architecture, multi-tenancy is achieved by connecting multiple Tier-1 gateways to a Tier-0 gateway, with each Tier-1 gateway belonging to a dedicated tenant. Another way of implementing it is to have multiple Tier-0 gateways, where each tenant has a dedicated Tier-0.
Things changed with NSX-T 3.0. One of the new features introduced in NSX-T 3.0 is the VRF (virtual routing and forwarding) gateway, aka VRF Lite.
VRF Lite allows us to virtualize the routing table on a Tier-0 and provide tenant separation from a routing perspective. With VRF Lite, we can configure per-tenant data-plane isolation all the way up to the physical network without creating a Tier-0 gateway per tenant.
VRF Architecture
At a high level, the VRF architecture can be described as follows:
We have a parent Tier-0 gateway to which multiple VRF gateways connect. … Read More
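For context, a VRF gateway is modeled as a Tier-0 object that points at its parent through vrf_config. A hedged sketch of creating one with the NSX-T Policy API, where the manager address, credentials, and gateway names are all placeholders (verify the payload against the API guide for your 3.x release):

curl -k -u 'admin:<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/Tenant-A-VRF' \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "Tenant-A-VRF", "vrf_config": {"tier0_path": "/infra/tier-0s/<parent-t0-id>"}}'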
Recently, while trying to set up VRF Lite in NSX-T 3.0 in my lab, I came across a bug that prevented me from turning on BGP on a VRF gateway via the UI. This bug affects NSX-T versions 3.0.1 and 3.0.1.1. The error I was getting when trying to enable BGP was:
“Only ECMP, enabled, BGP Aggregate to be configured for VRF BGP Config”
After researching for a bit, I figured out that there is currently no resolution for this issue and that the API is the only method by which BGP can be turned on for a VRF gateway. Below are the steps for doing so.
To enable BGP, we need to take the complete API output from step 3, change “enabled”: false to “enabled”: true, and pass this output as the payload of a PATCH call. … Read More
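A hedged sketch of that sequence with curl, where the manager address, credentials, and the VRF gateway/locale-services IDs are placeholders (the locale-services ID is often 'default', but check your environment):

# Read the current BGP config of the VRF gateway and save it, e.g. to bgp.json
curl -k -u 'admin:<password>' \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/<vrf-t0-id>/locale-services/<locale-services-id>/bgp' > bgp.json

# Edit bgp.json to change "enabled": false to "enabled": true, then send it back
curl -k -u 'admin:<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/<vrf-t0-id>/locale-services/<locale-services-id>/bgp' \
  -H 'Content-Type: application/json' \
  -d @bgp.json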
In a freshly deployed NSX-T environment, you will find two default transport zones created:
nsx-overlay-transportzone
nsx-vlan-transportzone
These are system-created TZs and are therefore marked as default.
You can consume these TZs or create new ones to suit your infrastructure. The default TZs can’t be deleted.
A newly created transport zone doesn’t show up as the default. Also, when creating a new TZ via the UI, we don’t get any option to enable this flag. As of now, this is possible only via the API.
Creating a new transport zone with the “is_default” flag set to true works as intended: the “is_default” flag is removed from the system-created TZ, and the newly created TZ is marked as the default.
Note: We can also modify a TZ and mark it as default after creation.
Let’s have a look at the properties of a system-created transport zone.
To set a manually created TZ as the default, we have to remove the “is_default” flag from the system-created TZ and then create a new TZ, or modify an existing TZ, with this flag set. … Read More
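A hedged sketch of that flow with the NSX-T Manager API, where the manager address, credentials, and names/IDs are placeholders (some versions also require host_switch_name when creating a TZ, so confirm the mandatory fields against your API guide):

# Look at the properties of the system-created TZ (note the "is_default" field)
curl -k -u 'admin:<password>' 'https://<nsx-manager>/api/v1/transport-zones/<system-tz-id>'

# Create a new overlay TZ marked as the default
curl -k -u 'admin:<password>' -X POST 'https://<nsx-manager>/api/v1/transport-zones' \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "Custom-Overlay-TZ", "transport_type": "OVERLAY", "is_default": true}'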
Application Virtual Network (AVN) was first introduced in VCF 3.9.1. AVNs are software-defined overlay networks that span across a zone of clusters and traverse NSX-T Edge gateways for their north-south traffic (ingress and egress).
One of the requirements for an AVN-enabled SDDC bring-up is to configure BGP on the NSX-T Edges. In a production environment, BGP routing is not an issue, but there are situations (lab/POC) where you don’t have BGP support available, and that can be a hindrance to implementing and testing AVN.
In this post, I propose a workaround that you can implement in your lab to test this feature. To perform an AVN-based SDDC bring-up, we can leverage static routes instead of BGP. Below are the high-level steps for doing so.
Step 1: Download the VCF configuration workbook and fill in all the details. In the Deploy Parameters tab of the spreadsheet, fill the BGP-specific details with some dummy data. … Read More
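The workbook edit is only the first step; the essence of the workaround is that, once the Tier-0 is up, north-south reachability comes from static routes rather than BGP. As a hedged sketch (not part of the steps above, and with all names, IDs, and IPs as placeholders), a default static route can be added on the Tier-0 via the Policy API like this:

curl -k -u 'admin:<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/<t0-id>/static-routes/default-route' \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "default-route", "network": "0.0.0.0/0", "next_hops": [{"ip_address": "<tor-router-ip>", "admin_distance": 1}]}'

The physical router would similarly need static routes for the AVN subnets pointing back at the Tier-0 uplink IPs.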
In the last post, I completed the management cluster deployment, and that cluster is now ready to be consumed. In this post, I will show how we can provision a workload cluster and deploy a sample application/workload on the newly provisioned Kubernetes cluster.
If you have missed the earlier posts of this series, you can read them from the links below:
Before attempting to deploy a TKG workload cluster, ensure you are connected to the management cluster context. If you are not already connected, you can use the command kubectl config use-context <mgmt-cluster-context-name> to switch to it.
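For example (the context name is a placeholder; TKG-generated contexts typically look like <cluster-name>-admin@<cluster-name>):

kubectl config get-contexts                               # list the available contexts
kubectl config use-context <mgmt-cluster-context-name>    # switch to the management cluster
kubectl config current-context                            # confirm the switch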
Step 1: Create a new namespace
An example JSON file is available on the official Kubernetes website, which I used to create a test namespace. You can create your own JSON if you want.
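A minimal sketch of this step, using a namespace name that matches the rest of this walkthrough (the file name and labels are just examples):

cat > development-ns.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development",
    "labels": { "name": "development" }
  }
}
EOF

kubectl create -f development-ns.json
kubectl get namespaces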
Step 2: Create yaml file for workload cluster deployment.
# tkg config cluster mj-tkgc01 --plan dev --controlplane-machine-count 1 --worker-machine-count 3 --namespace development --controlplane-size small --worker-size small > dev.yaml … Read More
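From here, the saved manifest can be applied against the management cluster context to create the workload cluster (tkg create cluster can also create it directly without saving a manifest). A hedged sketch:

kubectl apply -f dev.yaml
kubectl get clusters -n development    # Cluster API object for the new workload cluster
kubectl get machines -n development    # control-plane and worker machines being provisioned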
In the first post of this series, I discussed the prerequisites that should be met before attempting to install TKG on vSphere. In this post, I will demonstrate the TKG management cluster deployment. We can create the TKG management cluster using either the UI or the CLI. Since I am a fan of the CLI method, I have used it in my deployment.
Step 1: Prepare config.yaml file
When the TKG CLI is installed and the tkg command is invoked for the first time, it creates a hidden folder, .tkg, under the user’s home directory. This folder contains the config.yaml file, which TKG leverages to deploy the management cluster.
The default yaml file doesn’t contain the infrastructure details, such as the vCenter address, vCenter credentials, IP addresses, etc., that are needed for the TKG deployment. We have to populate the infrastructure details in the config.yaml file manually.
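The entries to fill in look roughly like the following. All values are placeholders, and the variable names are the ones documented for TKG 1.x on vSphere, so verify them against your release:

VSPHERE_SERVER: vcenter.lab.local
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: <password>
VSPHERE_DATACENTER: /Lab-DC
VSPHERE_DATASTORE: /Lab-DC/datastore/vsanDatastore
VSPHERE_NETWORK: TKG-Network
VSPHERE_RESOURCE_POOL: /Lab-DC/host/Cluster01/Resources/TKG
VSPHERE_FOLDER: /Lab-DC/vm/TKG
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA... user@bootstrap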
Below is the yaml file which I used in my environment.
This blog series will cover how to deploy Tanzu Kubernetes Grid on vSphere and get your management and workload clusters provisioned. In part 1 of this series, I will cover the prerequisites that need to be met before attempting to install & configure TKG on vSphere.
Before you start with TKG deployment, make yourself familiar with the components that make up the TKG cluster.
Hardware & Software Requirements
A vSphere environment with vSphere 6.7 U3 or 7.0 installed.
A dedicated resource pool to accommodate TKG Management & Workload cluster components.
A VM folder where the TKG VMs will be provisioned.
One DHCP-enabled network segment; the TKG VMs get their IPs from the DHCP pool.
A Linux VM (Ubuntu preferred) created in vSphere with Docker & kubectl installed. This VM acts as the bootstrap VM on which we will install the TKG CLI and other dependencies (a quick install sketch follows this list).
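A quick, hedged sketch of preparing the Ubuntu bootstrap VM, following the standard Ubuntu and Kubernetes install docs (adjust package versions to your environment):

sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER      # log out and back in for the group change to apply

# kubectl from the upstream release binaries
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client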
My Lab Walkthrough
I have a 4-node vSphere cluster with vSphere 6.7 U3 installed, licensed with an Enterprise Plus license.
The build numbers used for ESXi & vCenter are 16316930 & 16616668, respectively.