VCF-9 – Part 10: Deploy VKS with NSX VPCs

Welcome to part 10 of the VCF-9 series. The previous post in this series discussed VPC networking in greater detail. In this post, I will demonstrate how to deploy vSphere Kubernetes Service (VKS) in an NSX VPC.

If you are not following along, I encourage you to read the earlier parts of this series from the links below:

1: VCF-9 Architecture & Deployment Models

2: VCF Installer Walk-through

3: VCF-9 Networking Models

4: NSX Edge Cluster Deployment

5: ESXi Host Commission in VCF

6: Deploying a Workload Domain

7: Deploy VCF Operations for Logs

8: VPC Creation with Centralized Networking

9: VPC Networking Deep Dive

Integrating VKS with NSX VPCs enables self-service, secure, and automated network and security consumption for Kubernetes clusters inside an NSX Virtual Private Cloud (VPC). This gives users a simplified, self-service model for managing network segments, security policies, and external connectivity for their applications, all within infrastructure guardrails predefined by the administrator. It also speeds up application deployment by abstracting complex network configuration while maintaining strong multi-tenancy and security isolation between users and workloads.

Key Aspects of VKS with NSX VPC Integration

  • Self-Service Networking: Application owners can provision and manage their network resources, such as subnets, within their allocated NSX Project and VPC, reducing reliance on the infrastructure team for these tasks.
  • Multi-Tenancy: NSX Projects and VPCs provide strong isolation for multiple tenants, ensuring that network and security policies for one tenant do not impact another.
  • Enhanced Security: Tenants can define their own distributed firewall rules and security policies at the application level, while the infrastructure team sets the overall security posture and policy guardrails.
  • Automated Network Provisioning: The integration automates the creation of network and security objects, such as segments and routers, making it quicker and more efficient to deploy applications.
  • Centralized Management: While tenants receive self-service access, administrators maintain a centralized view for monitoring, managing, and applying resource quotas to various NSX Projects and VPCs.
  • Modern Application Support: This integration is key to modern cloud-native strategies, enabling developers to deploy and manage applications on scalable, automated, and secure private cloud infrastructure built on NSX and VKS.

Requirements for VKS Deployment in a VPC

  • Management network for the supervisor cluster.
  • vSphere cluster with HA and DRS (fully automated) enabled.
  • NSX Edges deployed in at least the medium form factor; VKS can’t be deployed on small-sized edge VMs.
  • Tier-0 gateway deployed in Active/Standby HA mode.
  • Service CIDR.
  • DNS and NTP server IPs.

Note: If you are planning to deploy VKS without using VPCs, you can have the tier-0 gateway in Active/Active HA mode.

Default Project or Predefined Project?

VKS can be deployed either in the default project or in a predefined project. If you choose the latter, the project must be created from the NSX UI beforehand, along with a VPC and public/private subnets inside it. For the vSphere namespaces, you can either create additional NSX projects or reuse the project used for the supervisor cluster.

Let’s dive into the lab and see the deployment in action.

Step 1: Log in to the vCenter UI and ensure the edge cluster is deployed and an external IP block is configured for the VPC.

Step 2: (Optional) Create an NSX Project

In my lab, I am deploying VKS in a dedicated project/VPC. To create a new project, log in to the NSX Manager UI and click the Manage button in the default space drop-down menu.

Specify the project’s name and select the tier-0 gateway and edge cluster created by the provider. Select the external connection and the external IP address block for the project.

Provide a meaningful short log identifier for the project. This identifier helps in troubleshooting, as log entries are prefixed with it, which makes the logs easy to filter.
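
If you prefer to script this step, the sketch below shows what the equivalent NSX Policy API call could look like, using Python’s requests library. The manager FQDN, credentials, project ID, and object paths are placeholders from my lab, and the payload fields (tier_0s, site_infos, external_ipv4_blocks, short_id) reflect my reading of the NSX Project API, so verify them against the API reference for your NSX build.

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")        # replace with your credentials

# Minimal NSX Project payload; paths are placeholders for my lab objects.
# 'short_id' is the short log identifier discussed above.
project = {
    "display_name": "vks-project",
    "short_id": "vksprj",
    "tier_0s": ["/infra/tier-0s/vcf-t0"],
    "site_infos": [
        {
            "edge_cluster_paths": [
                "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"
            ]
        }
    ],
    "external_ipv4_blocks": ["/infra/ip-blocks/external-ip-block"],
}

resp = requests.put(
    f"{NSX_MGR}/policy/api/v1/orgs/default/projects/vks-project",
    json=project,
    auth=AUTH,
    verify=False,  # lab only; use proper certificate validation in production
)
resp.raise_for_status()
print(resp.json())
```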

Step 3: Switch to the newly created project, navigate to Networking > Transit Gateways, and create a new transit gateway.

The transit gateway inherits the HA mode of the tier-0 gateway.
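
The transit gateway can be scripted in the same way. The project-scoped transit-gateways endpoint below is an assumption on my part, so confirm it in the NSX API guide for your version.

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"  # placeholder, as in the earlier sketch
AUTH = ("admin", "VMware1!VMware1!")

# Create a transit gateway inside the project.
# Endpoint and payload are assumptions; verify against the NSX API reference.
resp = requests.put(
    f"{NSX_MGR}/policy/api/v1/orgs/default/projects/vks-project/transit-gateways/vks-tgw",
    json={"display_name": "vks-tgw"},
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```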

Step 4: Modify the VPC connectivity/service profiles to specify the VPC private transit gateway network, DNS/NTP server IPs, etc.

Navigate to the Networking > Profiles tab to modify the VPC service and connectivity profiles.

Step 5: Create NSX VPC

Navigate to the VPC Overview tab and click Add VPC.

Configure the VPC profiles and choose the private IP CIDRs for the VPC.

Click Continue to go to the subnet creation page.

Step 6: Create VPC subnets.

Click the Set button to create the subnets. You need a public and a private subnet for the VKS deployment. If you plan to deploy namespaces in their own dedicated VPCs, you also need a private transit gateway subnet.

In my lab, I created the following subnets.
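
For reference, a scripted version of steps 5 and 6 could look like the sketch below, creating the VPC and one subnet of each access mode. The field names (private_ips, access_mode, ipv4_subnet_size) and the access-mode values are assumptions based on my reading of the VPC Policy API; double-check them for your NSX version.

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"  # placeholder
AUTH = ("admin", "VMware1!VMware1!")
BASE = f"{NSX_MGR}/policy/api/v1/orgs/default/projects/vks-project"

# Create the VPC with a private CIDR (field name assumed).
vpc = {"display_name": "vks-vpc", "private_ips": ["172.16.0.0/16"]}
resp = requests.put(f"{BASE}/vpcs/vks-vpc", json=vpc, auth=AUTH, verify=False)
resp.raise_for_status()

# One subnet per access mode used in this walkthrough (values assumed).
subnets = {
    "public-subnet": {"access_mode": "Public", "ipv4_subnet_size": 32},
    "private-subnet": {"access_mode": "Private", "ipv4_subnet_size": 64},
    "tgw-subnet": {"access_mode": "Private_TGW", "ipv4_subnet_size": 32},
}
for name, cfg in subnets.items():
    cfg["display_name"] = name
    resp = requests.put(f"{BASE}/vpcs/vks-vpc/subnets/{name}", json=cfg, auth=AUTH, verify=False)
    resp.raise_for_status()
```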

Step 7: Deploy Supervisor Cluster

In the vCenter UI, select “Supervisor Management” from the main menu.

Click “Assign Content Library”.

Select the content library and click Assign.

Select the “VCF Networking with VPC” networking stack.

Select “Cluster Deployment” and provide the supervisor cluster name. Select a compatible cluster from the list and go to the next screen.

Select the storage policy for the supervisor control plane and hit Next.

Select the network mode for the management network.

The VPC public subnet is used as the supervisor’s management network. If you have not enabled DHCP on this subnet, manually fill in the IP details and hit Next.

Select the project where the supervisor will be activated and choose the VPC connectivity profile that you created earlier.

The external IP address and transit IP address blocks are auto-populated. Optionally, you can specify the private CIDRs and service CIDR.

Select the size for the supervisor control plane VM.

You can export the configuration for a quick deployment next time.

Click Finish to start the supervisor deployment.

It takes around 45 minutes for the deployment to complete. In nested labs, it can take up to 70 minutes.
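
If you’d rather poll the progress than refresh the UI, the vCenter REST API exposes the supervisor status. A minimal sketch, assuming a placeholder vCenter FQDN and lab credentials:

```python
import requests

VCENTER = "https://vcenter.lab.local"  # placeholder vCenter FQDN
AUTH = ("administrator@vsphere.local", "VMware1!")

# Create an API session, then read the supervisor status.
s = requests.Session()
s.verify = False  # lab only
s.headers["vmware-api-session-id"] = s.post(f"{VCENTER}/api/session", auth=AUTH).json()

for cluster in s.get(f"{VCENTER}/api/vcenter/namespace-management/clusters").json():
    print(cluster["cluster"], cluster["config_status"], cluster["kubernetes_status"])
```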

Click on the supervisor cluster to view/modify settings.

Step 8: Enable Namespace Service

To allow developers to create namespaces in a self-service manner, enable the Namespace Service.

Specify the resource allocation, VM class, storage policy, etc., for the namespace template.

Add the users/groups to whom you want to grant permission to create namespaces. For this to work, your identity source must be integrated with vCenter, with the users/groups created in your LDAP.

Review the settings and hit Finish to close the namespace template wizard.

To view the list of installed supervisor services, navigate to the Configure > Supervisor Services > Overview tab. The Velero and Kubernetes (K8s) services are installed out of the box.

You can install additional services using the link provided on the services page.

A couple of namespaces get created as part of the supervisor enablement.

As part of the supervisor deployment, the following objects are created in NSX.

  • A VPC named kube-system_<uuid>.
  • A VPC named vmware-system-supervisor-services-vpc_<uuid>.
  • A private subnet in each of the kube-system and supervisor-services VPCs, based on the CIDR you defined in the workload network definition.
  • An SNAT rule for egress traffic, with an IP address assigned from the external IP block of the NSX project.
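
You can confirm these objects from the API side as well, for example by listing the VPCs in the project (same assumed endpoint family as the earlier sketches):

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"  # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# List the VPCs in the supervisor's project (endpoint assumed).
resp = requests.get(
    f"{NSX_MGR}/policy/api/v1/orgs/default/projects/vks-project/vpcs",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
for vpc in resp.json().get("results", []):
    print(vpc["display_name"])
```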

Step 9: Create Namespace

Navigate to the Namespaces tab and click New Namespace. Select the vSphere cluster where the supervisor is deployed.
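
As a side note, namespace creation can also be automated through the vCenter namespaces REST API. A minimal sketch with placeholder values; called like this, without network overrides, it corresponds to the unchecked-box behavior described below (an auto-generated VPC):

```python
import requests

VCENTER = "https://vcenter.lab.local"  # placeholder
AUTH = ("administrator@vsphere.local", "VMware1!")

s = requests.Session()
s.verify = False  # lab only
s.headers["vmware-api-session-id"] = s.post(f"{VCENTER}/api/session", auth=AUTH).json()

# 'domain-c10' is a placeholder for the supervisor cluster's managed object ID.
body = {"cluster": "domain-c10", "namespace": "demo-namespace"}
resp = s.post(f"{VCENTER}/api/vcenter/namespaces/instances", json=body)
resp.raise_for_status()
```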

As discussed earlier, a vSphere namespace can be assigned either to a predefined VPC or to a VPC generated automatically during namespace creation. The auto-generated VPC lives under the default NSX project, while a predefined VPC can be hosted either in a dedicated NSX project or in the default one.

To use a predefined VPC for the namespace, select the checkbox “I would like to override the default network settings.”

If you leave the checkbox unselected, the Create Namespace workflow creates a private IP block, a VPC, a load balancer, and so on in the NSX project where the supervisor is deployed.

Auto-created IP address block in the system project.

Auto-created VPC in the system project.

Auto-created load balancer in the namespace VPC.

To use a predefined VPC for the vSphere namespace, you must create the VPC first and enable the NSX load balancer on it. The NSX UI only exposes the Avi load balancer option for a VPC; there is no UI option for enabling the NSX LB.

The NSX LB can, however, be enabled on a VPC via the API. Use an API call like the one below.
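
A minimal sketch of such a call, assuming the VPC object exposes a load_balancer_vpc_endpoint field; verify this against the NSX API reference for your version before using it:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"  # placeholder
AUTH = ("admin", "VMware1!VMware1!")

# Enable the NSX load balancer endpoint on the VPC.
# Field name assumed; verify against the NSX API reference.
vpc_url = f"{NSX_MGR}/policy/api/v1/orgs/default/projects/vks-project/vpcs/vks-vpc"
patch = {"load_balancer_vpc_endpoint": {"enabled": True}}
resp = requests.patch(vpc_url, json=patch, auth=AUTH, verify=False)
resp.raise_for_status()
```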

Refresh the NSX UI, and you should see NSX LB enabled on the VPC.

Now you can use this VPC for the namespace creation.

And that’s it for this post. In the next post of this series, I will discuss VCF Automation. Stay tuned!

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.
