Welcome to part 6 of the VCF-9 series. The previous post in this series discussed the ESXi host commissioning process. Now it's time to put those hosts into action by creating a workload domain. This post will guide you through creating a new VI workload domain.
If you are not following along, I encourage you to read the earlier parts of this series from the links below:
1: VCF-9 Architecture & Deployment Models
4: NSX Edge Cluster Deployment
5: ESXi Host Commission in VCF
A typical VCF deployment includes a management domain and one or more VI workload domains. Each VI workload domain can be configured with specific resources, network configurations, and policies to support its intended workloads. The VI workload domains are isolated from the management domain and are used for hosting business applications and providing cloud-like operations within a private data center.
Key aspects of workload domains:
- Logical Isolation: Workload domains provide a way to logically separate different types of workloads, such as management workloads, development workloads, and production workloads.
- Consistent Management: VCF automates the deployment and management of resources within a workload domain, ensuring consistency and simplifying operations.
- Flexibility and Scalability: Workload domains allow for flexible scaling and expansion of resources as needed, enabling organizations to adapt to changing business requirements.
When it comes to deploying a workload domain, you have different topologies (single rack, multi-racked, stretched, etc.), and depending upon the application needs and business use case, you deploy a supported topology. In this post, I will demonstrate the deployment mimicking a single-rack deployment.
In VCF-9, a VI workload domain can be deployed through both SDDC Manager and VCF Operations. I will demonstrate the steps in VCF Operations, as it is the way forward.
Log in to VCF Operations, navigate to the Inventory tab, and click Detailed View.
Select the VCF instance to which this workload domain will be added and click Add Workload Domain > Create New.
Review and confirm that the workload domain prerequisites have been met, and click Proceed.
Specify the workload domain name and the SSO domain name for the workload domain vCenter. I have skipped the supervisor configuration, as I will be deploying it later.
Provide the vCenter server FQDN and root user password. If your vCenter hostname exceeds 15 characters, you will get a notification message. Click Acknowledge to proceed to the next screen.
To validate the vCenter server FQDN, click “Show IP Configuration” and verify that the FQDN resolves to the correct IP address.
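As a quick sanity check before running the wizard, you can verify the hostname length yourself. The sketch below is a hypothetical helper (not part of VCF); it assumes the 15-character limit applies to the short hostname, i.e. the label before the first dot, which is what the wizard warns about.

```python
def check_vcenter_fqdn(fqdn: str, max_shortname: int = 15) -> dict:
    """Report whether the FQDN's short hostname exceeds the limit
    that triggers the wizard's acknowledgement prompt."""
    shortname = fqdn.split(".", 1)[0]
    return {
        "shortname": shortname,
        "length": len(shortname),
        "exceeds_limit": len(shortname) > max_shortname,
    }

# A 22-character shortname trips the warning; a short one does not.
report = check_vcenter_fqdn("wld01-vcenter-longname.lab.local")
print(report)  # exceeds_limit: True
```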
Provide the cluster name and click Next.
Select a cluster image and click Next. This is the image that will be installed on the ESXi hosts of the workload domain cluster.
Enter the NSX Manager details and click Next.
The standard deployment size deploys only one NSX Manager node (suitable for lab deployments). In high-availability mode, three NSX Manager nodes are deployed.
- Appliance Size: Supported values are Medium, Large, and Extra Large.
- Appliance 1 FQDN: Enter the FQDN for the first NSX Manager node.
- Appliance Cluster FQDN: Enter the NSX Manager cluster VIP FQDN.
Set the password for the root, admin, and audit users.
Select the network connectivity mode.
If you select Centralized Connectivity, you have to finish the configuration after the workload domain is created to make it VPC-ready.
Note: If you are planning to deploy the supervisor services, choose Centralized Connectivity.
Select the storage type for your workload domain and click Next.
If you have selected the storage type as vSAN, specify the failure tolerance value. For a 3- or 4-node vSAN cluster, the value can be set to 1.
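The host-count guidance follows from the standard vSAN sizing rule: with RAID-1 mirroring, tolerating n failures requires 2n + 1 hosts, which is why a 3- or 4-node cluster tops out at a failure tolerance of 1. A small illustrative calculation:

```python
def min_hosts_for_ftt(ftt: int) -> int:
    """Minimum number of vSAN hosts for RAID-1 mirroring at a given
    failures-to-tolerate (FTT) value: 2 * FTT + 1."""
    return 2 * ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt} needs at least {min_hosts_for_ftt(ftt)} hosts")
# FTT=1 needs at least 3 hosts, which is why 3- and 4-node
# clusters are limited to a failure tolerance of 1.
```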
If all participating disks in vSAN are SSD disks, you can enable vSAN deduplication and compression.
Select the ESXi hosts that will be part of the workload domain.
Note: The wizard only shows hosts that have been commissioned successfully and are active in inventory.
To configure the distributed switch, you have 2 options. Select the default profile or create a custom profile.
- If your hosts have only 2 pNICs, select the default profile; it provides a unified fabric for all traffic types using a single vSphere Distributed Switch.
- If your hosts have more than 2 pNICs, you can create a custom profile to isolate storage traffic from the rest of the traffic or isolate your NSX traffic from the regular infrastructure traffic.
When you use a custom profile, multiple distributed switches can be configured. Each distributed switch can hold one or more network traffic configurations.
In my lab, my hosts have only 2 NICs, but I still selected a custom switch configuration, as I want to demonstrate how to perform a custom configuration.
Enter the distributed switch name and select the number of uplinks that will be attached to this switch.
Map the uplinks to the pNICs of the ESXi host.
Click “Configure Network Traffic” and select Management.
Provide the name for the management port group and select the appropriate load balancing algorithm for this port group.
Optionally, you can configure the teaming policy for the port group.
Repeat the steps for creating the vMotion and vSAN port groups.
For configuring NSX traffic, select the transport zone type and specify the ESXi host transport VLAN. For IP allocation, you can use DHCP or create a new static IP pool.
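If you go with a static IP pool for the host TEPs, it is worth checking the range before typing it into the wizard. The sketch below is a hypothetical pre-check (the field names are illustrative, not the wizard's schema): it validates that the range sits inside the subnet and counts how many addresses the pool provides.

```python
import ipaddress

def build_static_pool(cidr: str, start: str, end: str) -> dict:
    """Validate a candidate static IP pool for host TEP addresses:
    the range must fall inside the subnet, and we report its size."""
    network = ipaddress.ip_network(cidr)
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    if first not in network or last not in network:
        raise ValueError("range lies outside the subnet")
    return {
        "cidr": cidr,
        "range": {"start": start, "end": end},
        "size": int(last) - int(first) + 1,
    }

pool = build_static_pool("172.16.50.0/24", "172.16.50.10", "172.16.50.49")
print(pool["size"])  # 40 addresses available for host TEPs
```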
Map the NSX uplinks to VDS uplinks and set the Teaming policy.
Click Create Distributed Switch to finish the switch creation wizard.
Note: If your ESXi host has 4 pNICs, and you want to isolate NSX traffic from the rest of the infrastructure, then you would first create a VDS for the management, vMotion, and vSAN traffic and save the switch configuration. Then you create a new VDS for the NSX traffic and map the NSX uplinks to VDS uplinks that will be associated with this new switch.
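The 4-pNIC layout above can be sketched as a simple data structure. The switch names and pNIC assignments below are hypothetical lab values, used only to show the separation: one VDS carries management, vMotion, and vSAN, and a second VDS carries NSX overlay traffic, with no pNIC claimed twice.

```python
# Hypothetical two-VDS profile for a 4-pNIC host, mirroring the
# isolation described above.
switch_profile = [
    {
        "name": "wld01-vds01",
        "traffic": ["management", "vmotion", "vsan"],
        "uplinks": {"uplink1": "vmnic0", "uplink2": "vmnic1"},
    },
    {
        "name": "wld01-vds02",
        "traffic": ["nsx-overlay"],
        "uplinks": {"uplink1": "vmnic2", "uplink2": "vmnic3"},
    },
]

# Sanity check: each pNIC may belong to exactly one distributed switch.
claimed = [nic for sw in switch_profile for nic in sw["uplinks"].values()]
assert len(claimed) == len(set(claimed)), "pNIC assigned to two switches"
print(claimed)  # ['vmnic0', 'vmnic1', 'vmnic2', 'vmnic3']
```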
After creating the VDS configuration, you will be navigated back to the workload creation wizard. Go to the next screen to review the final configuration.
Validate that the supplied inputs look correct.
You can download the configuration for reusability by navigating to the JSON Preview tab and clicking Download JSON.
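The downloaded JSON can be edited and reused for the next deployment. The snippet below is a minimal sketch of that workflow; the keys shown are illustrative only, since the real schema is whatever the wizard's JSON Preview tab produces.

```python
import json

# Illustrative stand-in for a downloaded workload domain spec;
# the actual keys come from the wizard's JSON Preview tab.
spec_text = """
{
  "domainName": "wld01",
  "vcenterSpec": {"name": "wld01-vcenter"},
  "computeSpec": {"clusterSpecs": [{"name": "wld01-cluster01"}]}
}
"""

spec = json.loads(spec_text)
# Tweak the fields that must differ before reusing the spec
# for another workload domain.
spec["domainName"] = "wld02"
print(json.dumps(spec, indent=2))
```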
Click Deploy to start the workload domain creation.
Click the View SDDC Manager Tasks button to see the backend tasks.
Clicking the button takes you to Fleet Management > Tasks view, where you can see all tasks related to workload domain creation.
Sit back and relax for a couple of hours while the workload domain is being deployed. You will see the domain details in the inventory post-deployment.
Now you can start deploying your business applications in the workload domain.
And that’s it for this post. In the next post of this series, I will discuss VCF logs deployment. Stay tuned!!!
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.