Scaling Out the VMware vDefend Security Services Platform

The VMware vDefend Security Services Platform (SSP) was built for scale-out from day one. Whether you're expanding a proof-of-concept into production, enabling ATP services (which require a minimum of five nodes), or growing to a full ten-node production deployment, the path is consistent. This post walks you through everything you need to know.

Why Scale-Out Matters for Modern Private Clouds

Traditional perimeter-based security assumed that traffic patterns were predictable and applications sat in stable locations. That world no longer exists. East-west traffic inside a VMware Cloud Foundation (VCF) private cloud can be four times greater than north-south traffic, and the volume keeps growing as application estates expand.

vDefend SSP is Broadcom’s answer to this problem: a self-contained, Kubernetes-backed security analytics platform that sits alongside NSX and processes Security Intelligence, Network Detection and Response (NDR), Malware Prevention, and Network Traffic Analysis (NTA). When your workloads grow, your security analytics capacity grows with them.

A Quick Recap of the SSP Architecture

Before performing a scale-out operation, let's recap the SSP architecture. The platform has two primary components: the SSP Installer (SSPI) and the SSP Instance. The SSPI is a lifecycle-management appliance, delivered as an OVA, that orchestrates deployment, upgrades, and diagnostics. The SSP Instance is the actual Kubernetes cluster that runs the security microservices. The control plane of the Kubernetes cluster handles coordination and configuration; the worker nodes are the data plane, ingesting and processing network flow telemetry. When you scale out, you are adding worker nodes; the control-plane size stays fixed.

Scale-Out Prerequisites

A scale-out operation is non-reversible: you cannot scale in after you have added nodes. Run through the checklist below before initiating any expansion.

1: Verify DRS is enabled on the vSphere cluster where SSP will be deployed. Disabling DRS after deployment causes node-placement issues.

2: Pre-size the IP address pool. Each worker node requires IP addresses from the SSP node pool. The table below lists the IP requirements for worker-node scale-out.

3: Verify vCPU and memory headroom. A full deployment with ten workers and three control-plane nodes requires approximately 192 vCPUs and 734 GB of vRAM. Audit cluster capacity before scaling.

The table below shows the total compute and storage capacity required for scaling out worker nodes.

4: Back up SSP state. SSP 5.x includes built-in backup and restore; trigger a backup before any topology change.
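Because the operation cannot be undone, it can help to script the capacity portion of this checklist. The sketch below is a hypothetical pre-flight check: the function name and the per-worker defaults are illustrative placeholders, not official Broadcom sizing figures (take real values from the sizing tables referenced above).

```python
def scale_out_headroom(new_workers, free_vcpus, free_ram_gb, free_pool_ips,
                       vcpu_per_worker=16, ram_gb_per_worker=64, ips_per_worker=1):
    """Return a list of blocking issues for adding `new_workers` nodes.

    An empty list means the cluster has headroom. The per-worker defaults
    are illustrative assumptions, not official sizing figures.
    """
    issues = []
    if free_vcpus < new_workers * vcpu_per_worker:
        issues.append(f"vCPU: need {new_workers * vcpu_per_worker}, have {free_vcpus}")
    if free_ram_gb < new_workers * ram_gb_per_worker:
        issues.append(f"RAM: need {new_workers * ram_gb_per_worker} GB, have {free_ram_gb}")
    if free_pool_ips < new_workers * ips_per_worker:
        issues.append(f"IP pool: need {new_workers * ips_per_worker}, have {free_pool_ips}")
    return issues
```

Running such a check before clicking through the SSPI workflow catches the common failure mode of exhausting the node IP pool mid-expansion.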

When You Should Scale Out the Platform

The trigger for scaling out should be flow ingestion rate, not raw VM count. Monitor the SSP dashboard for signs of worker saturation, such as increased analysis latency or dropped telemetry events; these are the leading indicators that you need additional worker capacity.

Broadcom provides a Sizing Tool for SSP to right-size the deployment. The recommendation is to deploy SSP with four worker nodes and let the system run and collect flow statistics for at least seven days. The tool must be run from the SSPI VM. If the current worker-node count is insufficient to process the flows, the tool recommends adding worker nodes. An example output is shown below for reference.
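The general idea behind such a recommendation can be illustrated with a simple model. This is a sketch of the reasoning only, not the sizing tool's actual algorithm; the per-worker capacity and the 80% utilization target are assumptions for illustration.

```python
import math
from statistics import mean

def extra_workers_needed(flow_samples, per_worker_capacity, current_workers,
                         target_utilization=0.8):
    """Return how many workers to add so that sustained flow ingestion
    stays below the target utilization of aggregate capacity (0 = no change)."""
    sustained = mean(flow_samples)  # e.g. daily flow-rate samples over 7+ days
    if sustained <= target_utilization * per_worker_capacity * current_workers:
        return 0
    # Smallest worker count that keeps utilization under the target.
    needed = math.ceil(sustained / (target_utilization * per_worker_capacity))
    return needed - current_workers
```

For example, with four workers each handling a nominal 50k flows/min, a sustained rate of 210k flows/min would push utilization past 80% and suggest adding two workers.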

Scale-out Process

The scale-out workflow is handled entirely through the SSPI web interface. The process is a rolling addition—existing workers continue processing traffic while new nodes join the cluster.

1: Log in to the SSP Installer web interface. Use the admin account configured during initial deployment.

2: Verify current cluster health: Before adding nodes, confirm all existing control plane and worker nodes are in a healthy state. The diagnostics utility provides a per-node status view. Do not scale on a degraded cluster.

3: Initiate the scale-out workflow: Navigate to the SSP Instance Management > SSP Instance tab and click 'Edit Deployment Size.'

Specify the worker node count and click Save.

4: Monitor the scale-out process: Log in to the vCenter UI and monitor the scale-out. Based on the number of worker nodes selected, additional nodes are deployed.

The SSPI UI also shows the progress of the scale-out operation.

Each new worker node boots, registers with the Kubernetes control plane, and begins accepting data. Once the process finishes, the worker node count is updated.

5: Validate with diagnostics: After all nodes reach the running state, navigate to the diagnostics tab to confirm telemetry is flowing through the new worker nodes.
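If you also have kubeconfig access to the SSP instance, the same validation can be scripted against the Kubernetes API. The sketch below assumes standard Kubernetes node objects and the well-known control-plane label; SSP's exact node naming and access model may differ.

```python
import json

def ready_worker_nodes(kubectl_nodes_json):
    """Parse `kubectl get nodes -o json` output and return the names of
    worker nodes whose Ready condition is True."""
    ready = []
    for node in json.loads(kubectl_nodes_json)["items"]:
        labels = node["metadata"].get("labels", {})
        # Control-plane nodes carry this well-known Kubernetes label.
        if "node-role.kubernetes.io/control-plane" in labels:
            continue
        conditions = node["status"].get("conditions", [])
        if any(c["type"] == "Ready" and c["status"] == "True" for c in conditions):
            ready.append(node["metadata"]["name"])
    return ready
```

Comparing the returned list against your target worker count is a quick cross-check on what the SSPI diagnostics tab reports.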

Scaling SSP Core Features

You can scale out all core SSP services (Analytics, Data Storage, Messaging, and Metrics) by logging in to the SSP UI, navigating to the Platform & Features tab, and selecting the Scale Out option.

Note: You must have at least six worker nodes for feature scale-out.

If the number of worker nodes is less than six, the following warning appears and the Scale Out button is grayed out.
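If you drive SSP changes from automation, the same guard the UI enforces is worth replicating up front. Only the six-worker minimum comes from the product; the function name is hypothetical.

```python
MIN_WORKERS_FOR_FEATURE_SCALE_OUT = 6  # minimum enforced by the SSP UI

def can_scale_out_features(worker_count):
    """Mirror the UI guard: feature scale-out requires at least six workers."""
    return worker_count >= MIN_WORKERS_FOR_FEATURE_SCALE_OUT
```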

Conclusion

VMware vDefend SSP’s scale-out architecture is one of its most compelling operational qualities. The Kubernetes-backed design means that adding capacity is a configuration operation, not a re-architecture. You start with what you need, monitor the saturation of the flow ingestion, and add workers as demand grows—all without rebuilding your security services layer or touching the NSX manager.

That’s it for this post. I hope you enjoyed reading it. Feel free to share it on social media if it’s worth sharing.
