Table of Contents
Configure the appropriate Load Balancer model for a given application topology
The two main drivers for deploying a load balancer are scaling out an application (by distributing workload across multiple servers) and improving its high-availability characteristics.
NSX provides a basic form of load balancing through the Edge Services Gateway. The NSX Edge load balancer distributes network traffic across multiple servers to achieve optimal resource utilization.
The NSX Edge services gateway supports two kinds of load balancer deployments:
One-armed mode (or proxy mode): In proxy mode, the load balancer uses its own IP address as the source address to send requests to a backend server. The backend server views all traffic as being sent from the load balancer and responds to the load balancer directly. The following events take place when the load balancer is deployed in proxy mode:
- User connects to a VIP address (LB address) that is configured on the Edge gateway.
- The ESG performs a destination NAT to replace the VIP with one of the servers in the configured pool.
- The ESG performs a source NAT to replace the user's IP address with its own IP address (the VIP).
- The ESG forwards the request to one of the servers from the pool.
- The server replies to the ESG instead of replying to the user, because the user's IP address was replaced with the ESG VIP.
- The ESG relays the server's response to the user.
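The proxy-mode NAT behavior above can be sketched in a few lines of Python. This is only an illustration of the address rewriting, not NSX code; the VIP, client, and pool member addresses are made up for the example.

```python
# Sketch of one-armed (proxy mode) load balancing: the ESG performs both a
# destination NAT (VIP -> pool member) and a source NAT (client -> VIP),
# so the backend server only ever sees the load balancer's address.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str  # source IP
    dst: str  # destination IP

VIP = "10.0.0.10"                         # hypothetical VIP on the ESG
POOL = ["192.168.1.11", "192.168.1.12"]   # hypothetical pool members

def proxy_mode_forward(pkt: Packet, member: str) -> Packet:
    """Apply DNAT (VIP -> member) and SNAT (client -> VIP)."""
    assert pkt.dst == VIP, "traffic must arrive on the VIP"
    return Packet(src=VIP, dst=member)

request = Packet(src="203.0.113.5", dst=VIP)       # user -> VIP
to_server = proxy_mode_forward(request, POOL[0])   # ESG -> pool member
print(to_server)  # Packet(src='10.0.0.10', dst='192.168.1.11')
```

Because the source was rewritten to the VIP, the server's reply naturally returns to the ESG, which then relays it back to the user.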
The image below shows a one-armed mode deployment.
One-armed mode deployment is straightforward, but the downside is that you need a load balancer per network segment, as the Edge needs to be on the same segment as the internal servers. It can also make traffic analysis harder, as the traffic is proxied.
Inline mode (or transparent mode): An in-line load balancer is connected to the network with two network interfaces. In this scenario the load balancer has an external network, and an internal network that’s not directly accessible from the external network. An inline load balancer acts as a NAT gateway for the VMs on the internal network. The traffic flow is displayed in the following diagram:
The following events take place in an inline load balancer deployment model:
- User connects to a VIP address (LB address) that is configured on the Edge gateway.
- The ESG performs a destination NAT to replace the VIP with the IP address of one of the servers from the pool.
- The ESG forwards the request to the server.
- The server receives the request from the ESG with the user IP as the source and replies directly to the user.
- As the server replies to the user, the response goes through the web server's default gateway, which is the ESG.
- The ESG intercepts the return traffic, updates the load balancing service state, rewrites the source IP back to the VIP, and forwards the response out of its uplink to the user.
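By contrast, inline (transparent) mode can be sketched as a destination NAT only: the client IP is preserved, which is why the ESG must be the servers' default gateway. Again, the addresses below are purely illustrative.

```python
# Sketch of inline (transparent mode) load balancing: the ESG performs only
# a destination NAT, so the backend sees the real client IP, and its reply
# must route back through the ESG (the pool members' default gateway).
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str

VIP = "10.0.0.10"                         # hypothetical VIP on the ESG uplink
POOL = ["192.168.1.11", "192.168.1.12"]   # hypothetical pool members

def inline_mode_forward(pkt: Packet, member: str) -> Packet:
    """Apply DNAT only (VIP -> member); the client source IP is untouched."""
    return Packet(src=pkt.src, dst=member)

def inline_mode_return(pkt: Packet) -> Packet:
    """On the return path, the ESG rewrites the source back to the VIP."""
    return Packet(src=VIP, dst=pkt.dst)

request = Packet(src="203.0.113.5", dst=VIP)
to_server = inline_mode_forward(request, POOL[0])
print(to_server.src)  # 203.0.113.5 -- the server sees the real client IP
```

The contrast with proxy mode is the whole trade-off: the server logs the real client IP, at the cost of the ESG having to sit inline as the gateway.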
For more information on the differences between the two modes, please read the VMware NSX Design Guide.
Let's jump into the lab now and see how to configure a load balancer on an NSX Edge.
Step 1: Generate Certificate
First of all, create or specify an SSL certificate to be used with the NSX Edge. You can import CA-signed certificates or create a self-signed certificate. I am using a self-signed certificate in my lab.
To generate a self-signed certificate, double-click the NSX Edge where you want to configure load balancing, navigate to Manage > Settings > Certificates, and from the Actions menu select "Generate CSR".
Fill in the required fields and hit OK.
Once the CSR is generated, from the Actions menu select "Self Sign Certificate".
Specify the number of days for which the certificate will be valid.
The self-signed certificate will now be visible in the list.
Step 2: Enable Load Balancer
To configure the load balancer, navigate to Manage > Load Balancer and edit the Global Configuration.
Select “Enable Load Balancer” and hit OK.
- Enable Service Insertion: allows the load balancer to work with third party vendor appliances.
- Acceleration Enabled: When enabled, the NSX Edge load balancer uses the faster L4 LB engine rather than L7 LB engine.
Step 3: Create Application profile
Navigate to Application Profile tab and click on + button
Type a name for the profile and select the traffic type for which you are creating the profile. Optionally, you can specify the following:
- Persistence: Persistence tracks and stores session data, such as the specific pool member that serviced a client request. This ensures that client requests are directed to the same pool member throughout the life of a session or during subsequent sessions.
- Enable SSL Passthrough: If this is selected, the NSX Edge does not terminate the client's HTTPS (SSL) sessions; instead, they are terminated directly on the servers for which the Edge is load balancing traffic.
- Insert X-Forwarded-For HTTP header: For identifying the originating IP address of a client connecting to a web server through the load balancer.
- Enable Pool Side SSL: If this option is enabled in the application profile, the selected pool is composed of HTTPS servers.
In my lab, I am load balancing my web servers for HTTPS traffic, so I enabled pool-side SSL, enabled the service certificate under "Virtual Server Certificates", and chose the certificate which I created in the first step of this demonstration.
Also, under "Pool Certificates", select the same server certificate and hit OK to finish the application profile creation wizard.
Step 4: Create Server Pool
Go to the Pools tab and click on the + button to add a new pool.
- Provide a name for the pool and select the appropriate load balancing algorithm. We will discuss the algorithms later; I selected Round-Robin for this demonstration.
- Select an appropriate service monitor.
- Add members to the pool by clicking on + button.
- Provide a name for the member server which will be part of the pool and specify the IP address of the server.
- Type the port on which the member is to receive traffic and the monitor port on which the member is to receive health monitor pings.
- In Weight, type the proportion of traffic this member is to handle.
- Type the maximum number of connections the member can handle.
- Type the minimum number of connections a member should handle before traffic is redirected to the next member.
I have added both my Web-Servers in the newly created pool.
Note: The option Transparent indicates whether client IP addresses are visible to the backend servers.
- If Transparent is not selected, backend servers see the traffic source IP as the load balancer's internal IP.
- If Transparent is selected, source IP is the real client IP and NSX Edge must be set as the default gateway to ensure that return packets go through the NSX Edge device.
Step 5: Create Virtual Servers (VIP)
Under Virtual Servers, click on + button to add a VIP on the edge.
Check "Enable Virtual Server", map the appropriate application profile to the virtual server, and specify the following:
- Provide a name for the virtual server and optional description.
- IP Address: the IP address that the load balancer listens on. This is an IP on one of the external interfaces of the Edge gateway.
- Type the protocol that the virtual server will handle and the port number that the load balancer will listen on.
- Associate the correct server pool with this virtual server.
- In Connection Limit, type the maximum concurrent connections that the virtual server can process.
- In Connection Rate Limit, type the maximum incoming new connection requests per second.
The load balancer configuration is now complete. Let's test it.
I have verified that both of my web servers are listening on port 443.
Now let's hit the VIP. The first time, the request was served by the first web server.
When I refreshed the page, the request was served by the second web server of the pool. This is because I chose the "Round Robin" algorithm in my server pool and each server is assigned a weight of 1, so requests are served by each server in turn.
On inspecting the certificate, I can see it is coming from my Edge gateway.
You can check the load balancer status by logging into the Edge gateway CLI:
```
Site-A-ESG01-0> show service loadbalancer pool
-----------------------------------------------------------------------
Loadbalancer Pool Statistics:

POOL WEB-SRV-Pool
|  LB METHOD round-robin
|  LB PROTOCOL L7
|  Transparent disabled
|  SESSION (cur, max, total) = (0, 2, 96)
|  BYTES in = (38579), out = (69366)
   +->POOL MEMBER: WEB-SRV-Pool/WEB-SRV01, STATUS: UP
   |  |  HEALTH MONITOR = BUILT-IN, default_https_monitor:L7OK
   |  |  |  LAST STATE CHANGE: 2018-05-26 15:30:43
   |  |  SESSION (cur, max, total) = (0, 1, 45)
   |  |  BYTES in = (18403), out = (32361)
   +->POOL MEMBER: WEB-SRV-Pool/WEB-SRV02, STATUS: UP
   |  |  HEALTH MONITOR = BUILT-IN, default_https_monitor:L7OK
   |  |  |  LAST STATE CHANGE: 2018-05-26 15:30:43
   |  |  SESSION (cur, max, total) = (0, 1, 51)
   |  |  BYTES in = (20176), out = (37005)
Site-A-ESG01-0>
```
Configure SSL Off-loading
The SSL off-loading function of NSX allows the Edge gateway load balancer to handle SSL decryption/encryption before passing the traffic on to the backend server pool. This offloads the task of encryption/decryption from the backend servers. With SSL offloading, traffic between the external client and the load balancer (Edge) is HTTPS, but between the NSX Edge and the backend servers it is HTTP.
NSX also supports SSL Proxy, which terminates the SSL at the LB but passes the traffic to the backend servers as HTTPS, and it supports SSL Passthrough, which passes the HTTPS traffic through the LB and on to the backend servers to do the SSL termination.
To configure SSL offloading, you need to generate the certificates for the Edge load balancer, which we already covered earlier.
Configure a service monitor to define health check parameters for a specific type of network traffic
Service monitors define how a load balanced server is checked for liveness, determining whether it should continue to receive user requests. Server health can be checked via an HTTP or HTTPS request, a TCP or UDP port check, and/or an ICMP ping.
When using an HTTP(S) request, you can define the polling interval, the type of HTTP request (GET, OPTIONS, or POST), the URL to test, and a string that should be received back if the test succeeds.
If the response to the request is not what you expect it to be, the server can be taken out of the pool so that it does not receive any new requests.
To create a service monitor, select the NSX Edge from the edge list, navigate to Manage > Load Balancer > Service Monitoring, and click on the + button to define a new monitor.
- Enter a name for the service monitor.
- Enter the interval in seconds at which a server is to be tested. The interval is the period of time between the health check requests that the monitor sends to the backend server.
- The timeout value is the maximum time in seconds within which a response from the server must be received.
- Select the way in which to send the health check request to the server from the drop-down menu. Five types of monitors are supported: ICMP, TCP, UDP, HTTP, and HTTPS.
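Conceptually, a TCP monitor does little more than attempt a connection within a timeout. The Python sketch below is not NSX code, just an illustration of the timeout semantics; the default timeout value is an arbitrary choice for the example.

```python
# Sketch of what a TCP service monitor checks: can a TCP connection to the
# pool member be established within the configured timeout?
import socket

def tcp_check(host: str, port: int, timeout: float = 15.0) -> bool:
    """Return True if host:port accepts a TCP connection within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In NSX terms, the monitor would run a check like this every "interval" seconds and mark the member down after the configured number of consecutive failures.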
For more information on service monitoring, please read the NSX documentation.
Optimize a server pool to manage and share backend servers
A server pool manages backend servers so that workloads can be shared efficiently. A server pool defines the load balancer distribution method and uses a service monitor for health checks. NSX supports the load balancing algorithms listed below:
- IP_HASH: Selects a server based on a hash of the source and destination IP address of each packet.
- LEAST_CONN: Distributes client requests to multiple servers based on the number of connections already on the server. New connections are sent to the server with the fewest connections.
- ROUND_ROBIN: Each server is used in turn according to the weight assigned to it. This is the smoothest and fairest algorithm when the server’s processing time remains equally distributed.
- URI: The left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result designates which server will receive the request. This ensures that a URI is always directed to the same server as long as no server goes up or down.
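To make the selection logic above concrete, here is a rough Python sketch of weighted round-robin and IP-hash selection. This is an illustration, not the NSX implementation (NSX does not necessarily use MD5); member names and weights are taken from the lab pool for readability.

```python
# Illustrative member selection: weighted round-robin cycles through members
# in proportion to their weights; IP hash pins a given source/destination
# pair to one member for as long as the member list is stable.
import hashlib
import itertools

MEMBERS = [("WEB-SRV01", 1), ("WEB-SRV02", 1)]  # (name, weight)

def weighted_round_robin(members):
    """Yield member names in turn, each repeated according to its weight."""
    expanded = [name for name, weight in members for _ in range(weight)]
    return itertools.cycle(expanded)

def ip_hash(members, src_ip, dst_ip):
    """Pick a member from a hash of the source and destination IPs."""
    digest = hashlib.md5(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return members[int(digest, 16) % len(members)][0]

rr = weighted_round_robin(MEMBERS)
print([next(rr) for _ in range(4)])
# ['WEB-SRV01', 'WEB-SRV02', 'WEB-SRV01', 'WEB-SRV02']
```

With equal weights of 1, the round-robin generator alternates strictly between the two servers, which matches the refresh behavior observed in the test earlier.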
The load balancing algorithm can be changed even after pool creation: navigate to your server pool and select a different algorithm from the Algorithm drop-down.
You can also edit the pool members to modify their weight and other parameters.
Configure an application profile and rules
Application profiles enhance control over network traffic and make traffic management easy and efficient. An application profile defines the behavior of a particular type of network traffic. I have already covered the steps for creating an application profile earlier, so I will not repeat them here.
Application Rules: Application rules are a way to manipulate application traffic based on certain triggers. They are used to create advanced load balancing rules that may not be possible with the application profiles or services natively available on the Edge gateway.
The VMware NSX documentation has some good examples of creating application rules to use with the load balancer.
Tony Sangha has also demonstrated a few examples of using application rules in his article.
To create application rules, navigate to Application Rules page within load balancer tab and click on + button to add a new rule
Provide a name for the rule and specify the script and hit OK.
Once you have created your application rule, you associate it with your virtual server under the Advanced tab.
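NSX application rules are written in HAProxy-style syntax. As a hedged example (the ACL name below is illustrative, so adapt it to your environment), a commonly used rule redirects plain HTTP requests to HTTPS:

```
# Redirect any request arriving on port 80 to HTTPS
acl clear_traffic dst_port 80
redirect scheme https if clear_traffic
```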
I hope you enjoyed reading this post. Feel free to share it on social media if you found it worth sharing.