In this post I’m going to cover:
- NSX Load balancer concepts
- NSX Load balancer modes
- NSX Load balancer configuration – Scenario 1
NSX Load Balancer concepts
NSX Edge Services Gateway can do load balancing (as vShield Edge could), and specifically we’re talking about local load balancing, not global load balancing.
Global Load Balancing (GLB) describes a range of technologies used to distribute resources around the Internet for various purposes; probably the most widely known is DNS global load balancing. As I said, NSX doesn’t do GLB, and going into the details is out of scope, so I won’t.
On the other hand, Local Load Balancing (LLB) is the capability provided by the NSX-v ESG to distribute traffic across multiple destination servers, in such a way that the actual distribution is transparent to the users. Features include:
- TCP, HTTP, HTTPS with stateful high availability
- Multiple VIP addresses, each with separate server pool and configurations
- Multiple load balancing algorithms and session persistence methods
- Configurable health checks
- Application rules
- SSL termination with certificate management, SSL pass-through and SSL initiation
- IPv6 support
- Support for UDP applications as of NSX 6.1.x
- L7 manipulation, including URL block, URL rewrite and content rewrite.
- External LB integration with 3rd parties (F5, Brocade for example)
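Two of the items above — load balancing algorithms and session persistence — are easy to picture with a short conceptual sketch. This is an illustration of the ideas only, not NSX code; the pool member names are hypothetical:

```python
import zlib
from itertools import cycle

# Hypothetical pool members (illustrative names, not from an actual NSX config)
POOL = ["web-sv-01a", "web-sv-02a"]

def round_robin(pool):
    """One of the classic LB algorithms: hand out servers in strict rotation."""
    return cycle(pool)

def source_ip_hash(pool, client_ip):
    """A simple session-persistence method: pin each client to one server
    by hashing its source IP (crc32 used here for determinism)."""
    return pool[zlib.crc32(client_ip.encode()) % len(pool)]
```

With round robin every new connection lands on the next server; with source-IP persistence the same client always reaches the same server, which matters for applications that keep session state locally.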
NSX Load Balancer Modes
Two operating modes are available:
Proxy Mode (One-Arm)
It’s the easiest and quickest way to deploy. A single interface is used to advertise the VIP and to connect to the pool of servers.
- Traffic is sent from the clients to the VIP
- Two network address translations (NAT) are performed: a D-NAT, which replaces the VIP with the address of one of the servers in the pool, followed by an S-NAT, which replaces the client (source) address with an IP address on the same subnet as the VIP
- The server replies to the translated IP address on the NSX ESG LB
- The LB again performs an S-NAT and a D-NAT to return the traffic to the external client, using the VIP as the source IP
Because a single interface is used, the LB has to be on the same segment as the servers being load balanced; this implementation requires a load balancer deployment for each logical switch (VNI) you want to load balance. The source IP is not preserved; the only way to retain it is with HTTP traffic, using the “Insert X-Forwarded-For HTTP header” option, which inserts the original client IP address into the HTTP header before performing the S-NAT. Again, this works only for HTTP traffic.
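The one-arm NAT sequence above can be sketched in Python. This is purely conceptual — the S-NAT address and function names are assumptions for illustration, not NSX internals; only the VIP and one server address come from this post’s lab:

```python
VIP = "192.168.100.7"
LB_SNAT_IP = "192.168.100.8"          # assumed LB-owned IP on the VIP subnet
POOL = ["172.16.10.11", "172.16.10.12"]  # .12 is from the lab; .11 is assumed

def one_arm_forward(packet, backend, http_headers=None):
    """Return the packet as the selected pool member sees it in one-arm mode."""
    out = dict(packet)
    out["dst"] = backend        # D-NAT: VIP -> pool member
    out["src"] = LB_SNAT_IP     # S-NAT: client IP -> LB-owned address
    if http_headers is not None:
        # For HTTP only: preserve the real client IP in a header before S-NAT
        http_headers["X-Forwarded-For"] = packet["src"]
    return out

headers = {}
pkt = {"src": "192.168.110.10", "dst": VIP}
fwd = one_arm_forward(pkt, POOL[0], headers)
```

After the two translations the backend sees only LB addresses, which is exactly why the X-Forwarded-For header is the sole trace of the original client.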
Transparent Mode (Inline)
- The client sends traffic to the VIP
- The load balancer performs only a D-NAT, replacing the VIP with one of the destination IP addresses taken from the pool
- The destination server replies to the original client IP address
- This return traffic is received by the load balancer, which performs an S-NAT to reply to the external client
With this topology one interface is used for the VIP and another to connect to the logical switch being balanced. The client IP address is preserved. The downside of this implementation is that it forces the load balancer to act as the default gateway for the load-balanced logical segment, which means distributed routing can’t be used for these VNIs.
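The transparent-mode flow can also be sketched conceptually. Note that, on the way in, the client source address is never rewritten — which is what the packet captures later in this post demonstrate (illustration only, not NSX code; function names are hypothetical):

```python
VIP = "192.168.100.7"

def inline_forward(packet, backend):
    """Transparent (inline) mode: only the destination is rewritten,
    so the backend sees the real client IP and must route its reply
    back through the LB (its default gateway)."""
    out = dict(packet)
    out["dst"] = backend   # D-NAT only; the source is left untouched
    return out

def inline_reply(packet, vip=VIP):
    """On the return path the LB S-NATs the server address back to the VIP."""
    out = dict(packet)
    out["src"] = vip
    return out

pkt = {"src": "192.168.110.10", "dst": VIP}
fwd = inline_forward(pkt, "172.16.10.12")
```

Because the backend replies to a source address outside its own segment, the LB has to sit in the return path — hence the default-gateway requirement mentioned above.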
NSX Load Balancer Configuration – Scenario 1
In the following steps we’re going to configure an inline load balancer for some web servers that have SSL certificates already installed. We will sniff the traffic from the ESG and see the differences in the source packets when enabling/disabling Transparent mode.
In the Perimeter ESG, add a new IP address on the northbound interface (named “HQ Access” here). Click Edit
Select the existing IP addresses and click Edit again
Click on + and add the new IP address that will be used by the VIP (192.168.100.7 here)
Click OK three times to close the interface configuration windows. You should now see the new IP address listed
Go to “Load Balancer” tab > Global Configuration > Edit
Activate “Enable Load Balancer” and click OK to close the window.
Configure an Application Profile
Move to Application Profiles, click the + symbol, and create a new profile that uses SSL Passthrough. This means the destination servers must already have SSL certificates installed
Create a Server Pool
Go to Pools, click the + symbol, and add a new server pool
Under Members, click the + symbol and add the first web server
Repeat the same step to add web-sv-02a.
Do not enable Transparent mode. Once you’re done you should see the following configuration
Clicking on “Show Pool Statistics” should display the 2 servers as UP
Create a Virtual Server (VIP)
Move to “Virtual Servers”, click the + symbol, and configure a new VIP that uses the previously created Application Profile, the ESG IP address, and the Server Pool
Point your browser to the VIP (192.168.100.7 in my case), accept the SSL certificate, and you should get a response from one of the two servers in the pool
Here I’m using a simple HTML page, which you could also use as the load balancer health check URL in the Service Monitor configuration
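A Service Monitor probe of that kind boils down to a periodic HTTP GET against the health check URL, marking the member UP on a successful response. A minimal sketch of such a check (assuming a plain HTTP GET probe; the function names and the 2xx/3xx success criterion are illustrative assumptions, not the NSX implementation):

```python
import urllib.request

def is_healthy(status_code):
    """Treat a 2xx/3xx response as UP (assumed success criterion)."""
    return 200 <= status_code < 400

def http_health_check(url, timeout=2):
    """GET the health check URL; any error or timeout marks the member DOWN."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_healthy(resp.status)
    except Exception:
        return False
```

A member that stops answering its health check URL would be pulled from rotation, which is why a lightweight static page is a convenient probe target.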
SSH into the Perimeter ESG and verify how the traffic is load balanced.
Using the command show interface, find the vNic interface number.
Start capturing traffic using the following command (in the capture expression, underscores replace spaces):
debug packet display interface vNic_0 port_443
We can see a conversation happening between 192.168.10.1 (the ESG uplink interface, which has replaced the original client as the source) and the destination server (172.16.10.12).
The client’s real source IP (192.168.110.10) is not visible anywhere.
To be even more precise you could use
debug packet display interface vNic_0 port_443_and_host_192.168.110.10
and after a reload from the browser you should see nothing in the logs. This is because the LB is not operating in Transparent mode, so the source IP has undergone a NAT operation and appears as 192.168.10.1
Now go back to Server Pool and enable Transparent mode
Return to the CLI window and you should expect to see many hits for 192.168.110.10, the actual source IP generating the request.