Topics covered in this article
- dLR concepts
- dLR deployment
- dLR deployment verification steps
dLR concepts
What is a Distributed Logical Router (dLR)?
As with traditional Cisco switches, a Distributed Logical Router (dLR) is made up of two distinct elements:
- The Control Plane, represented by a virtual machine called the Logical Router (LR) Control VM. Dynamic routing protocols such as OSPF, BGP and IS-IS run between the Control VM and the upper layer, which in NSX is the NSX Edge Gateway.
- The Data Plane (the “line cards”), represented by routing functionality at the hypervisor level, which is achieved by installing kernel modules (VIBs). I covered this in the post Introduction to NSX.
So what’s the deal? Well, thinking about it, with a traditional approach L3 traffic from the hypervisor always has to go northbound to an external router, whether that’s your physical L3 core switch or a virtual appliance running somewhere, and then come back down again; this process is called hairpinning.
It’s a sub-optimal path, to say the least.
By moving the routing functionality to the hypervisor (kernel level) we effectively remove this sub-optimal path: with the dLR, each ESXi host can route between L3 subnets locally at (or nearly at) line rate. The type of traffic the dLR optimises is VM to VM (or server to server), normally known as East-West traffic. In this logical diagram (sorry, I’m not using official VMware icons) you can see what I just described.
As you can imagine, in a typical (and increasingly common) 3-tier application there is a lot of interaction between the tiers: web server to application, application to database. Having an optimised, higher-throughput path for this traffic is therefore essential in modern SDN datacenters. It should be noted that the dLR kernel modules route between VXLAN (logical) subnets, as opposed to VLANs. Brad Hedlund wrote a great article that explains the details.
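A quick and dirty way to see this optimisation from a VM’s point of view is to run a traceroute between two VMs sitting on different logical switches. The addresses below are made up, so substitute your own web and app tier IPs, and assume nothing is filtering ICMP. With the dLR in place, the only routed hop you should see is the dLR internal LIF (the VM’s default gateway), regardless of which ESXi hosts the two VMs live on.
From a web tier VM (say 172.16.10.11), towards an app tier VM:
traceroute 172.16.20.11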
LR Control VM
As mentioned before, the Control VM is the control plane and it doesn’t perform any routing, so if it dies, virtual machine traffic keeps flowing. Learnt routes are pushed down to the hypervisors in a process that can be summarised as follows (a quick CLI check for each step is sketched right after the list):
- The NSX Edge Gateway (EGW) learns a new route from the “external world”
- The LR Control VM learns this route because it is a “neighbor” (adjacency) of the EGW, talking to it via the Protocol Address
- The LR Control VM passes the new route to the NSX Controller(s)
- The NSX Controller pushes (in a secure manner) the new route to the ESXi hosts via the User World Agent (UWA) and the route gets installed on every host
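If you want to follow this chain with the CLI, treat the commands below as a rough checklist rather than the official procedure; instance names and the routing protocol in use are obviously environment-specific.
On the LR Control VM: show ip route (plus whatever show command your routing protocol offers, to confirm the adjacency with the EGW)
On the NSX Controller: show control-cluster logical-routers instance all
On an ESXi host: net-vdr -l --route <dLR-instance-name>
If the route shows up on the Control VM and the Controller but not on the host, the problem sits between the Controller and the host, which is exactly the scenario covered in the troubleshooting section below.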
Logical Interfaces (LIFs)
From the diagram you can see the dLR has several Logical Interfaces.
- Internal LIFs, which act as the default gateway for each logical switch (web, app, db)
- An Uplink LIF, which connects to the “northbound world” for north-south traffic.
LIFs are distributed to all ESXi hosts with the same IP address, and every host maintains an ARP table for every connected LIF.
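To see the LIFs as they have been instantiated on a host, you can also query the dLR instance from the ESXi shell. The flag below follows the same pattern as the net-vdr route command used later in this post, but it is from memory, so double-check it against net-vdr --help on your build:
net-vdr -l --lif <dLR-instance-name>
Every host should return the same set of LIFs with the same IP addresses, which is exactly what makes the distributed default gateway possible.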
dLR deployment
Distributed Logical Router configuration
From NSX Edges, click the + symbol to start the wizard. I like to have SSH access, so I’m enabling it 🙂
Select the destination for the virtual machine
Management Interface Configuration: this is not a LIF; it’s local to the Control VM and does not require an IP address. Even if you configure one, you wouldn’t be able to reach it from a routed network, because Reverse Path Forwarding (RPF) is enabled on it. Dmitri Kalintsev wrote a good article that explains this concept in more detail.
Configure interfaces on this NSX Edge: using the + symbol, repeat the wizard until you have created all the LIFs required in your environment.
Next, and optionally, configure a default gateway for the dLR. This would typically be the EGW IP address.
Next, review and finish.
This screenshot actually has a typo: the Management Interface should be connected to Mgmt_vDS_Mgmt, not Mgmt_vDS_vMotion.
At the end of the deployment you should see all the network adapters connected.
In my environment I’m running NSX 6.1.1 and, although I can see all the LIF IP addresses assigned to the Control VM, the network adapters aren’t actually listed.
I’m not sure if this is expected behaviour in 6.1 or a bug, but I clearly remember that on 6.0 all the network adapters were listed.
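Since the UI isn’t showing them, a workaround is to log into the Control VM (via console, or SSH since we enabled it earlier) and check from there. These are standard Edge/Control VM CLI commands, although I’d expect the exact output to vary between NSX versions:
show interface
show ip route
The connected routes for each LIF subnet should be listed, which at least confirms the interfaces were created correctly.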
dLR deployment verification/troubleshooting steps
From the NSX Controller, run:
show control-cluster logical-routers instance all
This command shows all the dLR instances and the corresponding hosts (VTEPs) that have joined them.
This is VERY useful when, for example, routes are not being propagated from the Control VM to the hosts. I had this exact problem: I was configuring static routes on the Control VM for north-south connectivity and they were not pushed to the hosts, hence the virtual machines could not talk to anything north of the Edge Gateway.
In this screenshot, host 192.168.110.52 is missing from the list. It so happens that the NSX Controller and the LR Control VM were running on 192.168.110.52.
In this situation, and to confirm that this was actually the root of my problem, I migrated all the virtual machines running on 192.168.110.52 to 192.168.110.51 and BOOM! The static route I had set on the Control VM immediately appeared in the ESXi routing table.
This means something is wrong/needs fixing on the host!
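A host-side component worth looking at in this scenario is the control plane agent (netcpa), which maintains the connection from the ESXi host to the Controllers over TCP port 1234. Two generic NSX-v checks, offered as a starting point rather than the official fix:
esxcli network ip connection list | grep 1234
/etc/init.d/netcpad restart
The first should show established connections towards the Controller IPs; if it doesn’t, restarting the agent with the second command is usually the next step (do it in a maintenance window if you’re unsure about the impact).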
How do you check the ESXi routing table?
1) From ESXi, find the dLR instance name with the command:
net-vdr -l -I
2) Check the routing table that dLR instance has installed on the host:
net-vdr -l --route CloudLab+edge-6
Running this command on the problematic host mentioned above returns an empty routing table, as follows:
Back on the NSX Controller, run:
show control-cluster logical-switches vni 5001
show control-cluster logical-switches mac-table 5001
show control-cluster logical-switches arp-table 5001
The first command lists all the VTEPs that have joined VNI 5001 (in my case, the Web tier).
The second lists the MAC addresses of the virtual machines that have joined VNI 5001 and are powered on.
The third does the same as the second but shows the ARP table, i.e. the IP-to-MAC resolution.
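To compare the Controller’s view with what an individual host sees, the NSX VIBs also extend esxcli with a vxlan namespace. The vDS name below (Compute_VDS) is just a placeholder, and the exact sub-commands vary a little between versions, so explore esxcli network vswitch dvs vmware vxlan on your build:
esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS
esxcli network vswitch dvs vmware vxlan network mac list --vds-name Compute_VDS --vxlan-id 5001
The first shows the VNIs the host has joined along with the Controller connection for each; the second shows the MAC table the host has learnt for that VNI, which should line up with the mac-table output from the Controller.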
In the next post I’m going to cover the NSX Edge Gateway. Stay tuned!