So we have NSX Manager deployed and running. It’s now time to prepare the compute cluster(s) for network virtualization.
Summarising what we need to complete:
- Deploy the NSX Controller(s)
- Host preparation: that is, install the NSX VIB modules into the cluster(s)
- Configure VXLAN VTEP IP Pools
- Configure Segment ID pool
- Configure Global Transport Zone
- Configure Logical Switch Networks
In case you missed my post NSX for Newbies – Part 1: Introduction to NSX for vSphere, I suggest you have a look to refresh your memory on what the NSX Controller is responsible for.
Deploy NSX Controllers
NSX Controller system requirements
- RAM: 4GB
- Disk: 20GB
- vCPU: 4
From vSphere Web Client: Networking & Security > Installation > Management tab > under NSX Controller nodes, click the + symbol.
You need to define a pool of IP addresses that will automatically be allocated to the controllers; you can create your first IP pool from the IP Pool drop-down menu by selecting Add. In my lab I’m defining the pool as follows:
Name: Controller-Pool
Gateway: 192.168.110.254
Prefix Length: 24
DNS: 192.168.110.50
Static IP Pool: 192.168.110.201-192.168.110.210
Once the first controller has been deployed and is ready, you can repeat the process to deploy the second and third controllers; the only difference is that you don’t need to create a new IP pool, simply select the one you previously created.
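Once all three controllers are up, you can sanity-check the control cluster from each controller’s console or an SSH session (a quick check using the controller CLI; the exact output wording varies by version):

show control-cluster status

Each node should report that the join is complete and that it is connected to the cluster majority.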
IMPORTANT: Resizing the controller VM settings
Because I’m running the lab nested I can’t afford the default VM settings for the controller, so I need to resize them as follows:
– RAM 4GB to 2GB
– vCPU 2 to 1
– CPU reservation from 2000MHz to 200MHz
By default you cannot modify the virtual machine settings, as the VM is protected by NSX Manager. Simply unregistering and re-registering the VM would remove the protection, but it would also cause a new Virtual Machine inventory ID (Vmid) to be generated, which is not good and could lead to problems. So what I did was (see the sketch after this list):
– power-off the vm
– edit the .vmx file using vi from ESXi CLI and change vRAM, vCPU
– get the Vmid using vim-cmd vmsvc/getallvms
– reload the vm service using vim-cmd vmsvc/reload Vmid
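Put together, the workaround looks roughly like this from the ESXi shell (a sketch: the datastore path, VM folder and Vmid are placeholders for whatever your lab shows):

# power off the controller VM first, then edit its .vmx
vi /vmfs/volumes/<datastore>/<controller-vm>/<controller-vm>.vmx
#   set: memSize = "2048" and numvcpus = "1"
# look up the VM's inventory ID (Vmid)
vim-cmd vmsvc/getallvms | grep -i controller
# reload the VM definition in place, without unregistering it
vim-cmd vmsvc/reload <Vmid>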
You can also explore the Managed Object Reference (MoRef) database by pointing your browser at http://<esxi-host>/mob/?moid=ha-host
For additional details see these two KB articles:
Reloading a vmx file without removing the VM from inventory
Managed Object Reference (MoRef) lookup
No workaround is required to change the CPU reservation.
Host Preparation
Again, by host preparation we mean installing the NSX VIB modules on the ESXi hosts. This is accomplished on a per-cluster basis by going to Installation > Host Preparation tab. Here you will see the list of clusters, and under the Installation Status column you will see Install, meaning the installation hasn’t been performed yet.
Click Install on all the clusters you need (here I have Management, Compute A and Compute B). As usual you will see a lot of tasks running in the vSphere Web Client.
If you run esxcli software vib list after the installation, the following VIBs should be present:
– esx-dvfilter-switch-security
– esx-vsip
– esx-vxlan
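To double-check from the ESXi shell, something like the following should return all three VIBs (the grep pattern is just my shorthand for the names above):

esxcli software vib list | grep -E 'esx-(dvfilter-switch-security|vsip|vxlan)'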
Configure VXLAN VTEP IP Pools
Next, we need to configure VXLAN and we can do so by clicking on Configure under the VXLAN column.
This will kick off another wizard, which asks you to create another IP pool, this time for the VTEPs. Because I have two compute clusters (A and B) and one management cluster, I’m defining the following VTEP IP pools:
Name: VTEP-Pool-1
Gateway: 192.168.250.254
Static IP Pool: 192.168.250.51-192.168.250.60
Applied to: Compute Cluster A and B

Name: VTEP-Pool-2
Gateway: 192.168.150.254
Static IP Pool: 192.168.150.51-192.168.150.60
Applied to: Management Cluster
The following screenshot shows the VXLAN settings that will be applied to Compute Cluster A and B.
The following screenshot shows the VXLAN settings that will be applied to the Management Cluster.
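Once VXLAN is configured, a quick way to verify the VTEPs is from the ESXi shell (a sketch: vmk3 and the peer address 192.168.250.52 are examples based on my pools; substitute whatever your host shows):

# list vmkernel interfaces; the new VTEP vmknic should hold an address from the pool
esxcli network ip interface ipv4 get
# ping a VTEP on another host over the vxlan netstack; -d -s 1572 sends a
# full-size, non-fragmented packet to catch MTU issues (VXLAN needs MTU 1600)
vmkping ++netstack=vxlan -d -s 1572 -I vmk3 192.168.250.52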
Configure Segment ID pool
Nearly there!
Next, we create the Segment ID pool under Installation > Logical Network Preparation > Segment ID.
The Segment ID is also known as the VNI (VXLAN Network Identifier). By defining a range (for example 5000-5999; NSX requires segment IDs to start at 5000), you let NSX Manager automatically allocate a VNI to each logical switch: the “VXLAN tag” with which packets will be encapsulated (MAC in UDP, remember?).
Configure Global Transport Zone
A transport zone defines the span of the VXLAN overlay network: the collection of VTEPs that can participate in it, which can include hosts from different vSphere clusters. It tells the hosts which logical switches they can see. On a shared multi-tenant infrastructure you can segregate logical switches by joining different hosts to different transport zones. At a minimum you will always need one transport zone, and generally one is sufficient.
From Installation > Logical Network Preparation > Transport Zones
I am using Unicast in my nested lab.
As you can see, there are three Control Plane modes, also known as VXLAN replication modes. The mode determines how Broadcast, Unknown unicast and Multicast (BUM) traffic is replicated between VTEPs:
- Multicast: relies on the physical network; requires IGMP snooping for an L2 topology and multicast routing (Protocol Independent Multicast, aka PIM) for an L3 topology
- Unicast: all replication happens using unicast traffic; no requirements on the physical switches, but more overhead for the VTEPs
- Hybrid: local replication is offloaded to the physical network (IGMP queries and reports) while remote replication uses unicast
If you’re not familiar with how VXLAN works, have a read of the VXLAN Series by Vyenkatesh Deshpande at blogs.vmware.com.
Configure Logical Switch Networks
So that’s it: we are ready to start creating the NSX logical switch networks. See the next post, NSX for Newbies – Part 5: Configure Logical Switch Networks.