So we have NSX Manager deployed and running. It’s now time to prepare the compute cluster(s) for network virtualization.
Summarising what we need to complete:
- Deploy the NSX Controller(s)
- Host preparation: that is, install the NSX VIB modules on the cluster(s)
- Configure VXLAN VTEP IP Pools
- Configure Segment ID pool
- Configure Global Transport Zone
- Configure Logical Switch Networks
In case you missed my post NSX for Newbies – Part 1: Introduction to NSX for vSphere, I suggest you have a look to refresh your memory on what the NSX Controller is responsible for.
Deploy NSX Controllers
NSX Controller system requirements
- RAM: 4GB
- Disk: 20GB
- vCPU: 4
From vSphere Web Client: Networking & Security > Installation > Management tab > under NSX Controller nodes, click the + symbol.
You need to define a pool of IP addresses that will be allocated automatically to the controllers. You can create your first IP pool from the IP Pool drop-down menu by selecting Add. In my lab I’m defining the pool as follows:
Static IP Pool: 192.168.110.201-192.168.110.210
Once the first controller has been deployed and is ready, you can repeat the process to deploy the second and third controllers. The only difference is that you don’t need to create an IP pool again; simply select the one you created previously.
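As a quick sanity check, the pool above yields ten addresses, which comfortably covers the three controllers. A minimal sketch (Python, with a hypothetical helper name) expanding a static range the way the pool does:

```python
import ipaddress

def expand_pool(start: str, end: str) -> list[str]:
    """Expand a static IP pool 'start-end' into its individual addresses."""
    first = int(ipaddress.IPv4Address(start))
    last = int(ipaddress.IPv4Address(end))
    return [str(ipaddress.IPv4Address(i)) for i in range(first, last + 1)]

pool = expand_pool("192.168.110.201", "192.168.110.210")
print(len(pool))   # 10 addresses available
print(pool[:3])    # the first three are handed to the controllers
```

Ten addresses for three controllers leaves headroom in case a controller has to be redeployed.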
IMPORTANT: Resizing the controller VM settings
Because I’m running the lab nested I can’t afford the default VM settings for the controller, so I need to resize as follows:
– RAM 4GB to 2GB
– vCPU 2 to 1
– CPU reservation from 2000MHz to 200MHz
You cannot modify the virtual machine settings by default because the VM is protected by NSX Manager. Simply unregistering and re-registering the VM would remove the protection, but it would also cause a new Virtual Machine Inventory ID (Vmid) to be generated, which is not good and could lead to problems. So what I did was:
– power off the VM
– edit the .vmx file with vi from the ESXi CLI and change the vRAM and vCPU values
– get the Vmid with vim-cmd vmsvc/getallvms
– reload the VM with vim-cmd vmsvc/reload <Vmid>
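The steps above look like this from the ESXi shell (the controller VM name and datastore path are placeholders from my lab; substitute your own, and note this must run on the host where the VM is registered):

```shell
# Find the Vmid of the controller VM (first column of the output)
vim-cmd vmsvc/getallvms | grep -i controller

# Power off the VM, then edit memSize and numvcpus in its .vmx file
vi /vmfs/volumes/<datastore>/<controller-vm>/<controller-vm>.vmx

# Reload the VM so hostd picks up the new .vmx without
# unregistering it (the Vmid is preserved)
vim-cmd vmsvc/reload <Vmid>
```

Reloading rather than unregistering/re-registering is the whole point: the Vmid stays the same, so NSX Manager keeps tracking the controller correctly.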
You can also explore the Managed Object Reference (MoRef) database by pointing your browser at http://<esxi-host>/mob/?moid=ha-host
For additional details see these two KB articles:
Reloading a vmx file without removing the VM from inventory
Managed Object Reference (MoRef) lookup
No workaround is required to change the cpu reservation.
Host Preparation
Again, by host preparation we mean installing the NSX VIB modules on the ESXi hosts. This is accomplished on a per-cluster basis by going to Installation > Host Preparation tab. Here you will see the list of clusters; under the Installation Status column, Install means the preparation hasn’t been done yet.
If you run esxcli software vib get after the installation, the NSX modules should be present:
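A quicker way to spot-check from the ESXi shell is to filter the VIB list. The module names below are what NSX-v 6.x typically installs; treat them as an assumption, since they vary between NSX versions:

```shell
# List installed VIBs and filter for the NSX kernel modules
# (esx-vsip and esx-vxlan are typical on NSX-v 6.x; names vary by version)
esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'
```

If the command returns nothing, host preparation has not completed on that host.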
Configure VXLAN VTEP IP Pools
Next, we need to configure VXLAN and we can do so by clicking on Configure under the VXLAN column.
This kicks off another wizard, which asks you to create another IP pool, this time for the VTEPs. Because I have two compute clusters (A and B) and one management cluster, I’m defining the following VTEP IP pools:
Static IP Pool: 192.168.250.51-192.168.250.60
Applied to Compute Cluster A and B
Static IP Pool: 192.168.150.51-192.168.150.60
Applied to Management Cluster
The following screenshot shows the VXLAN settings that will be applied to Compute Cluster A and B.
The following screenshot shows the VXLAN settings that will be applied to the Management Cluster.
Configure Segment ID pool
Next, we create the Segment ID pool under Installation > Logical Network Preparation > Segment ID.
A Segment ID is also known as a VNI (VXLAN Network Identifier). By defining a range, you let NSX Manager automatically allocate a “VXLAN tag” with which packets will be encapsulated (MAC in UDP, remember?).
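Because the encapsulation is MAC-in-UDP, every frame carries extra outer headers, which is why the transport network MTU is typically raised to 1600. A quick sketch of the arithmetic, using the header sizes from the VXLAN spec (RFC 7348):

```python
# VXLAN encapsulation overhead per frame:
OUTER_ETHERNET = 14   # outer Ethernet header (untagged)
OUTER_IPV4 = 20       # outer IPv4 header
OUTER_UDP = 8         # outer UDP header
VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit VNI
OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

print(OVERHEAD)        # 50 bytes of overhead
print(1500 + OVERHEAD) # 1550 -> an MTU of 1600 leaves headroom

# The VNI field is 24 bits wide, which bounds the Segment ID space:
MAX_VNI = 2**24 - 1
print(MAX_VNI)         # 16777215 (NSX allocates Segment IDs starting at 5000)
```

This is why a too-small MTU on the physical transport network silently breaks VXLAN traffic: a full-size 1500-byte guest frame no longer fits after encapsulation.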
Configure Global Transport Zone
A transport zone defines the members of the VXLAN overlay network (the VTEPs) and can span hosts from different vSphere clusters; it determines which hosts a logical switch extends to. On a multi-tenant shared infrastructure you can segregate logical switches by joining different hosts to different transport zones. At a minimum you will always need one transport zone, and generally one is sufficient.
From Installation > Logical Network Preparation > Transport Zones
I am using Unicast in my nested lab. The three replication modes are:
- Multicast: requires IGMP snooping for an L2 topology and multicast routing (Protocol Independent Multicast, aka PIM) for an L3 topology
- Unicast: all replication happens using unicast traffic; no requirements on the physical switches, but more overhead for the VTEPs
- Hybrid: local replication is offloaded to the physical network (IGMP query/report) while remote replication uses unicast
Configure Logical Switch Networks
So that’s it: we are ready to start creating the NSX logical switch networks. See the next post, NSX for Newbies – Part 5: Configure Logical Switch Networks.