NSX for Newbies – Part 2: Home lab for NSX

Right, home lab: what a hot topic nowadays for IT guys and VMware fanboys, ah?!
NSX requires quite a chunky lab in terms of number of VMs and amount of RAM, so I thought it'd be good to write an article to share my lab experience.
If you want to emulate a very minimal production scenario you would need the following:

  • 2x ESXi 5.5 hosts for Management Cluster with 6GB RAM each, 9GB of disk each;
  • 2x ESXi 5.5 hosts for a first Compute Cluster (say A) with 4GB of RAM each, 9GB disk each;
  • 2x ESXi 5.5 hosts for a second Compute Cluster (say B) and again, 4GB of RAM each, 9GB disk each.

So as you can see, just for the nested VMs you’re already using 28GB of RAM. You then need the following virtual machines for your “common services”, i.e. your Management Pod (or AMP, to use Vblock terminology :P):

  • 1x vCenter Server 5.5 with 5GB of RAM (the appliance is strongly recommended);
  • 1x Domain Controller with DNS and perhaps DHCP. I have an Active Directory running on 2008 R2 and luckily it’s working perfectly with just 800MB of RAM and 20GB of disk;
  • 1x virtual router with 256MB of RAM, 2GB of disk and a minimum of 7 interfaces. This boy will act as your L3 core switch, and for this I’m using Vyatta 6.1. Unfortunately Vyatta isn’t available anymore: Brocade acquired it back in 2012 and terminated the community (free) edition in 2013 (argh!!). But don’t panic, there’s a fork called VyOS that does pretty much the same (big thanks to the community!). I will create a separate post to show you how easy it is to set up your Vyatta/VyOS;
  • 1x virtual NAS with 512MB of RAM and a minimum of 50GB of disk (I have 200GB just in case). For this I found the best fit for purpose to be OpenFiler 2.3 Final (stay away from 2.99 Final because it’s so full of bugs you can’t even imagine!!);
  • 1x NSX Manager with 8GB of RAM and 60GB of disk (after the OVA deployment, decrease the RAM from 12GB to 8GB).

To wrap up:

  • 28GB of RAM for nested ESXi
  • 15GB of RAM for management VMs

Pretty big ah? Don’t worry, you can squeeze them all into a 32GB RAM server: thanks to TPS, the compute ESXi hosts share a lot of vRAM. See the following screenshots.

Compute cluster A memory sharing

Compute cluster B memory sharing

In fact, in my lab I’m using a single physical Dell T310 with 32GB of RAM and 4x 1TB SATA 7200rpm drives in RAID 10 (2TB usable space).
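One caveat worth knowing: on patched ESXi 5.5 (U2d onwards) and 6.0, inter-VM TPS is disabled by default (pages are “salted” per VM), so out of the box the nested hosts won’t share memory with each other the way the screenshots show. If you accept the trade-off in a lab, you can restore cross-VM sharing on the physical host:

```shell
# Lab only: re-enable cross-VM transparent page sharing on the physical host.
# Run in the ESXi Shell (or via SSH); affects VMs powered on afterwards.
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
```

Leave the default in place on anything that isn’t a throwaway lab, since the salting exists for security reasons.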

As for the virtual ESXi hosts, the quickest way I found to provision them is to create a VM template configured with the following tweaks:

  1. configure a standard vSwitch with whatever you need to connect to; in my lab I only have:
    • 1x VMkernel port for the management network
    • 1x VMkernel port for IP storage
    • 1x VMkernel port for vMotion
    • 1x port group for VM connectivity
  2. Set “Synchronize guest time with host” on the vESXi, otherwise every time you resume the VM from a suspended state you’ll have to manually fix the time

  3. do not use iSCSI: NFS is much simpler to manage, and you can access the VMs’ files from your laptop should you need to check stuff
  4. delete the default local VMFS datastore that gets created when ESXi installs. In my lab I’m using a 9GB disk, so a 1.50GB VMFS datastore gets created on the last partition.
    Why delete it? Because otherwise every clone will have a duplicate VMFS UUID; by removing it from the template you can simply recreate it manually on each deployed clone afterwards.

    OK, but why 9GB? Because if you go with 8GB you’ll end up with roughly 650MB of free space on that partition, and the vSphere Client won’t let you create the VMFS datastore as it needs a minimum of 1.3GB

  5. Set the advanced ESXi option FollowHardwareMac to 1 to automatically update the VMkernel’s MAC address whenever the virtual machine’s network adapter MAC address changes:
    esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
  6. delete the /system/uuid line entry in /etc/vmware/esx.conf and run /sbin/auto-backup.sh to save the change persistently. This ensures a new system UUID gets generated at boot time

  7. Install VMware Tools for Nested ESXi: see https://labs.vmware.com/flings/vmware-tools-for-nested-esxi (it’s just a VIB installation)
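The command-line steps above (3, 5, 6 and 7) can be batched into one session inside the template before you clone it. A sketch, assuming hypothetical names for the NAS host, the NFS export and the downloaded fling VIB path; substitute your own:

```shell
# Run inside the vESXi template (ESXi Shell or SSH) before cloning.

# Step 3: mount the NFS datastore ("nas.lab.local" and the export path are placeholders)
esxcli storage nfs add -H nas.lab.local -s /mnt/vg0/nfs -v nfs-datastore

# Step 5: keep the VMkernel MAC in sync with the VM's vNIC MAC after cloning
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

# Step 6: drop the system UUID so every clone generates its own at first boot
sed -i '/\/system\/uuid/d' /etc/vmware/esx.conf
/sbin/auto-backup.sh

# Step 7: install the VMware Tools for Nested ESXi fling
# (download the VIB first; the path below is a placeholder for wherever you saved it)
esxcli software vib install -v /vmfs/volumes/nfs-datastore/esx-tools-for-esxi.vib -f
```

After deploying each clone, remember step 4: recreate the local VMFS datastore by hand, either through the vSphere Client “Add Storage” wizard or with `vmkfstools -C vmfs5` from the CLI.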

Software Requirements

  • vCenter 5.5 or later with Web Client up and running
  • ESXi 5.0 or later
  • VMware Tools
  • IE8+ or a recent version of Chrome/Firefox

Network ports requirements

For the sake of a home lab you would normally ignore firewall port requirements, because it’s extremely likely you won’t have a firewall blocking traffic. That said, for a real environment you need:

  • TCP 443 between ESXi, vCenter Server and NSX Manager
  • TCP 443 between REST client and NSX Manager
  • TCP 902,903 between ESXi and Web Client
  • TCP 80,443 to access NSX Manager
  • TCP 22 for ssh access to NSX Manager, Controllers and ESXi hosts
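If you do have a firewall in the path, a quick way to verify the TCP ports above are reachable is a bash-only connect test (no nc required). The hostname `nsx-manager.lab.local` is a made-up example; point it at your own components:

```shell
#!/usr/bin/env bash
# Return 0 if a TCP connection to host:port succeeds within 3 seconds.
check_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

for port in 22 80 443; do
  if check_port nsx-manager.lab.local "$port"; then
    echo "port $port open"
  else
    echo "port $port blocked or host unreachable"
  fi
done
```

The `/dev/tcp` redirection is a bashism, so this won’t work under plain `sh`, but it’s handy on jump boxes where you can’t install tools.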

Full details on the requirements are in the official VMware documentation: System Requirements for NSX.


2 Comments

  1. Did you create this post yet. “I will create a separate post to show you how easy is to setup your vYatta/VyOS;”

    If so can you point me to it. If not can you help me get started with the basics for NSX.

    I have used VyOS already, so just need the setup you used.

    Thanks,

  2. Hi, Thank for this write up. I have done much the same as you, Physical server running ESXi then 6 nested vESXi but I created the VM’s using ESXi 6 and using NSX 6.2. I am running into the issue that if I have 2 VM’s on the same logical switch on different vESXi hosts they are not able to see each other, vmotion them to the same host ping works….Any thoughts? Thanks
