VMware Cloud Foundation (VCF) 3.9.1 includes the following updated Bill of Materials (BOM):
- vSphere, ESXi, and vSAN are at 6.7 Update 3b
- NSX-V is 6.4.6
- SDDC Manager is 3.9.1
All the remaining components are at the same level as VCF 3.9.
In terms of new capabilities, I would like to focus on the following major enhancements:
- Application Virtual Networks (AVN)
- Support for multiple physical NICs (pNICs) and multiple vSphere Distributed Switches (VDS)
- L3 networking support on VCF on VxRail
Application Virtual Networks (AVN)
What’s an AVN? An AVN is a construct from VMware Validated Design; in this case it refers to vRealize Suite applications deployed and connected to virtual wires, i.e. NSX logical switches. So an Application Virtual Network is still an L2 broadcast domain backed by VXLAN (in NSX-V). Wait a minute… this isn’t new, is it? No, it’s not. What’s new is how VCF deploys the vRealize Suite products.
Let’s have a look at the following diagram:
As you can see in this design, there is support for stretched networks for the management cluster applications, such as vRealize Automation (vRA), vRealize Suite Lifecycle Manager (vRSLCM), and vRealize Operations Manager (vROps). The idea is that if you have multiple sites (or regions) deployed (and configured with Cross-vCenter NSX), you will be able to fail over the management applications from Site A to Site B without re-IPing, fiddling with DNS, or reconfiguring the applications. That’s because their vNICs are connected to an NSX universal logical switch, so the L2 segment is stretched.
The diagram seems to suggest that the Region A logical switch (attached to vRLI, which is “local” to Site A) is attached to the UDLR. It is not possible to attach a Logical Switch (LS) to a Universal Distributed Logical Router (UDLR); it would have to be a Universal Logical Switch (ULS). What is actually depicted there is the fact that the Region A LS is not protected by SRM and hence can’t be failed over to the secondary site, even though the L2 segment is already stretched across sites.
During bring-up VCF can take care of the following:
- Deployment of two ECMP ESGs for N/S routing
- BGP configuration for dynamic routing (static routing is not supported; you can still add static routes on your L3 core, but that is not recommended for production)
- Deployment of two NSX logical switches, one attached to a local transport zone (TZ) for the local site/region and one attached to a universal TZ for cross region
- Deployment of NSX UDLR for E/W traffic
However, products such as vRealize Log Insight (vRLI) and the vRA Proxy Agents are by design deployed in “site-local” mode; by that I mean their vNICs are connected to local logical switches rather than universal logical switches.
One important note: a greenfield deployment of VCF 3.9.1 uses AVNs for the vRealize Suite components, but if you upgrade from 3.9.0 the vRealize Suite components will remain on a VLAN-backed distributed port group. If you want to migrate to AVNs, you will need to engage VMware GSS for the time being, because the migration has not been automated and involves many manual steps.
The message you should be getting from VMware with VCF 3.9.1 is that you need to start planning the migration from VLAN-backed port groups to overlay networks (VXLAN-backed in the NSX-V case); this will greatly reduce complexity and ultimately give you the best software-defined data center experience and capabilities.
Prerequisites for 3.9.1 are:
- BGP pre-configured on the top of the rack switches (TORs)
- use of the new deployment parameters Excel spreadsheet
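For context, the BGP prerequisite on the ToRs amounts to peering each switch with the two ECMP ESGs that VCF deploys during bring-up. The following is a rough NX-OS-style sketch of what that might look like; the ASNs, IP addresses, and descriptions are placeholder assumptions for illustration, not values from the VCF documentation (the deployment parameters spreadsheet defines the real ones):

```
! Hypothetical ToR-side BGP peering toward the two ECMP ESGs
! (placeholder ASNs and addresses -- adjust to your environment)
feature bgp
router bgp 65001
  router-id 10.0.0.1
  address-family ipv4 unicast
    maximum-paths 2                  ! allow ECMP across both ESG uplinks
  neighbor 192.168.10.2 remote-as 65003
    description ESG01 uplink
    address-family ipv4 unicast
  neighbor 192.168.10.3 remote-as 65003
    description ESG02 uplink
    address-family ipv4 unicast
```

The key point is that both ESG neighbors sit in the same AS and ECMP is enabled, so north/south traffic can use both edges.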
Multiple Physical NICs
Using the APIs (the only option for now), it is possible to configure multiple physical NICs, up to six per host. The supported combinations are:
- Management Workload Domain: 2x VDS with 2 NICs each (4 total)
- Workload Domain NSX-V: 2x VDS with 2 NICs each (4 total)
- Workload Domain NSX-T
- 1x VDS & 1x N-VDS each with 2 NICs (4 total)
- 2x VDS & 1x N-VDS each with 2 NICs (6 total)
These configurations apply only to new workload domains; it is not possible to upgrade existing domains to a multi-NIC setup.
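To make the combinations above concrete, here is a small Python sketch that models each layout as a list of (switch count, NICs per switch) pairs and checks the pNIC totals. This is purely illustrative; it is not a VCF API payload or tool:

```python
def total_pnics(switches):
    """Sum the physical NICs consumed by a list of (switch_count, nics_per_switch) pairs."""
    return sum(count * nics for count, nics in switches)

# Supported layouts per domain type, as described above
layouts = {
    "Management WLD (2x VDS)":       [(2, 2)],          # 4 pNICs
    "NSX-V WLD (2x VDS)":            [(2, 2)],          # 4 pNICs
    "NSX-T WLD (1x VDS + 1x N-VDS)": [(1, 2), (1, 2)],  # 4 pNICs
    "NSX-T WLD (2x VDS + 1x N-VDS)": [(2, 2), (1, 2)],  # 6 pNICs
}

for name, layout in layouts.items():
    pnics = total_pnics(layout)
    assert pnics <= 6, "VCF 3.9.1 supports at most 6 pNICs per host"
    print(f"{name}: {pnics} pNICs")
```

The six-pNIC ceiling only comes into play with the largest NSX-T layout; everything else fits in four.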
VCF on VxRail L3 Networking Support
With VCF 3.9.1 on VxRail it is now possible to stretch networks across L3. This applies to vSAN and vMotion traffic as well as workload networks, all of which can use different VLANs at each site. A proxy host is needed so that VxRail Manager can discover the “remote” hosts. Stretching a cluster is performed from the SOS utility in SDDC Manager; there is no GUI support at the time of writing.