5. VXLAN & Logical Switch Deployment

Next: 6. NSX Distributed Logical Router ->

OK, this is part 5 of the series, where we look at the VXLAN and Logical Switch configuration.

  • VXLAN Architecture – Key Points

    • VXLAN – Virtual Extensible LAN
      • A formal definition can be found here, but put simply, it's an extensible overlay network that can deploy an L2 network on top of an existing L3 network fabric by encapsulating an L2 frame inside a UDP packet and carrying it over an underlying transport network, which could be another L2 network or even span L3 boundaries. It is broadly similar to Cisco OTV or Microsoft NVGRE, for example. But I'm sure many folks are already aware of what VXLAN is and where it's used.
      • VXLAN encapsulation adds 50 bytes to the original frame if no VLAN is used, or 54 bytes if the VXLAN endpoint is on a VLAN-tagged transport network
      • Within VMware NSX, this is the primary (and only) IP overlay technology used to achieve L2 adjacency within the virtual network
      • A minimum MTU of 1600 is required end to end in the underlying transport network, as the VXLAN traffic sent between VTEPs does not support fragmentation – you can use MTU 9000 too
      • VXLAN traffic can be sent between VTEPs (see below) in 3 different modes
        • Unicast – the default option, supported with vSphere 5.5 and above. This places slightly more overhead on the VTEPs
        • Multicast – supported with ESXi 5.0 and above. Relies on multicast being fully configured on the transport network with IGMP (L2) and PIM (L3)
        • Hybrid – unicast for remote traffic and multicast for local segment traffic
      • Within NSX, VXLAN is an overlay network between ESXi hosts only; VMs have no knowledge of the underlying VXLAN fabric.

     

    • VNI – VXLAN Network Identifier (similar to a VLAN ID)
      • Each VXLAN network identified by a unique VNI is an isolated logical network
      • It's a 24-bit number added to the VXLAN header, which allows a theoretical limit of about 16 million separate networks (but note that in NSX version 6.0 the supported limit is 20,000, NOT 16 million as VMware marketing may have you believe) – see the short sketch after these key points
      • The VNI uniquely identifies the segment that the inner Ethernet frame belongs to
      • The VMware NSX VNI range is 5000-16777216

     

    • VTEP – VXLAN Tunnel End Point
      • The VTEP is the endpoint responsible for encapsulating the L2 Ethernet frame in a VXLAN header and forwarding it on to the transport network, as well as the reverse of that process (receiving an incoming VXLAN frame from the transport network, stripping off the encapsulation and forwarding the original L2 Ethernet frame on to the virtual network)
      • Within NSX, a VTEP is essentially a VMkernel interface (on a dedicated port group) that gets created on each ESXi server automatically when you prepare the clusters for VXLAN (which we will do later on)
      • A VTEP proxy is a VTEP (a specific VMkernel interface on a remote ESXi server) that receives VXLAN traffic from a remote VTEP and then forwards it on to the VTEPs in its local subnet. The VTEP proxy is selected by the NSX Controller and is chosen per VNI.
        • In Unicast mode – this proxy is called a UTEP
        • In Hybrid mode – this proxy is called an MTEP (pure Multicast mode does not need a proxy)

     

    • Transport Zone
      • A transport zone is a configurable boundary for a given VNI (VXLAN network segment)
      • It's like a container that houses the NSX Logical Switches created within it, along with their details, and presents them to all hosts (across all clusters) that are configured to be part of that transport zone (if you want to restrict certain hosts from seeing certain Logical Switches, you'd have to configure multiple transport zones)
      • Typically, a single transport zone across all your vSphere clusters (managed by a single vCenter) is sufficient.
  •  Logical Switch Architecture – Key Points

    • A Logical Switch within NSX is a virtual network segment, represented in vSphere as a distributed port group tagged with a unique VNI on a distributed switch.
    • A Logical Switch can also span multiple distributed switches by associating with a port group on each distributed switch.
    • vMotion is supported between hosts that are part of the same vDS.
    • This distributed port group is automatically created, when you add a Logical Switch, on all the ESXi hosts (VTEPs) that are part of the same underlying transport zone.
    • A VM's vNIC then connects to each Logical Switch as appropriate.
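To make the numbers above a little more concrete, here is a minimal Python sketch (purely illustrative, nothing NSX-specific) that packs the 8-byte VXLAN header defined in RFC 7348, validates that a VNI fits in its 24-bit field, and adds up the 50 / 54 byte encapsulation overhead mentioned in the key points:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header (RFC 7348):
    1 byte flags (0x08 = 'VNI present'), 3 reserved bytes,
    a 3-byte VNI, and 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits (0 - 16,777,215)")
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

# Encapsulation overhead added to every guest frame on the transport network:
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
print(OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN)      # 50 bytes (untagged transport)
print(OUTER_ETH + 4 + OUTER_IP + OUTER_UDP + VXLAN)  # 54 bytes (802.1Q-tagged VTEP)

print(len(vxlan_header(5001)))                       # 8 -> the VXLAN header itself
```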

 

 VXLAN Network Preparation

  1. The first step is to prepare the hosts (Host Preparation).
    1. Launch the vSphere Web Client, go to Networking & Security -> Installation (on the left) and click on the Host Preparation tab.
    2. Select the Compute & Edge clusters (where the NSX Controllers and the compute & edge VMs reside) on which you need to enable VXLAN and click Install.
    3. While the installation is taking place, you can monitor the progress in the vSphere Web or C# client.
    4. During this host preparation, 3 VIBs will be installed on the ESXi servers (as mentioned in the previous post of this series), and you can see this in the vSphere client.
    5. Once complete, it will be shown as Ready under the installation status.
  2. Configure the VXLAN networking
    1. Click Configure within the same window as above (Host Preparation tab), under VXLAN. This is where you configure the VTEP settings, including the MTU size and whether the VTEP connectivity happens through a dedicated VLAN within the underlying transport network. Presumably this is going to be common, as you still have your normal physical network for everything else, such as the vMotion network (VLAN X), storage network (VLAN Y), etc.
      1. In my example I've configured a dedicated VXLAN VLAN on my underlying switch with the MTU set to 9000 (an MTU of a little over 1600 would have been sufficient for VXLAN to work).
      2. In a corporate / enterprise network, ensure that the correct MTU size is specified on this VLAN, and also on any other VLANs that the remote VTEPs are tagged with. Communication between all VTEPs across VLANs needs at least MTU 1600 end to end.
      3. You also create an IP pool for the VTEP VMkernel interfaces to use on each ESXi host. Ensure that there's sufficient capacity for all the ESXi hosts.
    2. Once you click OK, you can see in the vSphere client / Web Client the creation of the VTEP VMkernel port group, with an IP assigned from the VTEP pool defined above along with the appropriate MTU size. (Note that I only set MTU 1600 in the previous step, but it appears to have worked out that my underlying vDS and the physical network are set to MTU 9000 and used that here automatically.)
    3. Once complete, VXLAN will show as configured under the Host Preparation tab.
    4. In the Logical Network Preparation tab, you'll see the VTEP VLAN and the MTU size, along with all the VMkernel IP assignments for that VXLAN transport network.
  3. Create a segment ID (VNI) pool – go to Segment ID and provide a range for the pool of VNIs.
  4. Now go to the Transport Zones tab and create a global transport zone.
  5. Create a Logical Switch using the Logical Switches section on the left. Provide a name and the transport zone, and select Multicast / Unicast / Hybrid mode as appropriate (my example uses Unicast mode). A scripted alternative using the NSX REST API is sketched after these steps.
    1. Enable IP Discovery: enables the ARP suppression available within NSX. ARP traffic is generated as a broadcast in a network when the destination IP is known but not the MAC. Within NSX, however, the NSX Controller maintains an ARP table, which removes the need for ARP broadcast traffic.
    2. Enable MAC Learning: useful if the VMs have multiple MAC addresses or use vNICs with trunking. Enabling MAC learning builds a VLAN/MAC pair learning table on each vNIC. This table is stored as part of the dvFilter data. During vMotion, dvFilter saves and restores the table at the new location, and the switch then issues RARPs for all the VLAN/MAC entries in the table.
  6. You can verify the creation of the associated port group in the vSphere client.
  7. To confirm that your VTEPs (behind the Logical Switches) are fully configured and can communicate with one another, double-click the Logical Switch you created, go to Monitor and run a ping test between 2 ESXi servers. Note that the switch ports that the uplink NIC adapters plug into need to be configured with the appropriate MTU size.
  8. You can create multiple Logical Switches as required. Once created, select the Logical Switch and, using the Actions menu, select Add VM to migrate a VM's network connectivity to the Logical Switch.
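For reference, steps 4 and 5 above can also be driven programmatically against NSX Manager rather than through the Web Client. The sketch below is a hedged Python example using the requests library: the manager address, credentials, transport zone ID (vdnscope-1) and switch name are placeholders, and the /api/2.0/vdn/... endpoint paths reflect my reading of the NSX 6.x API, so verify them against the NSX API guide for your version before relying on them.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")                    # placeholder credentials
VERIFY_SSL = False                              # lab only; use proper certificates in production

# 1. List the transport zones (scopes) to find the global transport zone ID.
scopes = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes",
                      auth=AUTH, verify=VERIFY_SSL)
print(scopes.status_code, scopes.text)          # look for an <objectId> such as vdnscope-1

# 2. Create a Logical Switch (virtual wire) inside that transport zone.
payload = """<virtualWireCreateSpec>
  <name>Web-Tier-LS</name>
  <tenantId>default</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>"""

resp = requests.post(f"{NSX_MANAGER}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
                     data=payload,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=VERIFY_SSL)
print(resp.status_code, resp.text)              # on success, returns the new virtualwire ID
```

The same calls can be made from any orchestration tool (vRealize Automation, curl, PowerShell and so on); for a small lab, the Web Client steps above remain the quickest way to do this interactively.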

 

There you have it: your NSX environment now has Logical Switches that all your existing and new VMs should be connecting to instead of ordinary standard or distributed switch port groups.

As it stands, these logical networks are somewhat unusable as they are isolated bubbles whose traffic cannot leave those networks. The following posts will look at introducing NSX routing using DLRs (Distributed Logical Routers) to route between different VXLAN networks (Logical Switches), and at Layer 2 bridging to enable traffic within a Logical Switch network to communicate with the outside world.

Thanks

Chan

 

Next: 6. NSX Distributed Logical Router ->

1. Brief Introduction to NSX

Next: How to gain access to NSX media ->

NSX is the next evolution of what used to be known as the vCloud Networking and Security suite within VMware's vCloud Suite – a.k.a. vCNS (now discontinued) – which, in turn, was an evolution of the Nicira business VMware acquired a while back. NSX is how VMware provides the SDN (Software Defined Networking) capability of the Software Defined Data Center (SDDC). However, some may argue that NSX primarily provides an NFV (Network Function Virtualisation) capability, which is slightly different from SDN.

The current version of NSX comes in 2 forms:

  1. NSX-V : NSX for vSphere – This is the most popular version of NSX and appears to be its future. NSX-V is intended to be used by all existing and future vSphere users alongside their vSphere (vCenter and ESXi) environment. The rest of this post and all my future posts on this blog refer to this version of NSX and NOT the multi-hypervisor version.
  2. NSX-MH : NSX for multi-hypervisors is a special version of NSX that is compatible with hypervisors other than just vSphere. Though the name suggests multi-hypervisor, actual support (at the time of writing) is limited and is primarily aimed at offering networking and security to OpenStack (Linux KVM) rather than all other hypervisors (the currently supported hypervisors are Xen, KVM & ESXi). Also, the rumour is that VMware are phasing NSX-MH out anyway, which means most if not all future development and integration efforts will likely be focused on NSX-V. However, if you are interested in NSX-MH, refer to the NSX-MH design guide (based on version 4.2 at the time of writing), which seems pretty good.

Given below is a high level overview of the architectural differences between the 2 offerings.

[Figure: Differences between NSX-V and NSX-MH]

NSX-V

NSX-V, commonly referred to simply as NSX, provides a number of features to a typical vSphere-based datacentre.

[Figure: NSX features]

NSX doesn't do any physical packet forwarding and, as such, doesn't add anything to the physical switching environment. It exists only in the ESXi environment and is independent (theoretically speaking) of the underlying network hardware. (Note, however, that NSX relies on a properly designed network, ideally in a spine-and-leaf architecture, and requires support for MTU > 1600 within the underlying physical network.)
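To put that MTU requirement into numbers, here is a small, purely illustrative Python sketch that works out the minimum MTU the underlay must carry for a given guest MTU, using the 50-byte VXLAN encapsulation overhead (54 bytes when the VTEP sits on a tagged VLAN):

```python
VXLAN_OVERHEAD = 50   # outer Ethernet + outer IP + outer UDP + VXLAN headers
DOT1Q_TAG = 4         # extra bytes if the VTEP uplink is on a tagged VLAN

def required_underlay_mtu(guest_mtu: int = 1500, vlan_tagged_vtep: bool = True) -> int:
    """Minimum MTU the physical transport network must support end to end."""
    return guest_mtu + VXLAN_OVERHEAD + (DOT1Q_TAG if vlan_tagged_vtep else 0)

print(required_underlay_mtu())       # 1554 -> hence the usual 'at least 1600' guidance
print(required_underlay_mtu(8900))   # 8954 -> fits within a 9000-byte jumbo underlay
```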

  • NSX virtualises logical switching: this is a key feature that enables the creation of a VXLAN overlay network with layer 2 adjacency over an existing layer 3 IP network. As shown in the diagram below, layer 2 connectivity between 2 VMs on the same host never leaves the hypervisor, so the end-to-end communication takes place entirely within the host. Communication between VMs on different hosts still has to traverse the underlying network fabric; however, compared to before (without NSX), the packet switching is now done within the NSX switch (known as the Logical Switch). This Logical Switch is a dvPortgroup-type construct added to an existing VMware distributed vSwitch during the installation of NSX.

[Figure: Logical switching]

  • NSX virtualises logical routing: NSX provides the capability to deploy a logical router which can route traffic between different layer 3 subnets without the traffic having to be routed by a physical router. The diagram below shows how NSX virtualises layer 3 connectivity between different IP subnets and Logical Switches without leaving the hypervisor to use a physical router. Thanks to this, routing between 2 VMs in 2 different layer 3 subnets on the same host no longer requires the traffic to be routed by an external physical router; instead it is routed within the same host by the NSX software router, allowing the entire transaction to stay local. In the past, VM1 on a port group tagged with VLAN 101 on host A, talking to VM2 on a port group tagged with VLAN 102 on the same host, would have required the packet to be routed by an external router (or a switch with a layer 3 licence) that both uplinks / VLANs connect to. With NSX, this is no longer required, and all routing, whether VM-to-VM communication within the same host or between different hosts, is handled by the software router.

[Figure: Logical routing]

 

  • NSX REST API: the built-in REST API provides programmatic access to NSX for external orchestration systems such as VMware vRealize Automation (vCAC). This programmatic access makes it possible to automate the deployment of networking configurations, which can now be tied to application configurations, all deployed automatically onto the datacentre. A short example follows the figure below.

[Figure: Programmatic access via the NSX REST API]
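As a small illustration of that programmatic access, the hedged sketch below asks NSX Manager for its list of NSX Controllers over the REST API. The address and credentials are placeholders, and the /api/2.0/vdn/controller path is based on my understanding of the NSX 6.x API, so check it against the API guide for your release.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")                    # placeholder credentials

# Retrieve the deployed NSX Controllers; the XML response describes each
# controller (ID, IP address, status and so on).
response = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/controller",
                        auth=AUTH, verify=False)  # lab only: certificate checking disabled
print(response.status_code)
print(response.text)
```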

  • NSX Logical Firewall: the NSX logical firewall introduces the concept of micro-segmentation where, put simply, through the use of an ESXi kernel module, non-permitted traffic is blocked at the VM's vNIC level so that the packets are never released into the virtual network. No other SDN / NFV solution on the market today provides this level of micro-segmentation (though Cisco ACI is rumoured to bring this capability to the ACI platform through the use of the Application Virtual Switch). The NSX logical firewall provides East-West traffic filtering through the distributed firewall, while North-South filtering is provided through the NSX Edge Services Gateway. The distributed firewall can also integrate with advanced 3rd-party layer 4-7 firewalls such as Palo Alto Networks firewalls. A minimal read-only API example follows the figure below.

[Figure: NSX Logical Firewall]
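For completeness, here is a minimal read-only sketch against the distributed firewall; again the address and credentials are placeholders, and the /api/4.0/firewall/globalroot-0/config path reflects my understanding of the NSX 6.x DFW API, so confirm it against the documentation for your version.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")                    # placeholder credentials

# Fetch the current distributed firewall configuration (its sections and rules) as XML.
dfw = requests.get(f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config",
                   auth=AUTH, verify=False)      # lab only: certificate checking disabled
print(dfw.status_code)
print(dfw.text)
```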

There are many other benefits of NSX, not all of which can be discussed within the scope of this article. However, the above should give you a reasonable insight into some of the most notable and most discussed benefits of NSX.

Next: How to gain access to NSX media ->

Cheers

Chan