5. VXLAN & Logical Switch Deployment

Next: 6. NSX Distributed Logical Router ->

This is part 5 of the series, where we look at the VXLAN & Logical Switch configuration.

  • VXLAN Architecture – Key Points

    • VXLAN – Virtual Extensible LAN
      • A formal definition can be found here, but put simply, it's an extensible overlay technology that can deploy an L2 network on top of an existing L3 network fabric by encapsulating an L2 frame inside a UDP packet and transferring it over an underlying transport network, which could be another L2 network or even span L3 boundaries. It is similar in spirit to Cisco OTV or Microsoft NVGRE, for example. But I'm sure many folks are already aware of what VXLAN is and where it is used.
      • VXLAN encapsulation adds 50 bytes to the original frame if no VLANs are used, or 54 bytes if the VXLAN endpoint is on a VLAN-tagged transport network   0.1 VXLAN frame
      • Within VMware NSX, this is the primary (and only) IP overlay technology that will be used to achieve L2 adjacency within the virtual network
      • A minimum MTU of 1600 must be configured end to end in the underlying transport network, as VXLAN traffic sent between VTEPs does not support fragmentation – an MTU of 9000 is also fine
      • VXLAN traffic can be sent between VTEPs (below) in 3 different modes
        • Unicast – Default option, supported with vSphere 5.5 and above. This places slightly higher overhead on the VTEPs 0.2 VXLAN Unicast
        • Multicast – Supported with ESXi 5.0 and above. Relies on multicast being fully configured on the transport network with IGMP (L2) and PIM (L3) 0.3 VXLAN Multicast
        • Hybrid – Unicast for remote traffic and multicast for local segment traffic. 0.4 VXLAN Hybrid
      • Within NSX, VXLAN is an overlay network purely between ESXi hosts; VMs have no visibility of the underlying VXLAN fabric.

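As a quick illustration, the 50/54-byte figure above is just the sum of the outer headers VXLAN adds around the original frame. A minimal Python sketch (the header sizes are the standard IPv4/UDP/VXLAN ones, not tied to any particular NSX release):

```python
# Components of the VXLAN encapsulation overhead described above.
OUTER_ETHERNET = 14   # outer MAC header (no 802.1Q tag)
VLAN_TAG = 4          # extra 802.1Q tag if the transport network is VLAN-tagged
OUTER_IP = 20         # outer IPv4 header
OUTER_UDP = 8         # outer UDP header
VXLAN_HEADER = 8      # VXLAN header carrying the flags and the 24-bit VNI

def vxlan_overhead(vlan_tagged: bool = False) -> int:
    """Bytes added to the original L2 frame by VXLAN encapsulation."""
    extra = VLAN_TAG if vlan_tagged else 0
    return OUTER_ETHERNET + extra + OUTER_IP + OUTER_UDP + VXLAN_HEADER

print(vxlan_overhead())       # 50 bytes on an untagged transport network
print(vxlan_overhead(True))   # 54 bytes when the VTEP sits on a tagged VLAN
```

This is also where the 1600 MTU requirement comes from: a standard 1500-byte frame plus 50 bytes of encapsulation, with some headroom.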

    • VNI – VXLAN Network Identifier (similar to a VLAN ID)
      • Each VXLAN network identified by a unique VNI is an isolated logical network
      • It's a 24-bit number that gets added to the VXLAN frame, which allows a theoretical limit of 16 million separate networks (but note that in NSX version 6.0, the supported limit is 20,000, NOT 16 million as VMware marketing may have you believe)
      • The VNI uniquely identifies the segment that the inner Ethernet frame belongs to
      • The VMware NSX VNI range runs from 5000 to 16,777,215 (the 24-bit maximum)

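The 24-bit arithmetic above is trivial but worth pinning down. A small sketch of VNI validation (the 5000 lower bound follows the NSX segment-ID convention mentioned above):

```python
# VNI range check: NSX segment IDs start at 5000; the ceiling is the
# largest value that fits in the 24-bit VNI field of the VXLAN header.
VNI_MIN = 5000
VNI_MAX = 2**24 - 1   # 16,777,215

def is_valid_vni(vni: int) -> bool:
    """True if the VNI fits the NSX-usable portion of the 24-bit space."""
    return VNI_MIN <= vni <= VNI_MAX

print(is_valid_vni(5000))    # True: first usable NSX segment ID
print(is_valid_vni(2**24))   # False: does not fit in the 24-bit field
```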

    • VTEP – VXLAN Tunnel End Point
      • A VTEP is the endpoint responsible for encapsulating the L2 Ethernet frame in a VXLAN header and forwarding it onto the transport network, as well as the reverse of that process (receiving an incoming VXLAN frame from the transport network, stripping off the encapsulation and forwarding just the original L2 Ethernet frame on to the virtual network)
      • Within NSX, a VTEP is essentially a VMkernel port group that gets created on each ESXi server automatically when you prepare the clusters for VXLAN (which we will do later on)
      • A VTEP proxy is a VTEP (a specific VMkernel port group on a remote ESXi server) that receives VXLAN traffic from a remote VTEP and then forwards it on to its local VTEPs (in the local subnet). The VTEP proxy is selected by the NSX Controller and is per VNI.
        • In Unicast mode – this proxy is called a UTEP
        • In Multicast or Hybrid mode – this proxy is called an MTEP


    • Transport Zone
      • A Transport Zone is a configurable boundary for a given VNI (VXLAN network segment)
      • It's like a container that houses the NSX Logical Switches created within it, along with their details, and presents them to all hosts (across all clusters) that are configured to be part of that Transport Zone (if you want to restrict certain hosts from seeing certain Logical Switches, you'd have to configure multiple Transport Zones)
      • Typically, a single Transport Zone across all your vSphere clusters (managed by a single vCenter) is sufficient.
  •  Logical Switch Architecture – Key Points

    • A Logical Switch within NSX is a virtual network segment, represented in vSphere as a distributed port group tagged with a unique VNI on a distributed switch.
    • A Logical Switch can also span multiple distributed switches by associating with a port group in each distributed switch.
    • vMotion is supported amongst the hosts that are part of the same vDS.
    • This distributed port group is automatically created when you add a Logical Switch, on all the VTEPs (ESXi hosts) that are part of the same underlying Transport Zone.
    • A VM's vNIC then connects to each Logical Switch as appropriate.


 VXLAN Network Preparation

  1. First step involved is to prepare the hosts. (Host Preparation)
    1. Launch the vSphere Web Client, go to Networking & Security -> Installation (left) and click on the Host Preparation tab 1. Host prep 1
    2. Select the Compute & Edge clusters (where the NSX Controllers and the compute & edge VMs reside) where you need to enable VXLAN and click Install. 1. Host prep 2
    3. While the installation is taking place, you can monitor the progress via the vSphere Web or C# client 1. Host prep 3
    4. During this host preparation, three VIBs will be installed on the ESXi servers (as mentioned in the previous post of this series), and you can notice this in the vSphere client.   1. Host prep 4
    5. Once complete, it will be shown as Ready under the installation status as follows 1. Host prep 5
  2. Configure the VXLAN networking
    1. Click Configure within the same window as above (Host Preparation tab) under VXLAN. This is where you configure the VTEP settings, including the MTU size and whether VTEP connectivity happens through a dedicated VLAN within the underlying transport network. Presumably this is going to be common, as you still have your normal physical network for everything else, such as the vMotion network (VLAN X), storage network (VLAN Y), etc.
      1. In my example I've configured a dedicated VXLAN VLAN in my underlying switch with the MTU size set to 9000 (anything slightly above 1600 would have sufficed for VXLAN to work).
      2. In a corporate / enterprise network, ensure that this VLAN has the correct MTU size specified, as well as any other VLANs that remote VTEPs are tagged with. The communication between all VTEPs across VLANs needs at least MTU 1600 end to end.
      3. You also create an IP pool for the VTEP VMkernel port group to use on each ESXi host. Ensure that there's sufficient capacity for all the ESXi hosts. 2. Configure VXLAN networking
    2. Once you click OK, you can see in the vSphere client / Web Client the creation of the VTEP VMkernel port group, with an IP assigned from the VTEP pool defined, along with the appropriate MTU size (note that I only set MTU 1600 in the previous step, but it seems to have worked out that my underlying vDS and the physical network are set to MTU 9000 and used that here automatically). 3. VXLAN networking prep check
    3. Once complete, VXLAN will appear as complete under the Host Preparation tab as follows 4. VXLAN Enabled
    4. In the Logical Network Preparation tab, you'll notice the VTEP VLAN and the MTU size, with all the VMkernel IP assignments for that VXLAN transport network, as shown below 5. VXLAN Transport enabled
  3. Create a Segment ID (VNI) pool – go to Segment ID and provide a range for a pool of VNIs 6. VXLAN Segment ID pool (VNI)
  4. Now go to the Transport Zone tab and create a global Transport Zone.  7. VXLAN Global Transport Zone
  5. Next, create a Logical Switch using the Logical Switches section on the left. Provide a name and the Transport Zone, and select the Multicast / Unicast / Hybrid mode as appropriate (my example uses Unicast mode).
    1. Enable IP Discovery: Enables the ARP suppression available within NSX. ARP traffic is generated as a broadcast in a network when the destination IP is known but not the MAC. However, within NSX the NSX Controller maintains an ARP table, which removes the need for ARP broadcast traffic.
    2. Enable MAC Learning: Useful if the VMs have multiple MAC addresses or are using vNICs with trunking. Enabling MAC Learning builds a VLAN/MAC pair learning table on each vNIC. This table is stored as part of the dvfilter data. During vMotion, dvfilter saves and restores the table at the new location. The switch then issues RARPs for all the VLAN/MAC entries in the table.  8. Add logical Switch
  6. You can verify the creation of the associated port group by looking at the vSphere client. 10. Check the port group creation
  7. In order to confirm that your VTEPs (behind the Logical Switches) are fully configured and can communicate with one another, you can double-click the Logical Switch created, go to Monitor and do a ping test between 2 ESXi servers. Note that the switch ports where the uplink NIC adaptors plug in need to be configured with an appropriate MTU size (as shown) 10.1 Verify the VTEP communication
  8. You can create multiple Logical Switches as required. Once created, select the Logical Switch and, using the Actions menu, select Add VM to migrate a VM's networking connectivity to the Logical Switch.  11. Add VM step 1 12. Add VM step 2 13. Add VM step 3

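A useful bit of arithmetic behind the ping test in step 7: when you ping with the don't-fragment bit set, the largest ICMP payload that fits a given MTU is the MTU minus the IPv4 and ICMP headers. A quick back-of-envelope helper (standard header sizes assumed; this mirrors the size value you would pass to such a ping test):

```python
# Largest non-fragmenting ICMP payload for a given transport-network MTU.
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Biggest ICMP payload that fits the MTU without fragmentation."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(1600))  # 1572
print(max_ping_payload(9000))  # 8972
```

If a ping of this size fails between two VTEPs while smaller pings succeed, a device in the transport path almost certainly has a smaller MTU configured.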

There you have it. Your NSX environment now has Logical Switches that all your existing and new VMs should be connecting to, instead of standard or distributed switches.

As it stands now, these logical networks are somewhat unusable as they are isolated bubbles and traffic cannot go outside of them. The following posts will look at introducing NSX routing using DLRs – Distributed Logical Routers – to route between different VXLAN networks (Logical Switches), and introducing Layer 2 bridging to enable traffic within a Logical Switch network to communicate with the outside world.




Next: 6. NSX Distributed Logical Router ->

4. NSX Controller Architecture & Deployment

Next: 5. VXLAN & Logical Switches ->

In the previous step of this series of NSX posts, we looked at the NSX Manager and its deployment. In this article, we are going to have a quick look at the NSX Controller architecture at a high level and how to deploy them.

  • NSX Controller Architecture – Key points

    • They provide
      • VXLAN and Distributed Logical Router (DLR) workload handling, and distribution of that information to the ESXi hosts
      • Workload distribution through slicing dynamically amongst all controllers
      • Removal of multicast
      • ARP broadcast traffic suppression in VXLAN networks
    • They store
      • ARP table:          A VM's ARP request for a MAC is intercepted by the host and sent to the NSX Controllers. If the NSX Controller has the entry, it's returned to the host, which then replies to the VM locally, resulting in no ARP broadcast.
      • VTEP table
      • MAC table
      • Routing table:    Routing tables are obtained from the DLR Control VM
    • A cluster of 3 NSX Controllers is always recommended, to avoid a split-brain scenario
    • 4 vCPU & 4GB RAM per controller
    • Should be deployed on the vCenter linked to the NSX Manager (meaning, on the compute or service & edge cluster, NOT the management cluster)
    • User interaction with the NSX Controllers is through the CLI
    • Control plane communication is secured by SSL certificates
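The ARP-suppression behaviour described above (hosts consult the controller's ARP table before ever broadcasting) can be modelled in a few lines. This is a purely illustrative toy, not NSX internals; the real controllers keep this state per VNI and per host:

```python
# Toy model of controller-side ARP suppression: (VNI, IP) -> MAC.
controller_arp_table = {("vni-5001", "10.0.0.5"): "00:50:56:aa:bb:cc"}

def resolve(vni: str, ip: str):
    """Host intercepts a VM's ARP request and asks the controller first.

    Returns (mac, suppressed): the MAC if known, and whether the
    ARP broadcast was avoided.
    """
    mac = controller_arp_table.get((vni, ip))
    if mac is not None:
        return mac, True   # answered locally; no broadcast on the segment
    return None, False     # unknown entry; falls back to a real ARP broadcast

print(resolve("vni-5001", "10.0.0.5"))   # known entry: broadcast suppressed
print(resolve("vni-5001", "10.0.0.9"))   # unknown entry: would broadcast
```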
  • NSX Manager interaction with NSX Controller

    • NSX mgr and vCenter systems are linked 1:1
    • Installs the UWA and a few kernel modules (VXLAN VIB, DLR VIB, DFW VIB) on the ESXi servers of the clusters managed by the linked vCenter Server during the host preparation stage
      • UWA = User World Agent
        • Runs as a service daemon called netcpa (/etc/init.d/netcpad status)
        • Mediates communication between the NSX Controllers and the hypervisor kernel modules, except for the DFW
        • Maintains logs at /var/log/netcpa.log on the ESXi hosts of the compute & edge clusters
      • Kernel modules
        • Distributed Firewall VIB: Communicates directly with the NSX Manager through the vsfwd service running on the host
        • Distributed Logical Router VIB: Communicates with the NSX Controllers through the UWA
        • VXLAN VIB: Communicates with the NSX Controllers through the UWA 1.6. UWA


    • NSX Manager also configures the NSX Controller nodes through the REST API 1.3. Controller high level

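To make the REST point concrete, this is roughly how any REST client would build an authenticated request to the NSX Manager API. The host name and credentials below are made up for illustration, and the request is only constructed, not sent:

```python
import base64
import urllib.request

def build_nsx_request(manager: str, path: str, user: str, password: str):
    """Build (but do not send) a basic-auth HTTPS request to an NSX API path."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(f"https://{manager}{path}")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/xml")   # NSX 6.x API speaks XML
    return req

# Hypothetical manager host; the controller path is the kind of
# resource the API exposes.
req = build_nsx_request("nsxmgr.lab.local", "/api/2.0/vdn/controller",
                        "admin", "secret")
print(req.full_url)
```

Sending it would just be `urllib.request.urlopen(req)` (with appropriate TLS trust for the Manager's certificate, per the SSL note in the deployment post).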

    • For each NSX role (such as VXLAN, Logical Routers, etc.) a master controller is required
    • Slicing is used to divide the NSX Controller workload into different slices and allocate them to each controller (controlled by the master) 1.5. Slicing

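The slicing idea can be sketched as a simple hash-and-distribute scheme: units of work (say, VNIs) map to a fixed number of slices, and the master spreads slices across the cluster. The slice count and round-robin policy here are assumptions for illustration, not NSX internals:

```python
# Illustrative slicing: VNI -> slice -> owning controller.
NUM_SLICES = 8
controllers = ["controller-1", "controller-2", "controller-3"]

def slice_for(vni: int) -> int:
    """Hash a VNI into one of the fixed slices."""
    return vni % NUM_SLICES

def controller_for(vni: int) -> str:
    """Master assigns slices round-robin across the cluster."""
    return controllers[slice_for(vni) % len(controllers)]

for vni in (5000, 5001, 5002):
    print(vni, "-> slice", slice_for(vni), "->", controller_for(vni))
```

The payoff of this design is that when a controller node fails, only its slices need to be reassigned, rather than the entire workload.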

    • Highlighted below in the diagram are the typical communication channels between the NSX Controllers and other NSX components. 0. NSX mgr communication


NSX Controller Deployment

Deploying the NSX Controllers (3 recommended, as stated above) is fairly straightforward

  1. Launch the vSphere Web Client (for the compute or edge cluster vCenter Server, NOT the management cluster one) and select Networking and Security – note that you need to have logged in to the vSphere Web Client as an NSX enterprise admin user (how to set up the rights was covered in the previous post of this series)
  2. Select Installation from the left pane
  3. At the bottom, under NSX controller nodes section, select the plus sign to add the first NSX controller node and provide all the information requested in the next screen. Note the below
    1. Connected to: You need to select the management network port group here
    2. IP Pool: You need an IP pool of at least 3 addresses (for the 3 NSX Controllers)
    3. Password: The NSX Controller CLI password is specified here. All subsequent controller nodes deployed will use the same password.   3. Add controller wizard 4. Add NSX-Controller-Pool
  4. Once complete, click OK and you can see the first controller being deployed 6. 1st NSX Controller deployment
  5. Once deployed, you can SSH (e.g. via PuTTY) into the CLI using the IP (the first IP of the pool you specified above) and verify the control cluster status 6.1 Show control-cluster status
  6. Now follow the same steps to deploy the 2nd and 3rd NSX Controller nodes too, and verify CLI access 7. 2nd & 3rd Controller node deployment 8. Deploy all 3 controller nodes


That’s it, you now have your NSX controller clusters fully deployed and configured.

In the next post of the series, we will look at Logical switches and VXLAN overlays..

Next: VXLAN & Logical Switches ->




NetApp Integrated EVO:RAIL

NetApp has announced their version of the VMware EVO:RAIL offering – NetApp Integrated EVO:RAIL solution. So I thought I’d share with you some details if you are keen to find out a bit more.

First of all, VMware EVO:RAIL is one of the true hyper-converged infrastructure solutions available in the market today, and I'd encourage you to read up a little more about it here first if you are new to such hyper-converged solutions. A key element of the traditional VMware EVO:RAIL offering is that the underpinning storage is normally provided by VMware VSAN. While there are lots of good things and a great vibe in the industry about VSAN as a disruptive software-defined storage technology with plenty of potential, if you come from a traditional storage background where you understand the importance of specialist storage solutions (SAN) that have built up their storage capabilities over years of work in the field (think EMC, NetApp, 3PAR, HDS), you may feel a little nervy about having to put your key application data on a relatively new storage technology like VSAN.

Some of these storage vendors recognised this and added their storage tech to the same VMware EVO:RAIL offering, with a view to complementing the basic VMware EVO:RAIL offering. A list of those available can be found here (but please note that not all the vendors that appear there offer their own storage with the VMware EVO:RAIL offering – some simply provide the server hardware with VMware VSAN as the only storage option, and this is not made very clear). NetApp Integrated EVO:RAIL is NetApp's version of this solution where, alongside VMware VSAN to store temporary and less important data, a dedicated NetApp enterprise SAN cluster with all the NetApp innovation found within its Data ONTAP operating system is also made available to customers within the EVO:RAIL solution automatically. (EMC also announced something a little similar recently, offering a VSPEX BLUE hyper-converged appliance with VMware EVO:RAIL, which you can read up about here. Until then, they only sold EVO:RAIL with just VMware VSAN rather than with a bundled EMC storage offering behind it, so be careful if you are considering an EVO:RAIL offering from EMC.)

Couple of background info points on the concept of hyper-converged infrastructures first,

  • The integrated / converged infrastructure market is, and has been, growing across many use cases of late. For example, FlexPod & VBLOCK have been massive successes, and the estimation is that 14.6% of the hardware market (server, storage & networking) is to be part of an integrated infrastructure.
  • Hyper-converged infrastructure such as VMware EVO:RAIL is naturally the next evolution of this. EVO:RAIL can be classed as a true hyper-converged solution compared to some other popular integrated solutions (that use a 3rd-party hypervisor) such as Nutanix and SimpliVity, which are also often referred to as hyper-converged platforms.
  • It was estimated that the hyper-converged market was worth around $400-500 million for 2014
  • Amongst many use cases, hyper-converged solutions are touted as a good fit for the likes of branch offices, where due to limited staff and infrastructure isolation requirements, the simplicity of the solution setup and the modular, self-sufficient nature of the solution have been seen as a good fit.
  • NetApp's view seems to be that this (VMware EVO:RAIL) is very much a prescriptive solution that is not as scalable as a traditional infrastructure consisting of separate compute, storage & network nodes (i.e. FlexPod, VBLOCK), and it's probably a view shared by the majority of the storage vendors.

Let's take a closer look at what the NetApp Integrated EVO:RAIL solution is and what it's going to give you.

  • NetApp and VMware have had a long-standing history of joint innovation, with more than 40,000 joint customers to date

1. History

  • NetApp Integrated EVO:RAIL brings a trusted storage platform vendor into the existing VMware EVO:RAIL architecture and is naturally only targeted at VMware customers.
  • Given below is a technical summary of the NetApp Integrated EVO:RAIL solution.
    • NetApp branded compute nodes (Co-branded with VMware)
      • Fixed server configuration similar to other competitive EVO:RAIL solutions.
      • 4 independent server nodes per NetApp server chassis
      • Dual Intel E5-2620v2 CPUs per server with 48 cores total per chassis
      • 192GB of RAM per server with 768GB of RAM total per chassis
      • Dual 10GbE NIC (optical or copper) SFP+ per server
      • NetApp fully provides all the server hardware support (the actual OEM name is a secret) – this should not be too much of a concern to customers, as a compute node is not massively different to the SAN controllers (both x86 systems) that they've been supporting for years.
    • NetApp Storage nodes
      • Comes with a NetApp FAS2552 highly available SAN with Flash Pool (Flash Pool is a way for NetApp to use SSD disks in the shelves as a caching layer to optimise random-read and random-overwrite workloads, typically seen in VDI, OLTP databases and virtualisation. More info here.)
      • Includes the Premium software bundle, which includes:
        • NetApp® Virtual Storage Console
        • NetApp NFS Plug-in for VMware VAAI
        • NetApp clustered Data ONTAP
        • NetApp Integration Software for VMware EVO:RAIL
        • NetApp FlexClone, SnapRestore, SnapMirror, SnapVault, Single Mailbox Recovery, SnapManager Suite
      • Approximately 12.6TB of NetApp usable capacity for enterprise data, with SSDs included for Flash Pool (+6.5TB of VSAN usable capacity)
      • Based on a FAS2552 in a switchless cDOT cluster
      • Virtual SAN for the vSphere infrastructure (as a base component to bring the solution components up and running initially)
    • VMware Software Included
      • VMware EVO:RAIL software
      • VMware vCenter Server
      • VMware vSphere Enterprise Plus
      • VMware vRealize Log Insight
      • VMware Virtual SAN

Given below is the physical connectivity architecture of the NetApp integrated Evo:RAIL

2. Connectivity

  • The current offering has 2 types of storage:
    • VMware VSAN storage: Basic local server storage which is controlled by VSAN. Base application, SWAP space and temporary data can be placed here.
    • NetApp storage: Used for application deployments that require DR (NetApp SnapMirror, etc.), granular performance control (VST), security and all traditional SAN requirements. For example, database servers like SQL Server and Oracle, and other applications like SAP, SharePoint and Exchange, as well as VDI deployments that require application integration for backup and recovery, can have their data placed on the NetApp for the SnapManager application integration.
  • NetApp Integrated EVO:RAIL also comes with the following benefits
    • NetApp Global Support providing,
      • Single contact for solution support
      • 3 years of NetApp SupportEdge Premium Services for compute, storage, and NetApp and VMware software (note that NetApp already specialise in this joint support model through the FlexPod support arrangement between NetApp, Cisco and VMware, which they are presumably leveraging here)
      • 3-year hardware warranty (NetApp storage and server hardware)
      • Onsite Next Business Day and Same Day 4 hour parts replacement
  • Simple Deployment
    • Additional EVO:RAIL configuration engine integration software from NetApp (click and launch from the EVO:RAIL home page) aims to simplify the deployment of the NetApp storage as part of the EVO:RAIL deployment.
    • Key points to note here are,
      • Simple setup and configuration & NetApp best practices automatically applied
      • Unified management across virtual and storage environment using vCenter Web Client with integrated NetApp Virtual Storage Console
      • Deep application integration: Exchange, SQL Server, SharePoint, Oracle and SAP
    • The overall deployment takes approximately 11 minutes for the EVO:RAIL, plus about 5 minutes for the NetApp SAN
    • A NetApp automation VM (called NTP-QEP) is deployed as a part of the initial deployment configuration automatically which acts as the glue between the EVO:RAIL management software and the NetApp hardware (I wonder if we can get this appliance with an API access so we can point this as a standalone NetApp?? That would be pretty awesome now wouldn’t it??)

4. Demo 1

    • The current prototype version of the integration software delivered through this VM can be accessed when you log in to the EVO:RAIL management console, via the NetApp icon on the left. Once launched, it takes you to a simple data-collection screen that asks for vCenter credentials, the storage system password, management & data network details and the licence details for the NetApp. Once they are provided and submitted, the automation engine will go ahead and configure the whole NetApp cDOT cluster – VSC VM deployed, cluster instantiated, node management LIFs created, SP configured, Flash Pool configured, SVM and FlexVols created & datastores mounted to VMware for use – all automatically, based on NetApp best practice. Things like deduplication are also automatically enabled.
    • Since the NetApp Virtual Storage Console plugin is automatically installed, you can easily apply any additional NetApp configuration through that afterwards if you really want.
  • Current planned use cases
    • Mainly aimed at branch offices as a solution
    • Also recommended as a point solution aimed at achieving compliance and application integration, such as database system deployments with built-in backup and DR
    • Also positioned for VDI deployments (due to the built in flash option and the ease of deployment) with integrated backup and DR
  • Ordering & Availability
    • All components are available as a single product with two SKUs: a product SKU and a support SKU. That's it, and they include all NetApp and VMware software components.
    • Targeted availability for ordering is somewhere around Q1/Q2 this year (2015)

Sounds like an interesting proposition from NetApp, and I can see the value. Especially if you are an existing NetApp customer who knows and is used to all the handy tools available from the storage layer, and who is looking at VMware EVO:RAIL for a point solution or a branch office solution, this would be a simple no-brainer.


Slide credit goes to NetApp..!



3. NSX manager deployment

Next: NSX Controller Architecture & Deployment ->

The first deployment task involved in a typical NSX deployment is to deploy the NSX Manager. The NSX Manager is the centralised management component of NSX and runs as a virtual appliance on an ESXi host. This article aims to summarise all the usual steps required to effectively plan & deploy this NSX Manager appliance.

  1. Consider the pre-requisites.

    1. A dedicated management cluster:
      1. This is an important consideration. The NSX Manager needs to be deployed in a dedicated management / infrastructure cluster that is separate from the compute cluster (where all of your production VMs live). The NSX Installation and Upgrade Guide states this: “VMware recommends that you install NSX Manager on a dedicated management cluster separate from the cluster(s) that NSX Manager manages. Each NSX Manager works with a single vCenter Server environment. The NSX Manager requires connectivity to the vCenter Server, ESXi host, and NSX Edge instances, NSX Guest Introspection module, and the NSX Data Security virtual machine. The NSX Manager should be run on an ESX host that is not affected by down time, such as frequent reboots or maintenance-mode operations. Thus, having available more than one ESX host for NSX Manager is recommended”
      2. The topic of a dedicated management cluster could be discussed in a post of its own. But for the sake of an NSX deployment (and a lot of other VMware products such as vRealize Automation, etc.), a dedicated management cluster that is not dependent upon the compute cluster (where your production workloads reside) is a must-have requirement. You can refer to this as a management cluster or an infrastructure cluster. This has become almost a must-have these days, not just because of NSX, but also because some other VMware products such as vRealize Automation (vCAC) also recommend a dedicated management cluster. A typical example would look like below. 0 NSX deployment architecture
      3. VMware would even go a step further and recommend separating out another dedicated cluster as an edge cluster too, which could be especially important in a highly scaled-out, large NSX deployment. But in all honesty, I cannot see this happening with the majority of VMware customers, who would likely make the edge cluster the same as their compute cluster. But if you are talking about a large enough NSX deployment to warrant a dedicated edge cluster, a similar deployment architecture would look like below. 0. Scaled out deployment
    2. Management system requirements: Given below are some key management requirements
      1. Supported Web browser (Internet explorer 8, 9 & 10 only, 2 most recent Mozilla Firefox or 2 most recent google Chrome)
      2. vSphere Web Client (all NSX settings are managed through the vSphere Web Client only, as there's no plugin for the C# client)
    3. vSphere requirements:  The compute cluster must have the following vSphere requirements
      1. Enterprise plus licenses (require the ability to use the VDS-vSphere Distributed Switch)
      2. vCenter Server (managing the compute & Edge clusters) to be vCenter 5.5 or later
      3. All ESXi servers in the compute & Edge clusters to be 5.5 or higher (if ESX 5.0, multicast has to be used for VXLAN)
      4. VMware tools to be installed
    4. Communication requirements:  The following ports are required to be available for communication between NSX manager and the NSX components
      1. 443 between the NSX manager and ESXi hosts & vCenter server (of the compute & edge cluster)
      2. 443 between the REST client and NSX server (a rest client would be something like a vRealize Orchestrator for example)
      3. TCP 80 and 443 to access the NSX manager between the management host and vCenter server & NSX manager
      4. TCP 22 for CLI troubleshooting between management host and NSX manager
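The port matrix above can be sanity-checked from a management host before deployment with a simple TCP connect test. A hedged sketch (a plain connect test cannot tell you *which* service answered, only that something is listening; the host name in the commented example is hypothetical):

```python
import socket

# Ports from the communication requirements above.
NSX_PORTS = {
    "NSX Manager <-> ESXi / vCenter (HTTPS)": 443,
    "Management host -> NSX Manager (HTTP)": 80,
    "CLI troubleshooting (SSH)": 22,
}

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage against a hypothetical NSX Manager:
# for label, port in NSX_PORTS.items():
#     print(label, port, tcp_port_open("nsxmgr.lab.local", port))
```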
  2. Understand the NSX deployment order: The following deployment and configuration order must be followed during a typical deploymentNSX configuration order

  3. Obtain the NSX manager OVA file (refer to the previous post in this series to find out how)

  4. Deploy the NSX manager OVA file (within the management cluster vCenter server)

    1. Select the location of the NSX Manager OVA file (either through the vSphere Web Client or the vSphere C# client) 1
    2. Check the OVA version and click next                                                                  2
    3. Accept the EULA and click next                                                                                 3
    4. Provide a name for the NSX manager (don’t forget to manually add a DNS entry on your DNS server)  4
    5. Select the cluster on which to deploy the NSX manager (this should be the management cluster)    5
    6. Select any resource pools if required                                                                                                                    6
    7. Select the datastore to store the NSX manager OVF (should be specific to the management cluster)     7. Datastore
    8. Select the disk format (I would recommend Eager Zeroed Thick unless you are on NFS, as in the below screenshot)  8. Disk type
    9. Select the appropriate management network for the NSX manager (note the communication path with NSX manager) 9. Network Mapping
    10. Provide the CLI “admin” account password & the CLI privilege mode password for the NSX manager VM, networking properties such as Host name, IP details, DNS and NTP server details. Use redundant values here for high availability.10. Properties-1 10. Properties-2
    11. Select the power on after reboot check box and click finish                                                           11. Power On after deploy
  5. Perform the initial configuration of the NSX manager server

    1. Log in to the NSX Manager instance at https://NSX-Manager-Host-Name-Or-IP using a supported browser. The credentials used here are admin and the password you provided for the CLI admin account (during the OVA deployment) 12. Login
    2. Click on View Summary                                                                                                                                                                   13. Manage settings
    3. Ensure that all NSX manager components are running and click on the manage tab at the top. Under the selected General section on the left, configure the NTP server settings (if not set automatically), syslog server details for log forwarding and any locale details if different from default.14. General settings
    4. Under the Network section, verify the general network settings are accurately set based on deployment parameters 15. Network tab
    5. Assign any specific SSL certifications required under SSL certification section. It is recommended that the default, self signed certificate is NOT used for production deployments 16. SSL Cert
    6. Select Backup & Restore on the left and click change under the FTP server settings to configure an FTP server (FTP and SFTP are supported) as a backup location 17. backup location
    7. Test the backup process by performing an NSX manager backup using the backup button 18. Backup verification
    8. Click on the NSX Management Service under COMPONENTS on the left and configure the lookup service. The lookup service configuration is optional but recommended. The lookup server should be your SSO server & the default service port number is 7444. The default SSO administrator account (if using vCenter SSO) would be Administrator@vSphere.local on vCenter 5.5 or higher, or admin@System-Domain on vCenter 5.0 / 5.1 19. Look up service
    9. Now configure the vCenter service registration. Note that this vCenter server needs to be the vCenter server managing the compute & edge clusters, NOT the management cluster. You’d also need a vCenter administrative account to connect to the vCenter with, and I would normally create a dedicated NSX service account on the Active Directory (or whatever your directory server system is) with administrative privileges within that vCenter. (keep a note of that service account’s credentials as you’ll need it in step 11 below) 20. vCenter integration
    10. Ensure that both the lookup service and the vCenter server registrations are successful, with a green circle against the status of each. 21. Verification
    11. Now login to the vSphere web client (of the compute & edge cluster vCenter) using the NSX service account previously used to register NSX manager with the vCenter server. Note that at this point in time, that is the only account that has permission to see & configure the NSX manager instance within vCenter (note that we already allocated vCenter administrative rights to this account, so you can login through the web client) 22. vSphere web client login
    12. Once logged in, click on Networking & Security on your left                                                                                                            23. vSphere web client home
    13. Now click on NSX Managers on the left and then select the IP address of your NSX manager 25. NSX managers
    14. Now click on the Manage tab at the top and then the Users tab. Verify that you only have 2 users here: the default admin user created during the appliance deployment, and the domain account you specified during the integration of NSX manager with vCenter. Now click the plus sign to add a user. You can add a vCenter user or a vCenter group, and a vCenter user can be an Active Directory user (provided that the Active Directory is configured within your vCenter SSO). Note that Active Directory groups don’t seem to work here as it needs to be an individual account. If your vCenter admin account would also need NSX administrative rights, specify it here in the format of Domain\AccountName and click next. 27. Add user
    15. Select the appropriate NSX role. You would need the enterprise administrator role assigned to at least one other account (unless you are going to use the service account credentials to configure NSX, which is not recommended). So I’m giving a dedicated domain account the NSX enterprise administrator privileges here, and I will use that account to login to the vSphere web client to configure NSX afterwards. That account also happens to have vCenter administrative rights, in order to be able to deploy various NSX components. You can tie down privileges so that NSX enterprise administrators and vCenter administrators are separate accounts if you wish, but the NSX admin account would need the following permissions within vCenter
      1. User permission to add and power on VMs within the compute cluster vCenter
      2. Permission to add files to the VM datastores within the compute cluster vCenter  28. Enterprise admin
    16. Now, when you log out of the NSX service account and log back in to the vSphere web client with the new account you’ve allocated NSX enterprise administrative rights to, you will be able to see the NSX manager instance and configure all other NSX components.
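As a side note to steps 9 and 10 above, the lookup service and vCenter registrations can also be checked outside the UI, over the NSX Manager REST API. The sketch below is a minimal, hypothetical Python example: the hostname and credentials are placeholders, and the `/api/2.0/services/vcconfig` endpoint is assumed from the NSX API guide. It only builds the authenticated request; the actual call is left commented out since it would need a live NSX manager (and certificate handling for the default self-signed cert).

```python
import base64
import urllib.request

NSX_MANAGER = "nsxmgr.lab.local"    # placeholder hostname
USER, PASSWORD = "admin", "secret"  # placeholder credentials

def build_request(path):
    """Build an authenticated GET request for the NSX Manager REST API
    (basic auth, same admin account used for the UI login)."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(f"https://{NSX_MANAGER}{path}")
    req.add_header("Authorization", f"Basic {token}")
    return req

# /api/2.0/services/vcconfig should return the current vCenter registration
# as XML; a 200 response with the expected vCenter IP confirms step 10.
req = build_request("/api/2.0/services/vcconfig")
print(req.full_url)
# In a live environment you would then run:
#   urllib.request.urlopen(req, timeout=10)
```

This is only a sanity-check sketch, not a replacement for the UI workflow described above.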

Hope this was useful. In the next article of the series, we will look at how to configure basic NSX components such as VXLAN, logical switches…etc

Next: NSX Controller Architecture & Deployment ->



2. How to gain access to NSX install media

Next: NSX Manager Deployment ->

Ok, this is the 2nd post in the NSX series. It’s about how to gain access to the NSX installation media (especially if you are part of the average Joe Bloggs community) for you to try it out, which seems to be unclear to many (and wasn’t clearly documented by VMware until recently, in one place)

Now, when I first heard about VMware pushing NSX to customers, especially after the discontinuation of vCloud Networking and Security, the first thing I wanted to do was to get hold of the evaluation media for NSX along with the official documentation and try it out in my lab, having done the same with all other VMware products since the vSphere 3.5 U2 release, initially as an ordinary customer and later as a VMware partner. However, to my surprise, when I logged in to my VMware account, I was not able to download the installation media / appliance as it said I was not entitled to download it. I work for a large VMware partner in the channel and I have almost unrestricted access to all VMware product media downloads, together with NFR licenses that I can use for study, lab & demo purposes. So I was especially disappointed to see everyone talking about NSX and blogging about it being deployed in their labs…etc, yet as a large VMware partner, I (or anyone else in my company for that matter) didn’t have access to the media. (see the screenshot below). I’d spoken to a large number of other VMware channel partners and their techies, and some large VMware customers who have close relationships with VMware, and everyone had the same issue. Just cannot seem to get hold of the download…!!

1. Cannot download

After some digging through the VMware channel team and partner alliance team, I found out that the NSX business unit within VMware is keeping a tight grip on the software, to the degree that they would not want you to have access to download NSX unless you fit one of the following criteria.

  • An NSX customer who’s bought the license / NSX accelerator service through the VMware PSO (professional services) arm
  • Someone who has completed the NSX ICM (Install, Configure, Manage) course


So, the first option means they would want you to buy / pay for it through a starter kit / accelerator pack, which in all honesty I wouldn’t want to as a customer, especially when I can download every other VMware product for free and evaluate it for 60 days to decide whether it’s worth paying for. So, Nah….! to that one

The second option means you need to have done the NSX ICM course. Now this too could be seen as an unnecessary expense to the average customer, as the course isn’t cheap. I (well… my company to be precise) had to pay around £3,000.00 (in the UK) for me to attend this course. Again, I wasn’t too thrilled about having to go down this route, especially since I am an SE at a VMware reseller & solution provider who sells VMware products, and they were handicapping my ability to learn the damn product before I could position it for customers, which didn’t make much sense. But as it turned out, I had to do the ICM course anyway as a starting step towards earning the VCP:NV certification (work in progress, amongst other things) and I finally gained access to the software in a legitimate way. When you finish the course, after about 2 weeks, what’s supposed to happen is that you are submitted for approval to receive access to the NSX media (whether you are a customer or a partner). Only if you are successful do you then receive an email from the Nicira team with a link to either create an account on the Nicira website or to reset the password to login to your Nicira portal (provided that the Nicira team has already created you an account, after verifying that you’ve completed the course and approving you as a suitable candidate). The email I received looked like the below.

2. Nicira Welcome email

As you can see, you are NOT allowed to share this account with anyone else without permission from the POC team within Nicira business unit.

Once you’ve got your password reset, you can login to the https://apps.nicira.com/ url with your credentials and you’ll finally have access to download the media from there (note that you may still NOT have access to download the media from the generic My VMware portal where you download all other VMware software from, unless perhaps you’ve actually bought it)

3. Nicira login

Once you login, you’ll have access to the NSX download. In my case I have access to both the NSX-V and NSX-MH versions, but I’m unsure what you’d be allowed access to.

4. NSX-V download



You also get access to an evaluation license key (under entitlements) which appears to be valid for a lot longer than the standard 60 day evaluation period.

So, as far as I’m aware, these are the ONLY 2 ways available to you as a partner or as a customer to gain access to the download to play with it yourself. And I have spoken to a lot of people, including specialist engineers within the NSX BU within VMware as well as the director of the VMware networking and security division in EMEA, and they’ve all confirmed this to be the case. So, the bottom line is, if you need access to it, it’s gonna cost you one way or another…!

Now, if you are happy not to have access to the install bits but simply want to play with it, there’s a 3rd option available to you, and that’s called Hands on Labs. VMware Hands on Labs are free and anyone can sign up with an account to access various hands on labs. I’ve tried HOLs out and they are pretty awesome. And there are a number of different hands on labs you can take that involve NSX. Warning….! These labs are quite lengthy and are usually around 4-5 hours long each.

  • HOL-SDC-1403 : VMware NSX Introduction – This is probably the best beginners’ course to take first up. The course contains the following
    • Component Overview & Terminology
    • Logical Switching
    • Logical Routing
    • Distributed Firewall
    • Edge Services Gateway
  • HOL-SDC-1425: VMware NSX Advanced – the next step up from the above. Includes DHCP relay, scale out L3, L2VPN, Trend Micro integration and Riverbed integration lab work.
  • HOL-SDC-1424 – VMware NSX in the SDDC – This NSX lab covers integration of NSX with components of the vCloud Suite to deliver on the Software Defined Datacenter (primarily the integration with vRA). This lab is awesome and includes the following content
    • Create Network Profiles
    • Create a Multi-Machine Blueprint
    • Configure a Catalog Item and Deploy
    • vCenter Orchestrator and the NSX API through vCloud Automation Center Advanced Designer
    • Using vCenter Operations with the NSX Management Pack
    • Using vCenter Log Insight with NSX
  • HOL-SDC-1419 – VMware NSX for Multi-Hypervisor Environments. This lab appears to be completely based on NSX & Linux KVM

The hands on labs catalog is available here and the access to labs themselves is available here.

Aside from those dedicated labs, the following hands on labs (as of the time of writing) are also available that involve NSX in some form or another.

  • HOL-PRT-1464 – Symantec Data Center Security: Server – Secure your SDDC – Symantec Data Center Security: Server leverages NSX Service Composer and Security Groups to orchestrate and provision security policies for your virtual workloads. Provide agent-less malware protection and guest network threat protection with automated workflows.
  • HOL-SDC-1413 – IT Outcomes – App and Infrastructure Delivery Automation -Reduce time to deliver applications and infrastructure with automated provisioning and policy-based governance throughout the service delivery lifecycle using vRealize Automation and Application Services. Integration points with VMware’s NSX for vSphere will be shown, as well as external service integration (such as vCloud Air, IP, and service management), and extensibility through additional automation
  • HOL-SDC-1415 – IT Outcomes – Security Controls Native to Infrastructure – Learn how several VMware technologies work together to implement policy-based network control, configuration and compliance management, and intelligent operations management. You will use NSX for vSphere to isolate, protect, and apply security policies across virtual network workloads. Use vCenter Configuration Manager to continuously identify, assess, and remediate out-of-compliance virtual machines. Finally, you will use vCenter Operations Manager for operational insight into the health, risk, and efficiency of the virtual infrastructure
  • HOL-SDC-1420 – OpenStack with VMware vSphere and NSX – Are you interested in learning more about OpenStack?  OpenStack is a cloud API framework that enables self-service cloud provisioning and automation.  You will take a basic tour of OpenStack and use it with vSphere and NSX to provision compute, storage and networking resources
  • HOL-SDC-1412 – IT Outcomes – Data Center Virtualization and Standardization – This lab will focus on taking the traditional benefits of vSphere and extending it further into your Software-Defined Data Center (SDDC) through Software-Defined Storage using Virtual SAN and network virtualization using NSX for vSphere. This will enable organizations to see how to deliver the same efficiency and agility for the datacentre as it does right now for the VM
  • HOL-PRT-1462 – Palo Alto Networks – Virtualized Data Center Security – Configure the Palo Alto Networks virtualized next-generation firewall VM-1000-HV with VMware NSX to protect VM to VM communications from today’s advanced threats


These hands on labs are a great way to play with NSX and its related products such as vRA, vCO, Palo Alto firewall integration…etc, but if you would like to do it in your own time, at your own pace, in your own lab (which most of us IT geeks would, given the chance), these labs may not be much of an alternative to having access to the software.

Hope this was useful and clarifies any questions you may have had about how to gain access to NSX media to start working / playing with it.

Comments would be helpful & appreciated.

Next: NSX Manager Deployment ->



VMware New Product Announcements

So, some of you may have heard, VMware made a big announcement today (well, yesterday in US time) about a number of new product / upgrade launches. Given below is a summary of what was announced. I will post more detailed articles about each topic over the coming weeks, as and when I manage to plough through the marketing layers and get to the real technical details for each offering.

  • VMware OneCloud:-  This seems to be a new spin to what was previously called Hybrid Cloud (combination of on premise vSphere private cloud + the vCloud Air managed networking services + vCloud Air public cloud platform service working as one)

1. OneCloud


  • VMware Integrated OpenStack (VIO):- Free for VMware Enterprise Plus customers, this provides a VMware integrated OpenStack distribution. This is a good one…!! For those who are new to OpenStack (including myself), OpenStack is an open source cloud operating system that allows you to control large pools of compute (managed by the “Nova” module and compatible with KVM, VMware, XEN, Docker…etc), storage (block storage managed by the “Cinder” module) and networking (project Neutron) resources in a datacentre. To put it differently, OpenStack is a collection of open source software, bundled together under a single framework, that lets you manage a number of hypervisors (KVM, XEN, vSphere) to provide a cloud computing platform. With what’s announced today, existing OpenStack solutions can connect to vSphere via the installation of the VIO software. VMware support is also now available for VIO. OpenStack management packs are also available for vCOps.

2. VMware integrated openstack3. Openstack commitment

14. OpenStack details


  • vSphere 6.0:-  Finally…!! Over 650 new features added to the vSphere platform. Some key ones are vVols, big data support for Hadoop, 64 hosts per cluster (2X), 4X the VMs per host supported, cross vCenter vMotion, long distance vMotion, Fault Tolerance for multi processors – up to 4 vCPUs (yeah boy..!!) & vCenter enhancements to support bigger environments. I will be aiming to produce a separate, dedicated article to cover this release. Also announced were vCloud Suite 6 & vCenter Operations suite 6.0. I believe there’s also added graphics support available in vSphere 6.0 for VDI deployments (especially when deployed with the likes of NVIDIA GRID cards with vSGA support)

7. vSphere enahancements8. vCenter enhancements

11. cross VC vmotion12. Long distance vmotion


  • VMware vSAN 6:- I’m not a big fan of vSAN personally yet, but if you are, you may find this interesting. Again, I will produce a more detailed post in time about the new features. Rack awareness support through a concept of fault domains seems interesting (provided that you power your racks from different, redundant power feeds; otherwise this is of little practical use). The Virtual SAN Snapshot Manager fling is available through the flings site (though I haven’t been able to find this yet within the flings site)

4. vSAN9. vSAN enhancements


  • vSphere Virtual Volumes:-  Storage policy based management. Probably the single biggest addition to the vSphere platform in my view. NetApp has been a reference partner for vVols since way back and provides vVol support with Clustered Data ONTAP (cDOT) for VMware from day 1, and of course there are many other storage vendors announcing support too.

6. vVol architecture5. vVol ecosystem support


  • vCloud Air Enhanced Disaster Recovery:- vCloud Air for DR was already available as an offering before. The 15 min minimum RPO is still the same, as it is still based on the vSphere replication adaptor. But the new additions include easier failback (from vCloud Air back to on-prem), easier offline seeding capability (for the initial sync using offline disks), up to 24 recovery snapshots on vCloud Air (DR), and vRO (Orchestrator) integration with a new plugin – those are all the details I have at this stage.

13. Vcloud Air DR


  • vCloud Air Advanced Networking Services:- If you run VMware NSX in your on premise environment, you can now extend that network to VMware’s public cloud platform, vCloud Air. (this used to be somewhat possible with the vShield Edge products earlier, but today’s announcement now allows NSX customers to do this too by the looks of it). Think stretched VXLAN between on premise and cloud. Some of the features offered here seem similar to what Cisco InterCloud offers. In addition, inherent NSX features such as micro segmentation (distributed firewall), dynamic routing & up to 200 virtual interfaces (NSX Edge interfaces behind the scenes) per vDC within vCloud Air are now available (I believe NSX is rolled out in the EMEA version of the vCloud Air platform based in UK datacenters, so these features are now fully supported)

10. vCloud Air + NSX


Great set of announcements…. vSphere 6 and vVols are the two I like most….. Time to start dipping in to the technical nitty gritties of everything now. 🙂

More details can be found on onecloud.VMware.com

Slide credit goes to VMware..!!



1. Brief Introduction to NSX

Next: How to gain access to NSX media ->

NSX is the next evolution of what used to be known as the vCloud Networking and Security suite within VMware’s vCloud suite – A.K.A vCNS (now discontinued) – which in turn was an evolution of the Nicira business VMware acquired a while back. NSX is how VMware provides the SDN (Software Defined Networking) capability in the Software Defined Data Center (SDDC). However, some may argue that NSX primarily provides an NFV (Network Function Virtualisation) function, which is slightly different to SDN.

The current version of NSX available comes in 2 forms

  1. NSX-V : NSX for vSphere – This is the most popular version of NSX and is what appears to be the future of NSX. NSX-V is intended to be used by all existing and future vSphere users alongside their vSphere (vCenter and ESXi) environment. All the contents of the rest of this post, and all my future posts within this blog, refer to this version of NSX and NOT the multi hypervisor version.
  2. NSX-MH : NSX for multi hypervisors is a special version of NSX that is compatible with hypervisors beyond just vSphere. Though the name suggests multi-hypervisor, actual support (as of the time of writing) is limited and is primarily aimed at offering networking and security to OpenStack (Linux KVM) rather than all other hypervisors (currently supported hypervisors are XEN, KVM & ESXi). Also, the rumour is that VMware are phasing NSX-MH out anyway, which means most if not all future development and integration efforts would likely be focused around NSX-V. However, if you are interested in NSX-MH, refer to the NSX-MH design guide (based on version 4.2 at the time of writing) which seems pretty good.

Given below is a high level overview of the architectural differences between the 2 offerings.

1. Differences between V & MH


NSX-V, or NSX as it is commonly referred to, provides a number of features to a typical vSphere based datacentre

2. NSX features

NSX doesn’t do any physical packet forwarding and as such doesn’t add anything to the physical switching environment. It only exists in the ESXi environment and is (theoretically speaking) independent of the underlying network hardware. (Note however that NSX relies on a properly designed network, ideally in a spine and leaf architecture, and requires support for an MTU > 1600 within the underlying physical network).
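That 1600-byte figure follows directly from the VXLAN encapsulation overhead. A quick back-of-the-envelope check (standard IPv4 header sizes, no VLAN tag on the outer frame assumed):

```python
# VXLAN wraps the original guest frame in a new outer header stack:
OUTER_ETHERNET = 14  # outer MAC header (no 802.1Q tag)
OUTER_IP       = 20  # outer IPv4 header
OUTER_UDP      = 8   # outer UDP header
VXLAN_HEADER   = 8   # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER
print(overhead)         # 50 bytes (54 if the outer frame is VLAN tagged)

# A standard 1500-byte guest frame therefore needs at least this on the wire:
print(1500 + overhead)  # 1550
```

So a 1500-byte guest frame needs at least 1550 bytes on the wire; 1600 leaves comfortable headroom, and since the encapsulated VTEP traffic does not support fragmentation, every hop in the transport network needs that MTU end to end.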

  • NSX virtualises logical switching:- This is a key feature that enables the creation of a VXLAN overlay network with layer 2 adjacency over an existing, legacy layer 3 IP network. As shown in the diagram below, layer 2 connectivity between 2 VMs on the same host never leaves the hypervisor and the end to end communication all takes place in the silicon. Communication between VMs on different hosts still has to traverse the underlying network fabric; however, compared to before (without NSX), the packet switching is now done within the NSX switch (known as the logical switch). This logical switch is a dvPortgroup type of construct added to an existing VMware distributed vSwitch during the installation of NSX

3. Logical Switching

  • NSX virtualises logical routing:- NSX provides the capability to deploy a logical router which can route traffic between different layer 3 subnets without the traffic having to be physically routed through a physical router. The diagram below shows how NSX virtualises the layer 3 connectivity between different IP subnets and logical switches without leaving the hypervisor to use a physical router. Thanks to this, routing between 2 VMs in 2 different layer 3 subnets on the same host no longer requires the traffic to be routed by an external, physical router; instead, it is routed within the same host using the NSX software router, allowing the entire transaction to occur in the silicon. In the past, VM1 on a port group tagged with VLAN 101 on host A, talking to VM2 on a port group tagged with VLAN 102 on the same host, would have required the packet to be routed using an external router (or a switch with a layer 3 license) that both uplinks / VLANs connect to. With NSX, this is no longer required and all routing, whether VM to VM communication within the same host or between different hosts, will be done using the software router.

4. Logical Routing


  • NSX REST API:-  The built in REST API provides programmatic access to NSX for external orchestration systems such as VMware vRealize Automation (vCAC). This programmatic access provides the ability to automate the deployment of networking configurations, which can now be tied to application configurations, all being deployed automatically on to the datacentre.

5. Programmatical access
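To make that programmatic access a little more concrete, here is a hedged sketch of how an orchestration tool might compose the XML payload for creating a logical switch through the REST API. The endpoint path and element names follow the NSX-V API guide as I understand it, and the switch name and transport zone (scope) ID below are made-up placeholders – treat this as an illustration rather than a definitive implementation.

```python
import xml.etree.ElementTree as ET

def logical_switch_payload(name, description="", control_plane_mode="UNICAST_MODE"):
    """Build the virtualWireCreateSpec XML body that would be POSTed to
    /api/2.0/vdn/scopes/{scope-id}/virtualwires (scope-id = transport zone)."""
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "description").text = description
    # Control plane mode mirrors the unicast / multicast / hybrid replication
    # modes discussed earlier in this series
    ET.SubElement(spec, "controlPlaneMode").text = control_plane_mode
    return ET.tostring(spec, encoding="unicode")

# Hypothetical switch for a web tier; "vdnscope-1" is a placeholder scope ID
print("POST /api/2.0/vdn/scopes/vdnscope-1/virtualwires")
print(logical_switch_payload("Web-Tier-LS", "Logical switch for the web VMs"))
```

The same pattern is what tools like vRA / vCAC drive under the covers: compose a spec, POST it, and the network plumbing appears alongside the application deployment.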

  • NSX Logical Firewall:-  The NSX logical firewall introduces a brand new concept of micro segmentation where, put simply, through the use of an ESXi kernel module driver, un-permitted traffic is blocked at the VM’s vNIC level so that the packets are never released in to the virtual network. No other SDN / NFV solution in the market as of now is able to provide this level of micro segmentation (though Cisco ACI is rumoured to bring this capability to the ACI platform through the use of the Application Virtual Switch). The NSX logical firewall provides East-West traffic filtering through the distributed firewall, while North-South filtering is provided through the NSX Edge services gateway. The distributed firewall also provides the capability to integrate with advanced 3rd party layer 4-7 firewalls such as Palo Alto Networks firewalls.

6. Firewalls

There are many other benefits of NSX, not all of which can be discussed within the scope of this article. However, the above should provide you with a reasonable insight into some of the most notable and most discussed benefits of NSX.

Next: How to gain access to NSX media ->