4. NSX Controller Architecture & Deployment

Next: 5. VXLAN & Logical Switches ->

In the previous post of this NSX series, we looked at the NSX Manager and its deployment. In this article, we are going to take a quick, high-level look at the NSX Controller architecture and how to deploy the controllers.

  • NSX Controller Architecture – Key points

    • They provide
      • Handling of the VXLAN and Distributed Logical Router (DLR) control planes, and distribution of that information to the ESXi hosts
      • Dynamic distribution of that workload amongst all controllers through slicing
      • Removal of the need for multicast support in the physical network
      • ARP broadcast traffic suppression in VXLAN networks
    • They store
      • ARP table: VM ARP requests for a MAC are intercepted by the hosts and sent to the NSX Controllers. If a controller has the ARP entry, it is returned to the host, which then replies to the VM locally, so no ARP broadcast is flooded across the VXLAN network.
      • VTEP table
      • MAC table
      • Routing table: routing information is obtained from the DLR control VM (these tables can all be inspected from each controller's CLI; see the sketch below this list)
    • A cluster of 3 NSX Controllers is always recommended, to avoid a split-brain scenario
    • 4 vCPUs & 4GB RAM per controller
    • Should be deployed on the vCenter linked to NSX manager (meaning, on the compute or service & edge cluster, NOT the management cluster)
    • User interaction with NSX controllers is through CLI
    • Control plane communication is secured by SSL certificates
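
As mentioned above, these central tables can be queried directly from any controller node's CLI. Below is a minimal sketch of the relevant commands; the syntax is the NSX-v controller CLI as I recall it (verify against your version), and the VNI value 5001 is just an example logical switch ID, not something from this environment.

    # Overall cluster health (look for "Join complete" and "Connected to cluster majority")
    show control-cluster status

    # Per-logical-switch tables held by the controllers (5001 = example VNI)
    show control-cluster logical-switches vtep-table 5001
    show control-cluster logical-switches mac-table 5001
    show control-cluster logical-switches arp-table 5001

    # Logical router instances learnt from the DLR control VM
    show control-cluster logical-routers instance all
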
  • NSX Manager interaction with NSX Controller

    • NSX Manager and vCenter Server systems are linked 1:1
    • During the host preparation stage, NSX Manager installs the UWA and a few kernel modules (VXLAN VIB, DLR VIB, DFW VIB) on the ESXi servers of the clusters managed by the linked vCenter Server
      • UWA=User World Agent
        • Runs as a service daemon called netcpa (check with /etc/init.d/netcpad status)
        • Mediates the communication between the NSX Controllers and the hypervisor kernel modules, except for the DFW
        • Maintains its logs at /var/log/netcpa.log on the ESXi hosts of the compute & edge clusters
      • Kernel modules
        • Distributed Firewall VIB: communicates directly with NSX Manager through the vsfwd service running on the host
        • Distributed Logical Router VIB: communicates with the NSX Controllers through the UWA
        • VXLAN VIB: communicates with the NSX Controllers through the UWA
    • NSX Manager also configures the NSX Controller nodes through its REST API (a quick host-side and API verification sketch follows this list)
    • For each NSX role (such as VXLAN, logical routing, etc.), a master controller is elected
    • Slicing is used to divide the NSX Controller workload into slices and allocate them across the controllers (coordinated by the master for each role)
    • Highlighted in the diagram below are the typical communication channels between the NSX Controllers and the other NSX components.
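
To verify the host-preparation pieces described above, you can check the UWA and its VIBs directly on an ESXi host, and ask NSX Manager for its controller inventory over the REST API. This is only a minimal sketch: the NSX Manager hostname (nsx-manager.lab.local) and credentials are placeholders, the VIB names vary by NSX version, and the /api/2.0/vdn/controller path is the NSX-v 6.x API call as I recall it.

    # On an ESXi host in a prepared compute/edge cluster: check the UWA and its log
    /etc/init.d/netcpad status
    tail -n 20 /var/log/netcpa.log

    # List the NSX kernel module VIBs pushed during host preparation
    # (names such as esx-vxlan and esx-vsip are typical of NSX-v 6.x; adjust to your version)
    esxcli software vib list | grep -E 'esx-vxlan|esx-vsip|esx-dvfilter'

    # From a management workstation: list the deployed controller nodes via the NSX Manager REST API
    # (hostname and credentials below are placeholders)
    curl -k -u admin:'<password>' https://nsx-manager.lab.local/api/2.0/vdn/controller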

 

NSX Controller Deployment

Deploying the NSX Controllers (3 recommended, as stated above) is fairly straightforward:

  1. Launch the vSphere Web Client (for the compute or edge cluster vCenter Server, NOT the management cluster vCenter Server) and select Networking & Security. Note that you need to be logged in to the vSphere Web Client as an NSX Enterprise Administrator user (how to set up the rights was covered in the previous post of this series)
  2. Select Installation from the left pane
  3. At the bottom, under the NSX Controller nodes section, click the plus sign to add the first NSX Controller node and provide the information requested in the next screen. Note the following:
    1. Connected to: You need to select the management network port group here
    2. IP Pool: you need an IP pool of at least 3 addresses (one for each of the 3 NSX Controllers)
    3. Password: the NSX Controller CLI password is specified here. All subsequently deployed controller nodes will use the same password.
  4. Once complete, click OK and you will see the first controller being deployed
  5. Once deployed, you can SSH (e.g. using PuTTY) in to the CLI using the controller's IP (the first IP of the pool specified above) and verify the control cluster status with show control-cluster status (an example session is sketched below this list)
  6. Now follow the same steps to deploy the 2nd and 3rd NSX Controller nodes and verify CLI access to each
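
For reference, the verification from step 5 looks roughly like the session below. The exact output layout and prompt differ between NSX versions, so treat this as an illustrative sketch; the lines to look for on every node are "Join complete" and "Connected to cluster majority".

    nsx-controller # show control-cluster status
    Type                Status                                       Since
    --------------------------------------------------------------------------------
    Join                Join complete                                ...
    Majority            Connected to cluster majority                ...
    Restart             This controller can be safely restarted      ...
    Cluster ID          ...
    Node UUID           ...

    # Optionally, check which node is the master for each NSX role
    nsx-controller # show control-cluster roles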

 

That's it, you now have your NSX Controller cluster fully deployed and configured.

In the next post of the series, we will look at Logical Switches and VXLAN overlays.

Next: VXLAN & Logical Switches ->

Cheers

Chan

 

NetApp Integrated EVO:RAIL

NetApp has announced their version of the VMware EVO:RAIL offering – NetApp Integrated EVO:RAIL solution. So I thought I’d share with you some details if you are keen to find out a bit more.

First of all, VMware EVO:RAIL is one of the true hyper-converged infrastructure solutions available in the market today, and I'd encourage you to read up a little more about it here first if you are new to such hyper-converged solutions. A key element of the traditional VMware EVO:RAIL offering is that the underpinning storage is normally provided by VMware VSAN. While there are lots of good things being said and a great vibe in the industry about VSAN as a disruptive software-defined storage technology with plenty of potential, if you come from a traditional storage background where you understand the importance of specialist storage solutions (SAN) that have built up their capabilities over years of work in the field (think EMC, NetApp, 3PAR, HDS), you may feel a little nervy about having to put your key application data on a relatively new storage technology like VSAN. Some of these storage vendors recognised this and added their storage technology to the VMware EVO:RAIL offering, with a view to complementing the basic VMware EVO:RAIL package. A list of those available can be found here (but please note that not all the vendors that appear there offer their own storage with the VMware EVO:RAIL offering; some simply provide the server hardware with VMware VSAN as the only storage option, and it's not always made very clear).

NetApp Integrated EVO:RAIL is NetApp's version of this solution where, alongside VMware VSAN to store temporary and less important data, a dedicated NetApp enterprise SAN cluster with all the NetApp innovation found within its Data ONTAP operating system is also made available automatically to customers within the EVO:RAIL solution. (EMC also announced something a little similar recently, where they offer a VSPEX BLUE hyper-converged appliance with VMware EVO:RAIL, which you can read about here. Until then, they only sold EVO:RAIL with just VMware VSAN rather than with a bundled EMC storage offering behind it, so be careful if you are considering an EVO:RAIL offering from EMC.)

A couple of background points on the concept of hyper-converged infrastructure first:

  • The integrated / converged infrastructure market has been growing across many use cases of late. For example, FlexPod & VBLOCK have been massive successes, and it is estimated that 14.6% of the hardware market (server, storage & networking) will be sold as part of an integrated infrastructure.
  • Hyper-converged infrastructure such as VMware EVO:RAIL is naturally the next evolution of this. EVO:RAIL can be classed as a true hyper-converged solution, compared to some other popular integrated solutions that use a 3rd-party hypervisor, such as Nutanix and SimpliVity, which are also often referred to as hyper-converged platforms.
  • It was estimated that the hyper-converged market was worth around $400-500 million in 2014
  • Amongst many use cases, hyper-converged solutions are touted as a good option for the likes of branch offices, where, due to limited staff and infrastructure isolation requirements, the simplicity of the setup and the modular, self-sufficient nature of the solution have been seen as a good fit.
  • NetApp's view seems to be that this (VMware EVO:RAIL) is very much a prescriptive solution that is not as scalable as a traditional infrastructure consisting of separate compute, storage & network nodes (e.g. FlexPod, VBLOCK), and it's probably a view shared by the majority of the storage vendors.

Let's take a closer look at what the NetApp Integrated EVO:RAIL solution is and what it's going to give you.

  • NetApp and VMware have had a long-standing history of joint innovation, with more than 40,000 joint customers to date

  • NetApp Integrated EVO:RAIL brings a trusted storage platform vendor into the existing VMware EVO:RAIL architecture and is naturally targeted only at VMware customers.
  • Given below is a technical summary of the NetApp Integrated EVO:RAIL solution.
    • NetApp branded compute nodes (Co-branded with VMware)
      • Fixed server configuration similar to other competitive EVO:RAIL solutions.
      • 4 independent server nodes per NetApp server chassis
      • Dual Intel E5-2620v2 CPUs per server with 48 cores total per chassis
      • 192GB of RAM per server with 768GB of RAM total per chassis
      • Dual 10GbE NIC (optical or copper) SFP+ per server
      • NetApp fully provides all the server hardware support (the actual OEM name is a secret). This should not be too much of a concern to customers, as a compute node is not massively different from the SAN controllers (both x86 systems) that NetApp has been supporting for years.
    • NetApp Storage nodes
      • Comes with a NetApp FAS2552 highly available SAN with Flash Pool (Flash Pool is NetApp's way of using SSDs in the disk shelves as a caching layer to optimise random read and random overwrite workloads, typically seen in VDI, OLTP databases and virtualisation. More info here.)
      • Includes the Premium software bundle, consisting of:
        • NetApp® Virtual Storage Console
        • NetApp NFS Plug-in for VMware VAAI
        • NetApp clustered Data ONTAP
        • NetApp Integration Software for VMware EVO:RAIL
        • NetApp FlexClone, SnapRestore, SnapMirror, SnapVault, Single Mailbox Recovery, SnapManager Suite
      • Approximately 12.6TB of NetApp usable capacity for enterprise data, with SSDs included for Flash Pool (plus 6.5TB of VSAN usable capacity)
      • Based on FAS2552 in a switchless cDOT cluster
      • Virtual SAN for the vSphere infrastructure (as a base component to bring the solution components up and running initially)
    • VMware Software Included
      • VMware EVO:RAIL software
      • VMware vCenter Server
      • VMware vSphere Enterprise Plus
      • VMware vRealize Log Insight
      • VMware Virtual SAN

Given below is the physical connectivity architecture of the NetApp integrated Evo:RAIL

  • The current offering has 2 types of storage:
    • VMware VSAN storage: basic local server storage controlled by VSAN. Base application, swap space and temporary data can be placed here.
    • NetApp storage: used for application deployments that require DR (NetApp SnapMirror etc.), granular performance control (VST), security and all traditional SAN requirements. For example, database servers like SQL Server and Oracle, other applications like SAP, SharePoint and Exchange, as well as VDI deployments that require application integration for backup and recovery, can have their data placed on the NetApp storage for the SnapManager application integration.
  • NetApp Integrated EVO:RAIL also comes with the following benefits
    • NetApp Global Support providing,
      • Single contact for solution support
      • 3 years of NetApp SupportEdge Premium Services for compute, storage, and NetApp and VMware software (note that NetApp already specialises in this joint support model through the FlexPod support arrangement between NetApp, Cisco and VMware, which they are presumably leveraging here)
      • 3-year hardware warranty (NetApp storage and server hardware)
      • Onsite Next Business Day and Same Day 4 hour parts replacement
  • Simple Deployment
    • Additional EVO:RAIL configuration engine integration software from NetApp (click and launch from the EVO:RAIL home page) is aimed at simplifying the deployment of the NetApp storage as part of the EVO:RAIL deployment.
    • Key points to note here are,
      • Simple setup and configuration & NetApp best practices automatically applied
      • Unified management across virtual and storage environment using vCenter Web Client with integrated NetApp Virtual Storage Console
      • Deep application integration: Exchange, SQL Server, SharePoint, Oracle and SAP
    • Overall deployment takes approximately 11 minutes for the EVO:RAIL, plus about 5 minutes for the NetApp SAN
    • A NetApp automation VM (called NTP-QEP) is automatically deployed as part of the initial deployment configuration and acts as the glue between the EVO:RAIL management software and the NetApp hardware (I wonder if we can get this appliance with API access so we can point it at a standalone NetApp? That would be pretty awesome now, wouldn't it?)

    • The current prototype version of the integration software, delivered through this VM, can be accessed via the NetApp icon on the left when you log in to the EVO:RAIL management console. Once launched, it takes you to a simple data collection screen that asks for the vCenter credentials, the storage system password, management & data network details and the license details for the NetApp. Once these are provided and submitted, the automation engine goes ahead and configures the whole NetApp cDOT cluster automatically, based on NetApp best practice: the VSC VM is deployed, the cluster is instantiated, node management LIFs are created, the SP and Flash Pool are configured, and the SVM, FlexVols and datastores are created and mounted to VMware for use. Things like deduplication are also enabled automatically (a rough idea of the equivalent manual ONTAP CLI steps is sketched after this list).
    • Since the NetApp Virtual Storage Console plugin is automatically installed, you can easily apply any additional NetApp configuration through it afterwards if you really want to.
  • Current planned use cases
    • Mainly aimed at branch offices as a solution
    • Also recommended as a point solution aimed at achieving compliance and application integration, such as database system deployments with built-in backup and DR
    • Also positioned for VDI deployments (due to the built-in flash option and the ease of deployment) with integrated backup and DR
  • Ordering & Availability
    • All components are available as a single product with 2 SKUs: a product SKU and a support SKU. That's it, and all NetApp and VMware software components are included in the SKU.
    • Targeted availability for ordering is somewhere around Q1/Q2 this year (2015)
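
For readers less familiar with clustered Data ONTAP, the sort of work the automation engine does (cluster setup, SVM, LIFs, FlexVols, datastores and deduplication, as described in the deployment bullet above) would look roughly like the sketch below if done by hand. This is purely illustrative: the object names (svm1, aggr1, vol_vmware), network details and sizes are made up, and these are standard cDOT commands rather than anything the NTP-QEP appliance specifically runs.

    cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
    cluster1::> vserver nfs create -vserver svm1
    cluster1::> network interface create -vserver svm1 -lif svm1_nfs_lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.168.10.50 -netmask 255.255.255.0
    cluster1::> volume create -vserver svm1 -volume vol_vmware -aggregate aggr1 -size 1t -junction-path /vol_vmware
    cluster1::> volume efficiency on -vserver svm1 -volume vol_vmware

The resulting NFS export would then be mounted as a datastore from vCenter, which in this solution is handled for you via the integrated Virtual Storage Console.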

Sounds like an interesting proposition from NetApp and I can see the value. Especially if you are an existing NetApp customer who knows and is used to all the handy tools available from the storage layer, and who is looking at VMware EVO:RAIL for a point solution or a branch office solution, this would be a simple no-brainer.

Cheers

Slide credit goes to NetApp!

Chan