New VMware Product Availabilities – Now available to download

Update

VMware have just made a number of new product versions available (mostly maintenance releases across a few different products, including the much-hyped VSAN 6.2), so here's a quick post to summarise the content that was released last night (15.03.2016).

  • VMware VSAN 6.2 – VMware VSAN 6.2 was officially announced in early February with a number of cool new features such as erasure coding, but unless you were a techie trying to download the software, you may not have known that despite the announcement it was not actually available for download. That was the case until yesterday; the product is now available to download for every customer.

  • VMware vRealize Automation 7.0.1 now released and available for download
    • Release notes here
    • Product binaries here
    • Documentation here

  • VMware vRealize Orchestrator 7.0.1 is released and available to download
    • Release notes here
    • Product binaries here
    • Documentation here

  • vRealize Business for Cloud (the old ITBMS offering) is also released and up for grabs now
    • Release notes here
    • Product binaries here
    • Documentation here

  • vRealize Log Insight 3.3.1 is released and available to download
    • Release notes here
    • Product binaries here
    • Documentation here

  • vCloud Suite 7.0 is also released and available to download (here) – This includes all of the above new product versions plus the existing versions of vSphere Replication 6.1 + vSphere Data Protection 6.1.2 + vROps 6.2.0a + vRealize Infrastructure Navigator 5.8.5

VMware All Flash VSAN Implementation (Home Lab)

I've been waiting for a while to be able to implement an all-flash VSAN in my lab, and now that VSAN 6.2 has been announced, I thought it was time to upgrade my capacity disks from HDDs to SSDs and get cracking..! (Note: despite the announcement, the VSAN 6.2 binaries are NOT YET available to download. I'm hearing they should be available on My VMware in a week or two, so until then, mine is based on VSAN 6.1 / ESXi 6.0 U1 binaries.)
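
As a quick aside, if you want to confirm exactly which ESXi version and build you are running before you start, you can do so from the ESXi shell (vmware -vl is a standard ESXi command):

    # Print the ESXi version, build number and update level
    vmware -vl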

As I already had a normal (hybrid) VSAN implementation using SSD+HDD in my management vSphere cluster, the plan was to keep the existing SSDs as the caching tier and replace the current HDDs with high-capacity SSD drives. So I bought 3 new Samsung 850 EVO 250GB drives from Amazon (here).

All Flash VSAN Setup

Given below are the typical steps involved in implementing an all-flash VSAN within a vSphere cluster (I'm using the 3-node management cluster within my lab for the illustration below).

  1. Install the SSD drives in the server – This should be easy enough. If you are doing this in a production environment, you need to ensure that the capacity SSDs (like all other components in your VSAN ready nodes) are on the VMware HCL
  2. Enable VSAN on the cluster – This needs to be done in the web client
  3. Verify the new SSDs are available & recognised within the web client – All SSDs are recognised as caching disks by default
  4. Manually tag the required SSD drives as capacity disks via the command line so that they are recognised as capacity disks within the VSAN configuration – This step MUST be carried out using one of the methods explained below; until it is, the SSDs WILL NOT be available for use as capacity disks within an all-flash VSAN. (There is currently no GUI option in the web client to achieve this, so the CLI must be used – a consolidated command sequence is given after this list)
    1. Use esxcli commands on each ESXi server
      1. SSH in to the ESXi server shell
      2. Use the vdq -q command to get the T10 SCSI name of the capacity SSD drive (also verify that the "IsCapacityFlash" option is set to 0)
      3. Use the "esxcli vsan storage tag add -d <SCSI T10 name of the disk> -t capacityFlash" command to mark the disk as a capacity SSD
      4. Use the vdq -q command again to query the disk status and ensure the disk is now marked as "1" for "IsCapacityFlash"
      5. If you now look at the web client UI, the capacity SSD will have been correctly identified as capacity (note that the drive type changes to HDD, which is somewhat misleading as the drive is still an SSD)
    2. Use the "VMware Virtual SAN All-Flash Configuration Utility" – This is a third-party tool and not an officially supported VMware tool, but if you do not want to SSH in to the ESXi servers one by one, it can be quite handy as it lets you bulk-tag SSDs on many ESXi servers at the same time. I used this tool to tag the SSDs in the next 2 servers of my lab
  5. Verify the capacity SSDs across all hosts – Now that all the capacity SSDs have been tagged as capacity disks, verify that the web client sees the capacity SSDs across all hosts
  6. Create the disk groups on each host – I'm opting to create these manually
  7. Verify that the VSAN datastore is now available and accessible
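
For convenience, here is the whole tagging sequence from step 4 as it would be run in the ESXi shell. Treat it as a sketch rather than a copy & paste recipe: the placeholders need replacing with the T10 names that vdq -q reports for your own disks, and the final disk group command is an optional esxcli alternative (to the best of my knowledge) to the web client method I used in step 6.

    # 1. List all disks and their VSAN eligibility; note the T10 name of the
    #    intended capacity SSD and confirm "IsCapacityFlash" is currently "0"
    vdq -q

    # 2. Tag that SSD as a capacity device
    esxcli vsan storage tag add -d <SCSI T10 name of the disk> -t capacityFlash

    # 3. Query again and confirm "IsCapacityFlash" now shows "1" for the disk
    vdq -q

    # To undo the tagging later, the matching removal command is:
    # esxcli vsan storage tag remove -d <SCSI T10 name of the disk> -t capacityFlash

    # Optional: create the disk group from the CLI instead of the web client,
    # pairing the caching SSD with the newly tagged capacity SSD
    esxcli vsan storage add -s <caching SSD T10 name> -d <capacity SSD T10 name>

Repeat the tagging on each host (or use the bulk tagging utility mentioned above), then carry on with steps 5-7 in the web client.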

There you have it. Implementing an all-flash VSAN requires manually tagging the SSDs as capacity SSDs for the time being, and this is how you do it. I may also add that since moving to all-flash VSAN, my storage performance has gone through the roof in my home lab, which is great too. However, this is all done on whitebox hardware, not all of which is fully on the VMware HCL, which makes those performance figures far from optimal. It would be really good to see performance statistics if you have deployed all-flash VSAN in your production environment.

Cheers

Chan

Cisco HyperFlex – New Hyper-Converged Offering from Cisco

Cisco has just announced their newest datacentre infrastructure solution, named HyperFlex. This is a fully integrated, Cisco-proprietary hybrid hyper-converged solution offering (similar to Nutanix / SimpliVity / VMware VSAN) that consists of the following:

  • Cisco UCS C series rack mount servers (C220 or C240) with local storage (SSD+HDD)
  • Software Defined Storage virtual appliance (VSA)

The HCI market is quite busy at the moment and there's a lot of demand, so it was only natural that Cisco would join the party with their own offering to compete with incumbents such as Nutanix, SimpliVity and VMware's software-defined HCI solution based on VSAN.

While the UCS C series servers are nothing new and are the same familiar rack mount servers, the real introduction here is the SDS solution, which effectively comes from Cisco's strategic partnership with Springpath and is what is worth considering. (For those who aren't familiar, Springpath was supposedly founded by ex-VMware engineers, so naturally you'll find many similarities between this SDS solution and VMware VSAN.)

Now, I'm not intending to cover the HyperFlex offering in depth within this article, but I will highlight a few key points, give my take on the solution and compare it to some of the competition.

Hardware

As mentioned above, the hardware consists of 2 configuration choices on day 1, as follows:

  1. HyperFlex HX220c M4 (UCS C220 rack mount server based)
    • CPU = 2 x Intel Xeon E5-2600 v3 processors
    • Memory = 256 GB to 512 GB 2133 MHz DIMMs
    • Caching layer = 480 GB high-endurance (Intel 3610) cache SSD
    • HDD = 6 x 1.2 TB 10,000 RPM 12 Gbps SAS disks
    • Network = Cisco VIC 1227 (2 x 10 GbE)
    • Software
      • VMware vSphere 5.5 or 6.0 U1
      • Cisco HyperFlex HX Data Platform Software version 1.7
    • Cluster
      • Nodes: Minimum of 3 nodes, maximum of 8 nodes (initial version; will increase in future)
      • Management: Cisco UCS Manager and vCenter plugin
  2. HyperFlex HX240c M4 (UCS C240 rack mount server based)
    • CPU = 2 x Intel Xeon E5-2600 v3 processors
    • Memory = 256 GB to 768 GB 2133 MHz DIMMs
    • Cache = 1.6 TB high-endurance (Intel 3610) cache SSD
    • HDD = 15 x 1.2 TB 10,000 RPM 12 Gbps SAS disks (8 additional disks supported through a SAS expander)
    • Network = Cisco VIC 1227 (2 x 10 GbE)
    • Software
      • VMware vSphere 5.5 or 6.0 U1 (VMware only on day 1; additional hypervisors may follow)
      • Cisco HyperFlex HX Data Platform Software version 1.7
    • Cluster
      • Nodes = Minimum of 3 nodes, maximum of 8 nodes per ESXi cluster (will likely increase in future)
      • Management = Cisco UCS Manager and vCenter plugin

* Note that the hardware configuration on all nodes MUST be the same (cluster validation fails otherwise)

Key Architecture Points

  • The HyperFlex SDS offering is a VSA (Virtual Storage Appliance)
    • In other words, it's very similar to Nutanix's or SimpliVity's implementation, as it's just a VM running on a hypervisor (ESXi) that acts as a virtual storage appliance.
    • This is markedly different to VMware VSAN, which is kernel-based and may offer better scalability (though there are pros and cons to both in-kernel and VSA-type architectures)
    • The Springpath software creates a distributed NFS datastore that is made available to all the hosts in the cluster (it relies on a controller VM residing on each ESXi host to serve the IO).
    • Note however that, unlike SimpliVity's offering, there's no dedicated hardware to offload any de-dupe or compression work to, so it's all done in software (a tax on CPU cycles)
  • Launch & Initial positioning
    • Cisco launched this solution internally within Cisco on the 25th of February, followed by the public launch on the 1st of March
    • As a version 1 product, the initial target markets for Cisco HF would be:
      • VDI deployments
      • Small to medium scale virtualisation (vSphere only) environments
      • Test and dev environments
      • Branch office requirements
    • VMware vSphere based only
      • The future roadmap may naturally include other hypervisors
      • Supports VAAI
      • vVol support is not there on day 1 but is on the roadmap
      • vCenter plugin available
    • No native replication
      • Relies on VM-level data replication such as VMware vSphere Replication or Zerto
    • Inline de-duplication is available on day 1 (always on; approx. 20-30% savings)
    • Inline compression (during de-staging from SSD to HDD) is also available on day 1 (variable block size; approx. 30-50% savings)
  • Scalability (day 1)
    • Compute nodes can be scaled independently from storage nodes
      • 3-8 HF nodes per cluster + up to 4 additional compute-only nodes (these have to be UCS B200 blades only)
      • When adding compute-only nodes, the HF cluster will automatically push a software component referred to as IOVisor on to the new compute nodes (in the form of an ESXi VIB – see the quick verification sketch after this list)
    • Hybrid HCI (SSD for caching and HDD for capacity) on day 1
      • No all-flash offering available for now
      • Similar to the competition, all disks are in pass-through mode (no local RAID)
  • Unique Selling Points
    • Unlike other HCI offerings in the market, which typically do not include the networking components, Cisco HF solutions include the networking elements. The full contents of the HF bundles are as follows
      • Compute nodes with local storage
        • Come with ESXi pre-installed
        • A wizard-driven installer VM is used to simplify the initial deployment
      • Software license subscription for the SDS (yes, you read that correctly: it's not a perpetual license but an annual or 3-year subscription that needs to be renewed)
      • Cisco UCS Fabric Interconnects for server hardware management
        • Pre-configured for rapid deployment
        • Include a single UCS domain license
        • 48 or 96 port options for the FIs
        • Complete UCS Manager software (used for hardware management only; no Springpath SDS management capabilities will be included on day 1)
    • Unlike all other competitive offerings, Cisco HF uses a dedicated 120 GB SSD (separate from the caching SSDs) for data logging (metadata) on each host, which should help with performance & scalability
  • Ordering Options
    • Orderable through normal channel partners as per usual process
    • Pre-Defined bundles available
    • Configure to order option also available
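
As an aside on the IOVisor component mentioned under Scalability: since it is delivered as a standard ESXi VIB, you should be able to confirm its presence from the shell of a compute-only node once it has been added to the cluster. A minimal sketch – esxcli software vib list is a standard ESXi command, but the name filter is purely my assumption about what the package might be called:

    # List installed VIBs and filter for the HyperFlex/Springpath component;
    # the name patterns below are guesses, not confirmed package names
    esxcli software vib list | grep -i -e iovisor -e springpath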

It is important to note that Cisco HyperFlex is not offered as a replacement for, or an alternative to, the converged architecture solutions that Cisco already excels in, such as FlexPod or Vblock, but is offered as another silo option for appropriate use cases. Industry analysts predict that the hyper-converged market may be worth in the region of $3 billion, and this is Cisco's answer for their customers.

The marketing message around HF is going to be focused on its simplicity, speed to provision and scalability (linear, node-based), which is no different to that of other HCI vendors such as Nutanix.

My Take

I think HyperFlex is a good version 1 HCI solution from Cisco, and I like a number of things it has to offer, such as its cheaper cost (in comparison to the competition) and the fact that it automatically includes the networking and Fabric Interconnect modules within that cost. Architecturally it looks solid too; however, there are some minor things that need to be addressed / improved, which I'm sure will happen as the product evolves (quite normal for a version 1 product). It's designed from the ground up to ensure the availability and integrity of your data, which means that if there are, for example, multiple simultaneous node failures that take the HF cluster beyond the configured availability (replication) levels (similar to VSAN FTT), it will offline the cluster to ensure the integrity of your data.

However, being pragmatic, I would personally like to wait and see how this performs out in the field with real customer data under normal working conditions. While established solutions like VMware VSAN may provide a fully integrated HCI solution for vSphere at a much deeper level than a VSA-based solution can, if you are a Cisco house and are happy with UCS server hardware (who wouldn't be, btw? they are just awesome…!!), this solution may well appeal to you.

I would urge you to register for the webcast (link here) to find out more, or reach out to your Cisco AM or reseller (my employer Insight can help too).

In the meantime, additional information can be found here

Image credit goes to Cisco…!!