VMware VSAN 6.2 Performance & Storage Savings

Just a quick post to share some very interesting performance stats observed on my home lab VSAN cluster (Build details here). The VSAN datastore sits alongside a few NFS datastores that are also mounted on the same hosts from an external Synology SAN.

I had to build a number of test VMs (a combination of Microsoft Windows 2012 R2 Datacenter and 2016 TP4 Datacenter) on this cluster, and I placed all of them on the VSAN datastore to test its performance. Below are the storage performance stats captured during provisioning (cloning from template). The red square shows the SSD drive performance stats (where the new VMs were being created) versus the Synology NFS mount’s performance stats (where the templates reside) in the yellow box.

Provisioning Performance

Pretty impressive for an all-flash VSAN running on a bunch of whitebox servers with consumer grade SSD drives (officially unsupported of course, but it works!), especially relative to the performance of the Synology NFS mounts (a RAID 1/0 setup for high performance), right?

Imagine what the performance would have been had this been running on enterprise grade hardware in your datacentre?

What also caught my eye was the inline deduplication and compression savings immediately visible on the VSAN datastore after the VMs were provisioned.

Dedupe & Compression Savings

As you can see, to store 437GB of raw data with FTT=1 (where VSAN keeps a redundant copy of each VMDK), the cluster is only consuming 156GB of actual storage, saving me 281GB of precious SSD capacity. Note that this is WITHOUT the RAID 5 / RAID 6 erasure coding that is also available in VSAN 6.2, which, had it been enabled, would have reduced the consumed space even further.
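
As a quick sanity check on those two figures, the effective space efficiency works out to roughly:

    437 GB / 156 GB ≈ 2.8x deduplication & compression ratio
    437 GB - 156 GB = 281 GB of SSD capacity saved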

The point of all this is that the performance and storage savings available in VSAN, especially all-flash VSAN, are epic, and I’ve seen them in my own environment. In an enterprise datacenter, all-flash VSAN can drastically improve your storage performance while significantly cutting infrastructure costs across your vSphere storage environments. I personally know a number of clients who have achieved such savings in their production environments, and every day there seems to be more demand from customers for VSAN as their preferred storage / Hyper-Converged technology of choice for their vSphere use cases.

I would strongly encourage you to have a look at this wonderful technology and realise these technical and business benefits (summary available here) for yourself.

Share your thoughts via the comments below, or feel free to reach out via email or social media to discuss.

Thanks

Chan

VMware All Flash VSAN Implementation (Home Lab)

I’ve been waiting a while to implement an all-flash VSAN in my lab, and now that VSAN 6.2 has been announced, I thought it was time to upgrade my capacity disks from HDDs to SSDs and get cracking! (Note: despite the announcement, the VSAN 6.2 binaries are NOT YET available to download. I’m hearing they will be available on My VMware in a week or two, so until then mine is based on VSAN 6.1 / ESXi 6.0 U1 binaries.)

As I already had a normal (hybrid) VSAN implementation using SSD + HDD in my management vSphere cluster, the plan was to keep the existing SSDs as the caching tier and replace the current HDDs with higher capacity SSD drives. So I bought 3 new Samsung 850 EVO 256GB drives from Amazon (here).

All Flash VSAN Setup

Given below are the typical steps involved in implementing all-flash VSAN within a VMware cluster (I’m using the 3 node management cluster within my lab for the illustration below).

  1. Install the SSD drives in the servers – This should be easy enough. If you are doing this in a production environment, you need to ensure that the capacity SSDs (like all other components in your VSAN ready nodes) are on the VMware HCL.
  2. Enable VSAN on the cluster – This needs to be done in the web client.
  3. Verify the new SSDs are available & recognised within the web client – All SSDs are recognised as caching disks by default.
  4. Manually tag the required SSD drives as capacity disks VIA THE COMMAND LINE so that they are recognised as capacity disks within the VSAN configuration – This step MUST be carried out using one of the methods explained below; until it is done, the SSDs WILL NOT be available for use as capacity disks within an all-flash VSAN. (There is currently no GUI option in the web client to achieve this, so the CLI must be used – see the consolidated command sketch after this list.)
    1. Use esxcli commands on each ESXi server
      1. SSH into the ESXi server shell
      2. Use the vdq -q command to get the T10 SCSI name of the capacity SSD drive (also verify that the “IsCapacityFlash” option is set to 0)
      3. Use the “esxcli vsan storage tag add -d <SCSI T10 name of the disk> -t capacityFlash” command to mark the disk as a capacity SSD.
      4. Use the vdq -q command again to query the disk status and ensure “IsCapacityFlash” is now set to “1” for the disk.
      5. If you now look at the web client UI, the capacity SSD will be correctly identified as a capacity disk (note that the drive type changes to HDD, which is somewhat misleading as the drive is still an SSD).
    2. Use the “VMware Virtual SAN All-Flash Configuration Utility” software – This is a 3rd party tool and not an officially supported VMware tool, but if you do not want to manually SSH in to the ESXi servers one by one, it can be quite handy as it lets you bulk tag SSDs on many ESXi servers at the same time. I’ve used this tool to tag the SSDs in the next 2 servers of my lab in the illustration below.
  5. Verify the capacity SSDs across all hosts – Now that all the capacity SSDs have been tagged as capacity disks, verify that the web client sees the capacity SSDs across all hosts.
  6. Create the disk groups on each host – I’m opting to create these manually in the web client, as shown below (an esxcli alternative is sketched after this list).
  7. Verify that the VSAN datastore is now available and accessible.
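
For reference, here is the command line workflow from step 4 condensed into a single sketch, run from the ESXi shell on each host. The device name naa.xxxxxxxxxxxxxxxx is just a placeholder; substitute the T10/NAA name that vdq -q reports for your own capacity SSD.

    # List all the disks and note the T10/NAA name of the SSD to be used for capacity
    # (its "IsCapacityFlash" attribute will initially show 0)
    vdq -q

    # Tag that SSD as capacity flash
    esxcli vsan storage tag add -d naa.xxxxxxxxxxxxxxxx -t capacityFlash

    # Query again and confirm "IsCapacityFlash" is now set to 1 for the disk
    vdq -q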
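
Similarly, if you would rather not use the web client for the disk group creation in step 6, esxcli should be able to do the same job. This is only a rough sketch of that alternative (I built my disk groups through the web client as shown above), and the device names are again placeholders for the caching SSD and the newly tagged capacity SSD on each host.

    # Create the disk group: -s is the caching tier SSD, -d is a capacity disk (repeat -d to add more)
    esxcli vsan storage add -s naa.cccccccccccccccc -d naa.xxxxxxxxxxxxxxxx

    # Check the disk group membership on the host
    esxcli vsan storage list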

There you have it. Implementing all-flash VSAN requires manually tagging the SSDs as capacity disks for the time being, and this is how you do it. I should also add that since moving to all-flash VSAN, my storage performance has gone through the roof in my home lab, which is great too. However, this is all done on whitebox hardware, not all of which is on the VMware HCL, so those performance figures are far from optimal. It would be really good to see performance statistics if you have deployed all-flash VSAN in your production environment.

Cheers

Chan