VMware VSAN 6.2 Performance & Storage savings

Just a quick post to share some very interesting performance stats observed on my home lab VSAN cluster (build details here). The VSAN datastore sits alongside a few NFS datastores mounted on the same hosts from an external Synology NAS.

I had to build a number of test VMs on this cluster, a mix of Microsoft Windows 2012 R2 Datacenter and 2016 TP4 Datacenter, and I placed all of them on the VSAN datastore to test its performance. Below are the storage performance stats captured during provisioning (cloning from template). The red square highlights the SSD drive performance stats (where the new VMs were being created) vs the Synology NFS mount's stats (where the templates reside) in the yellow box.

Provisioning Performance

Pretty impressive for an all-flash VSAN running on a bunch of white-box servers with consumer-grade SSD drives (officially unsupported, of course, but it works!), especially relative to the performance of the Synology NFS mounts (a RAID 1/0 setup for high performance), right?

Imagine what the performance would be if this were running on enterprise-grade hardware in your datacentre.

What also caught my eye was the inline deduplication and compression savings available on the VSAN datastore immediately after the VMs were provisioned.

Dedupe & Compression Savings

As you can see, to store 437GB of raw data with FTT=1 (where VSAN keeps a redundant copy of each vmdk file), it is consuming only 156GB of actual storage on the VSAN cluster, saving me 281GB of precious SSD capacity. Note that this is WITHOUT erasure coding (RAID 5 or RAID 6), which is also available in VSAN 6.2 and, had it been enabled, would have reduced the consumed space even further.
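The savings arithmetic above is easy to sanity-check. A minimal sketch (the 437GB/156GB figures come from the screenshot in this post; the function itself is just generic arithmetic, not a VSAN API):

```python
def dedupe_savings(raw_gb: float, consumed_gb: float) -> tuple[float, float]:
    """Return (GB saved, reduction ratio) for a datastore's reported figures."""
    saved = raw_gb - consumed_gb
    ratio = raw_gb / consumed_gb
    return saved, ratio

# Figures from the VSAN capacity view in this post:
saved_gb, ratio = dedupe_savings(437, 156)
print(f"Saved {saved_gb} GB (~{ratio:.1f}x reduction)")
# → Saved 281 GB (~2.8x reduction)
```

A ~2.8x reduction is plausible here because the VMs were cloned from a handful of templates, so their disks share a great deal of duplicate data.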

The point of all this is that the performance and storage savings available in VSAN, especially all-flash VSAN, are remarkable, and I've seen them first-hand in my own environment. In an enterprise datacentre, all-flash VSAN can drastically improve your storage performance while significantly cutting the infrastructure costs of your vSphere storage environments. I personally know a number of clients who have achieved such savings in production, and every day there seems to be more demand from customers for VSAN as their preferred storage / Hyper-Converged technology for their vSphere use cases.

I would strongly encourage you to have a look at this wonderful technology and realise these technical and business benefits (summary available here) for yourself.

Share your thoughts via the comments below, or feel free to reach out via email or social media to discuss.




Technologist, lucky enough to be working for a very technical company. Views are my own and not those of my employer!


  1. If I use an M.2 PCIe NVMe SSD drive for the cache tier and a single SATA SSD for the capacity tier, can I get 300-400MB/s speeds with this type of setup?

    Also, doesn’t the checksumming slow things down?

    • Yes, easily, is the simple answer.

      NVMe drives are supported with VSAN, so there's no issue using one for the caching tier. The level of performance will depend primarily on your workload profile (read/write split, random vs sequential, block size, etc.) as well as the capabilities of the NVMe drive used. From a throughput perspective, an NVMe drive will likely outperform a normal SSD in the caching tier, which can already sustain a significant number of IOPS, and that IOPS figure is dictated primarily by the drive class. For example, a basic class D SSD (non-NVMe) rated at 30K IOPS with a 16K block size (driven by your underlying guest-level workload) gives 30,000 × 16,384 bytes ≈ 468.75MB/s, which most if not all NVMe drives will easily better. Most VSAN deployments I see typically use class E SSDs from server vendors for the caching tier, which can do up to 60,000 IOPS, so most NVMe drives (with similar or higher IOPS characteristics) will also comfortably exceed your stated 400MB/s.

      Note, however, that MB/s is really a bandwidth figure rather than an IOPS figure. Even looked at purely as bandwidth, 400MB/s is easily doable. In an all-flash VSAN, only write IOs are cached; read IOs are served directly from the capacity tier. The maximum write buffer size in the current version of VSAN is 600GB, so as long as your NVMe cache drive is sized and rated appropriately, you are fine from a bandwidth perspective too.

      Hope this makes sense.
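The IOPS-to-throughput conversion used in the reply above can be sketched as a one-liner. This is just the back-of-envelope arithmetic from the answer (the 30K/60K IOPS figures are the class D / class E examples quoted there, and the result is in binary megabytes, matching the 468.75 figure in the text):

```python
def throughput_mb_s(iops: int, block_size_bytes: int) -> float:
    """Throughput (in binary MB/s) implied by an IOPS rating at a given block size."""
    return iops * block_size_bytes / (1024 ** 2)

# Class D SSD example from the reply: 30,000 IOPS at a 16KB block size.
print(throughput_mb_s(30_000, 16 * 1024))  # → 468.75
# Class E SSD at 60,000 IOPS, same block size:
print(throughput_mb_s(60_000, 16 * 1024))  # → 937.5
```

Note the block size matters as much as the IOPS rating: the same 30K IOPS drive at a 4K block size would imply only ~117MB/s, which is why the workload profile is called out as the primary factor.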
