A look at the Hedvig distributed Hybrid Cloud storage solution

During the recently concluded Storage Field Day event (SFD15), I had the chance to visit the Software Defined Storage company Hedvig at their HQ in Santa Clara, where we were given a presentation by their engineering team (including the founder) on their solution offering. Luckily, I already knew Hedvig through my day job (which involves evaluating new, disruptive tech start-ups to form solutions reseller partnerships; I had been through this process with Hedvig a while back). However, I learnt about a number of new updates to their solution, and this post aims to cover their current offering and my thoughts on it in the context of the current enterprise storage market.

Hedvig: Company Overview

Similar to a number of new storage and backup start-ups that came out of stealth in recent times, Hedvig was founded by an engineer, back in 2012. The founder, Avinash Lakshman, came from a distributed software engineering background, having worked on large scale distributed storage systems such as Amazon Dynamo and Apache Cassandra. While they came out of stealth in 2015, they did not appear to have an aggressive growth strategy backed by an equally aggressive (read "loud") marketing effort, and looked rather content with natural, organic growth. At least, that was my impression of how they operated in the UK market. However, during the SFD15 presentation we found out that they've somewhat revamped their logo and related marketing collateral, so perhaps they've started to address this already?

Hedvig: Solution Overview

At the outset, they are similar to most other software defined storage start-ups these days that layer their software on top of commodity server hardware to build a comparatively low cost, software defined storage (SDS) solution. They also have genuine distributed capability, being able to distribute the SDS nodes not just within the data center, but also across data centers as well as cloud platforms, though it's important to note that most SDS vendors these days have the same capability or are in the process of adding it to their platforms.

Hedvig has positioned themselves as an SDS solution that is a perfect fit for traditional workloads such as VMs, backup & DR, as well as modern workloads such as big data, HPC, object storage and various cloud native workloads. Their solution provides block & file storage capability like most other vendors in their category, as well as object storage, which is another potentially good differentiator, especially compared to some of the other HCI solutions out there that often only provide one type or the other.

The Hedvig storage platform typically consists of the Hedvig software platform plus commodity server hardware with local disks. Each server node can be a physical server or a VM on a cloud platform that runs the Hedvig software. The Hedvig software consists of:

  • Hedvig Storage Proxy
    • This is a piece of software deployed on the compute node (app server, container instance, hypervisor…etc.)
    • Presents file (NFS) & block (iSCSI) storage to compute environments and converts that to Hedvig's proprietary communication protocol with the storage service.
    • Also performs caching of reads (writes are redirected).
    • Performs dedupe up front and writes deduped blocks to the back end (storage nodes) only if necessary
    • Each hypervisor runs a proxy appliance VM / VSA (x2 as a HA pair) which will serve all local IO on that hypervisor
  • Hedvig API
    • Presents object storage via S3 or Swift and a full RESTful API from the storage nodes to the storage proxy.
    • Runs on the storage nodes
  • Hedvig Storage Services
    • Manages the storage cluster activities and interfaces with the storage proxies
    • Runs on the storage nodes; similar in role to a typical storage processor / SAN or NAS controller
    • Each storage server has 2 parts
      • Data process
        • Local persistence
        • Replication
      • Metadata process
        • Communicate with each other
        • Distributed logic
        • Stored in a proprietary DB on each node
    • Each virtual disk provisioned in the front end is mapped 1:1 to a Hedvig virtual disk in the back end

The Hedvig storage nodes can be commodity or mainstream OEM vendor servers, as customers choose. They consist of SSDs + mechanical drives, which is typical of other SDS vendors too, and the storage nodes running the Storage Services software are typically interconnected using 10GbE (or faster) standard Ethernet networking.

Like most other SDS solutions, they have the typical SDS features and benefits such as dedupe, compression, auto-tiering, caching, snapshots & clones, data replication…etc. Another potentially unique offering here is the ability to set storage policies at per virtual disk or per container granularity (in the back end), which is nice. Below are some of the key storage policy configuration items that can be set at VM / vDisk granularity.

  • Replication Factor (RF) – Number of copies of the data to keep. Ranges from 1-6. Quorum = (RF/2)+1 (a worked example follows this list). This is somewhat similar to VMware vSAN's FTT, if you are a vSAN person.
  • Replication policy – Agnostic, Rack aware or DC aware – similar in concept to Fault Domains in vSAN, for example. Sets the scope of data replication for availability.
  • Dedupe – Global dedupe across the cluster. Happens at 512B or 4K block size and is done in-line. Important to note that dedupe happens at the storage proxy level, which ensures no unnecessary writes take place in the back end. This is another USP compared to other SDS solutions, which is also nice.
  • Compression
  • Client caching
  • …etc.
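To make the RF / quorum relationship concrete, here's a quick worked example of the quorum formula quoted above (my own illustration in Python, not Hedvig code):

```python
# Illustration only: the quorum formula as described in Hedvig's presentation.
def quorum(rf: int) -> int:
    """Minimum number of replica writes needed before a write is acknowledged."""
    return (rf // 2) + 1

for rf in range(1, 7):              # RF ranges from 1 to 6
    print(f"RF={rf} -> quorum={quorum(rf)}")

# RF=1 -> quorum=1    RF=2 -> quorum=2    RF=3 -> quorum=2
# RF=4 -> quorum=3    RF=5 -> quorum=3    RF=6 -> quorum=4
```

Notice that RF=4 and RF=5 share the same quorum of 3, so the fifth copy buys extra durability without adding to write latency.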

Data replication, availability & IO operations

Hedvig stores data as containers across the cluster nodes to provide redundancy and enforce the availability-related policy configuration items at container level. Each vDisk is broken down into 16GB chunks, and based on the RF level assigned to the vDisk, Hedvig ensures that that number of copies is maintained across the nodes (somewhat similar to the VMware vSAN component size, which is set at 256GB). Each of these 16GB chunks is what is known as a container. Within each node, the Hedvig software groups 3 disks into a logical group called a storage pool, and each container belonging to that storage pool typically stripes its data across that storage pool's disks. Storage pool and disk rebalancing occurs automatically during less busy times. Data replication also takes into account latency considerations if the cluster spans multiple geo boundaries / DCs / cloud environments.
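As a rough sketch of my understanding of this placement model (hypothetical names and logic, not Hedvig's actual code), a vDisk is carved into 16GB containers and each container gets RF copies across distinct nodes:

```python
# Hypothetical sketch of the container placement model described above.
CONTAINER_SIZE_GB = 16

def containers_for_vdisk(vdisk_size_gb: int) -> int:
    """A vDisk is broken into 16GB chunks known as containers."""
    return -(-vdisk_size_gb // CONTAINER_SIZE_GB)       # ceiling division

def place_replicas(container_id: int, nodes: list, rf: int) -> list:
    """Pick RF distinct nodes for one container. The real placement logic is
    far smarter (rack/DC awareness, load, latency); this just round-robins."""
    start = container_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
for cid in range(containers_for_vdisk(100)):    # a 100GB vDisk -> 7 containers
    print(f"container {cid}: {place_replicas(cid, nodes, rf=3)}")
```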

The Hedvig software maintains IO locality in order to ensure the best performance for read and write IOs, prioritising servicing IO from local & less busy nodes. One of the key things to note is that during a write, the Hedvig software doesn't wait for acknowledgements from all the storage nodes, unlike some of its competitor solutions. As soon as the quorum is met (Quorum = RF/2 + 1, so if the RF is 4 with a remote node in the cloud or in a remote DC over a WAN link, as soon as the data is written to 3 local nodes), it sends the ACK back to the sender, and the rest of the data writing / offloading happens in the background. This ensures faster write response times, and is probably a key architectural element in how they enable truly distributed nodes in a cluster, which can often include remote nodes over a higher-latency link, without a specific performance hit on write operations. This is another potential USP for them, at least architecturally on paper; in reality, however, it is only likely to benefit you if you have a higher RF in a large cluster.
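Here is a minimal sketch of why acknowledging at quorum hides the latency of slow or remote replicas (my own illustration, assuming a simple thread-per-replica write path):

```python
# Hypothetical sketch: ACK the client at quorum; slower (e.g. WAN/cloud)
# replicas complete in the background.
import concurrent.futures

def write_to_node(node: str, block: bytes) -> None:
    ...  # network write; a cloud/remote-DC node over a WAN link is simply slower

def write_with_quorum(block: bytes, replica_nodes: list) -> str:
    rf = len(replica_nodes)
    quorum = (rf // 2) + 1                      # e.g. RF=4 -> quorum=3
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=rf)
    futures = [pool.submit(write_to_node, n, block) for n in replica_nodes]
    for done, _ in enumerate(concurrent.futures.as_completed(futures), start=1):
        if done == quorum:
            pool.shutdown(wait=False)           # in-flight remote writes keep running
            return "ACK"                        # client unblocked at quorum
    return "ACK"
```

The client's write latency is therefore bounded by the fastest quorum of replicas, not by the slowest one.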

Reads are also optimised through a combination of caching at the storage proxy level and, for actual block reads in the back end, prioritising local nodes (with a lower cost) over remote nodes. This is markedly different to how VMware vSAN works, for example, where client-side cache locality is deliberately avoided in order to prevent skewed flash utilisation across the cluster as well as frequent cache re-warming during vMotion…etc. Both architectural decisions have their pros and cons in my view, and I like Hedvig's approach, as it optimises performance, which is especially important in a truly distributed cluster.

A deep dive on this subject including the anatomy of a read and a write is available here.

Hedvig: Typical Use Cases

Hedvig, similar to most of its competition, aims to address a number of use cases.

Software Defined Primary Storage

Hedvig operates in traditional storage mode (dedicated storage server nodes providing storage to a number of external compute nodes such as VMware ESXi, KVM or even a bare metal application server) or in hyper-converged mode, where both compute and storage are provided on a single node. They also state that these deployment architectures can be mixed in the same cluster, which is pretty cool.

  • Traditional SDS – Agent (storage proxy) running on the application server accessing the storage, speaking standard storage protocols. The agent also hosts local metadata and provides local caching, amongst other things. Used in non-hypervisor deployments such as bare metal app servers.
  • HCI mode – Agent (storage proxy) running on the hypervisor (as a control VM / VSA, similar to Nutanix). This is probably their most popular deployment mode.

Software Defined Hybrid Cloud Storage

Given the truly distributed nature of the Hedvig platform, they provide a nice hybrid cloud use case where the storage cluster extends across geographical boundaries, including cloud platforms (IaaS instances). Cloud platforms currently supported by Hedvig include AWS, Azure and GCP. Stretching a cluster to a cloud platform involves IaaS VMs from the cloud platform being used as cluster nodes, with block storage from the cloud platform providing virtual disks as local drives for each cloud node. When you define Hedvig virtual disks, you get to specify the data replication topology across the hybrid cloud. Important to note, though, that clients accessing those disks are advised to run within the same data center / cloud platform / region, for obvious performance reasons.

Hedvig also now supports containerised workloads through their Docker volume plugin and integration with the Kubernetes volume framework, similar to most of the other SDS solutions.

Hyper-Converged Backup

This is something they've recently introduced, but unless I've misunderstood, it is not so much a complete backup solution including offsite backups, but more of a snapshot capability at the array level (within the Hedvig layer). Again, this is similar to most other array-level snapshots from other vendors' solutions and can be used for immediate restores without having to rely on a hypervisor snapshot, which would be inefficient. An external backup solution using a backup partner (such as Veeam, for example) to offsite those snapshot backups is highly recommended, as with any other SDS solution.

My thoughts

I like the Hedvig solution and some of its neat little tricks, such as the clever use of the storage proxy agent to offload some of the back-end storage operations to the front end (i.e. dedupe), thereby keeping back-end IO and the network performance penalty between the compute and storage layers to a minimum. They are a good hybrid SDS solution that can cater for a mixed workload across the private data center as well as public cloud platforms. It's NOT a specialised solution for a specific workload and doesn't claim to provide sub-millisecond latency; instead, it provides a good all-around storage solution that is architected from the ground up to be truly distributed. Despite its ability to be used in traditional storage as well as HCI mode, most real-life applications of the technology, however, would likely be in an HCI setting, with some kind of hypervisor like vSphere ESXi or KVM.

Looking at the organisation itself and their core solution, it's obvious that they've tried to solve a number of hardware-defined storage issues that were prevalent in the industry at the time of their inception (2012), through the clever use of software. That is commendable. However, the sad truth is that a lot has happened in the industry since then, and a number of other start-ups and established vendors have attempted to do the same, some with perhaps an unfair advantage due to having their own hypervisor too, which is a critical factor when it comes to capabilities. Nutanix and VMware vSAN, for example, developed similar SDx design principles and tried to address most of the same technical challenges. Those vendors were a little more aggressive in their plans and, in my view, managed to get their go-to-market process right, at a much bigger scale as well. Nutanix pioneered a new SDS use case (HCI) in the industry and capitalised on it before everyone else did, and VMware vSAN came out as a credible, and potentially better, challenger to dominate this space. While Hedvig is independent of a hypervisor platform and therefore provides the same capabilities across multiple platforms, the reality is that not many customers need that capability, as they'd be happy with a single hypervisor & storage platform. I also think Hedvig potentially missed a trick in their solution positioning in the market to create a differentiated message and win market share. As a result, their growth is nowhere near comparable to that of VMware vSAN or Nutanix, for example.

As much as I like the Hedvig technology, I fear for their future survival. Without some significant innovation and some business leadership involved in setting a differentiated strategy for the business, life will be somewhat difficult, especially if they are to make a commercial success of the technology as a company. Their technology is good and their engineering team seems credible, but the competition is fierce and the market is somewhat saturated with general purpose SDS solutions as well as specialist SDS solutions aimed at specific workloads. Most of their competition also have far more resources at their disposal to throw at their solutions, including more comprehensive marketing engines. For these reasons, I fear that Hedvig may struggle to survive on their current path of a generalised SDS solution, and would potentially be better off focusing on a specific use case / vertical…etc. and concentrating all their innovation efforts there.

The founder and CEO of the company still appears to be very much an engineer at heart, and having an externally sourced business leader with start-up experience to lead Hedvig into the future may not be a bad thing for them in the long run either, in my view.

Keen to get your thoughts, especially if you are an existing Hedvig customer – Please comment below.

Slide credit goes to Hedvig and Tech Field Day team.

P.S. You can find all the TFD and SFD presentations about Hedvig via the link here.

Chan

Dropbox’s Magic Pocket: Power of Software Defined Storage

Background

Dropbox is one of the poster boys of the modern-day tech start-ups, similar to the Ubers and the Netflixes of the world, founded by engineers using their engineering prowess to help consumers around the world address various day-to-day challenges using technology in novel ways. So, when I was informed that not only would Dropbox be presenting at SFD15, but we'd also get to tour their state of the art data center, I was ecstatic (perhaps ecstatic is an understatement!). I work with various technology vendors, from large vendors like Microsoft, Amazon, VMware, Cisco, NetApp…etc. to little-known start-ups, and Dropbox's name is often mentioned in event keynote speeches, case studies…etc. by most of these vendors as a perfect example of how a born-in-the-cloud organisation can use modern technology efficiently. Heck, they are even referenced in some of the AWS training courses I've come across on Pluralsight that talk about Dropbox's ingenious way of using AWS S3 storage behind the scenes to store file data content.

So, when I learned that they had designed and built their own Software Defined Storage solution to bring most of their data storage back from AWS to their own data centres, I was quite curious to find out more details of the said platform and the reasoning behind the move back to on-premises. Given it's the first time their engineering team has openly discussed it, I was looking forward to talking to them at the event.

This post summarises what I learnt from the Dropbox team.

Introduction

I don’t think it’s necessary to introduce Dropbox to anyone these days. If, however you’ve been under a rock for the past 4 years, Dropbox is the pioneering tech organisation from the Silicon Valley that built an online content sharing and a collaboration platform that allows you to synchronise content between various end user devices automatically while letting you access them on any device, anywhere. During this process of data synchronisation and content sharing, they are dealing with,

  • 500+ million users
  • 500+ Petabytes of data storage
  • 750+ billion API calls handled

When they first went live, Dropbox used AWS's S3 storage (PaaS) to store the actual user file data behind the scenes, while their own web servers hosted the metadata about those files and users. However, as their data storage requirements grew, the necessity to change this architecture started to outweigh benefits such as the agility and ease provided by leveraging AWS cloud storage. As such, Dropbox decided to bring this file storage back into their own on-premises data centers. Dropbox states two key reasons behind this decision: performance requirements and raw storage costs. Given their unique use case for block storage at extremely high scale, by designing a tailor-made cloud storage solution of their own, engineered to provide maximum performance at the lowest unit cost, Dropbox planned to save a significant amount in operational costs. As a private company heading towards an IPO, saving costs was obviously high on their agenda.

Magic Pocket: Software Architecture

While the original name came from an old internal nickname for Dropbox itself, Magic Pocket (MP) now refers to their custom built, internally hosted, software defined cloud storage infrastructure that is used by Dropbox to host the majority of their users' file data. It is multiple exabytes in size, with data fully replicated for availability, and offers high data durability (12 x 9's) and high availability (4 x 9's).

Within the MP architecture, files are split into blocks and replicated across geo boundaries within their internal infrastructure (back-end storage nodes) for durability and availability. The data stored in the MP infrastructure consists of 4MB blocks that are immutable by design. Changes to the data in the blocks are tracked through a File Journal that is part of the metadata held on the Dropbox application servers. Due to the temporal locality of the data, the bulk of the static data that is cold is stored on high capacity, high latency but cheap spinning drives, while metadata, cache data & DBs are kept on high performance, low latency but expensive SSDs.
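To illustrate the immutability aspect, here's a minimal sketch of a 4MB immutable block store (my own illustration; the content-addressing by hash is an assumption about one common way to implement this, not a confirmed Dropbox detail):

```python
# Sketch only: blocks are never modified in place; changing a file produces
# new blocks, and the File Journal (metadata tier) points at the new ids.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024            # 4MB immutable blocks

class BlockStore:
    def __init__(self):
        self._blocks = {}               # block_id -> bytes

    def put(self, data: bytes) -> str:
        block_id = hashlib.sha256(data).hexdigest()   # content-addressed (assumed)
        self._blocks.setdefault(block_id, data)       # immutable: never overwritten
        return block_id

    def get(self, block_id: str) -> bytes:
        return self._blocks[block_id]

def store_file(store: BlockStore, content: bytes) -> list:
    """Split a file into 4MB blocks; the returned id list is what a
    File-Journal-style metadata record would reference."""
    return [store.put(content[i:i + BLOCK_SIZE])
            for i in range(0, len(content), BLOCK_SIZE)]
```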

Unlike most enterprise-focused Software Defined Storage (SDS) solutions that utilise some kind of quorum-style consensus or distributed coordination to ensure data availability and integrity, MP utilises a simple, centralised, sharded MySQL cluster, which is a bit of a surprise. Data redundancy is provided through…yeah, you guessed it! Customised erasure coding, similar to many other enterprise SDS solutions. Data is typically replicated in 1GB chunks (known as buckets) that consist of random, often contiguous 4MB blocks. A bucket is replicated or erasure coded across multiple physical servers (storage nodes), and a set of 1 or more buckets replicated to a set of nodes makes up a volume. This architecture is somewhat similar to how the enterprise SDS vendor Hedvig stores its data in the back end.
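A hypothetical data model for that block -> bucket -> volume hierarchy might look like this (names and fields are mine, inferred from the description above):

```python
# Hypothetical model of the bucket/volume layout described above.
from dataclasses import dataclass, field

@dataclass
class Bucket:
    """~1GB unit of replication / erasure coding, holding many 4MB blocks."""
    bucket_id: int
    block_ids: list = field(default_factory=list)

@dataclass
class Volume:
    """One or more buckets replicated to the same set of storage nodes."""
    volume_id: int
    buckets: list
    nodes: list

# e.g. bucket 42 belongs to volume 7, which is placed on three storage nodes
vol = Volume(volume_id=7,
             buckets=[Bucket(bucket_id=42)],
             nodes=["storage-101", "storage-102", "storage-103"])
```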

In Dropbox’s SDS architecture, a pocket is similar to a fault domain in other enterprise SDS solutions and is a geographical zone (US east, US west & US Central for example). Each zone has a cluster of storage servers and other application servers and data blocks are replicated across multiple zones for availability. Pretty standard stuff so far.

Dropbox has a comprehensive edge network, geographically dispersed across the world, through which all customer Dropbox application connectivity is funnelled. The client connectivity path is: application (on the user device) -> local PoP (proxy servers in an edge location) -> block server -> Magic Pocket infrastructure servers -> storage nodes. While the proxy servers in edge locations don't cache any data and can almost be thought of as typical web servers the clients connect through, the other servers such as the block / MP / storage node servers are ordinary x86 servers hosted within Dropbox's own DCs. These servers are multi-sourced as per best practice, and somewhat customised for Dropbox's specific requirements, especially the storage node servers. Storage nodes are customised, high density servers with a capacity of around 1PB of raw data each, using local disks. All servers run a generic version of Ubuntu and run on bare metal rather than as VMs.

Inside each zone, application servers such as the block & Magic Pocket app & DB servers act as gateways for storage requests coming through the edge servers. These also host the metadata mapping for block placement (the block index) in the backend and run sharded MySQL clusters (on SSD storage) to store this information. Cross-zone replication is also initiated asynchronously within this tier.

A cell is a logical entity of physical storage servers (a cluster of storage nodes) and defines the core of Dropbox's proprietary storage backend, which is worth a closer look. These nodes have very large local disks, with each storage server (node) holding around 1PB of storage, and are used as dumb nodes for block-level data storage. The replication table, which runs in memory as a small MySQL DB, stores the logical Bucket <-> Volume <-> Storage node mapping. This is also part of the metadata stack and is stored on app / DB servers with SSD storage.

The Master is the software component within each cell that acts as a janitor, performing back-end tasks such as storage node monitoring, creating storage buckets, and other background maintenance operations. However, the Master is not on the data plane, so it doesn't affect immediate data read / write operations. There's a 1:1 mapping between Master and cell. The Volume Manager (another software component) can be thought of as the data mover / heavy lifter responsible for handling instructions from the Master and performing operations accordingly on the storage nodes. The Volume Manager runs on the actual storage nodes (storage servers) in the back end.

The front end (the interface to the SDS platform) supports simple operations such as Put, Get and Repair. (Details of how this works can be found here.)
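As a sketch of what that narrow surface area implies (the operation names come from the presentation; the signatures are my assumption):

```python
# Assumed signatures for the three front-end operations named above.
from abc import ABC, abstractmethod

class MagicPocketFrontEnd(ABC):
    @abstractmethod
    def put(self, block_id: str, data: bytes) -> None:
        """Write an immutable block into a bucket/volume within a cell."""

    @abstractmethod
    def get(self, block_id: str) -> bytes:
        """Resolve the block index and fetch the block from a storage node."""

    @abstractmethod
    def repair(self, block_id: str) -> None:
        """Re-replicate / reconstruct a block after a disk or node failure."""
```

Keeping the front end this simple pushes all the hard problems (placement, replication, repair) behind a tiny, stable interface.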

Magic Pocket: Storage Servers

Dropbox’s customized, high density storage servers make up the actual back end storage infrastructure. Typically each server has a 40GB NIC, around 90 x high capacity enterprise SATA drives as local disks totalling up to around 1PB of raw space per node, runs a bare metal Ubuntu Linux with the Magic Pocket SDS application code and their life cycle management is heavily automated using proprietary and custom built tools. This set up provides a significantly large fault domain per each storage node given the huge capacity of each, but the wider SDS application and network load balancing capabilities architected in the application itself ensure mitigate or design against a complete failures of each server or a cell. We were treated to a scene of observing how this works in action when these engineering team decided to randomly pull the networking cables out while we were touring the DC, and then also cut the power to a full rack which had zero impact on the normal operations of Dropbox’s service. That was pretty cool to see.

My thoughts

Companies like Dropbox inspire me to think outside the box when it comes to what is possible and how to address modern-day business requirements in innovative ways using technology. Similar to the session on the Open19 project (a LinkedIn-led open hardware initiative) from the LinkedIn engineering team during the SFD12 event last year, this session hugely inspired me about the power of software & hardware engineering, and the impact initiatives like this can have on the wider IT community that we all live and breathe in.

As for the Magic Pocket SDS & HW architecture… I am a big fan, and it's great to see organisations such as Dropbox and Netflix (with its CDN architecture), which epitomise extreme ends of certain use cases, publicly opening up about the backend IT infrastructure powering their solutions, so that the other 99% of enterprise IT folks can learn from and adapt those blueprints where relevant.

It is also important to remember, though, that for normal organisations with typical enterprise IT requirements, such custom-built solutions will not be practical, nor would they be required; often, what they need can be met with a similarly architected, commercially available Software Defined Storage solution, tailored to meet their requirements. The most important thing here is to realise the power of Software Defined Storage. If Dropbox can meet their extreme storage requirements through an SDS solution that operates at a lower cost than a proprietary storage solution, the average corporate or enterprise storage use case has no excuse to keep buying expensive SAN / NAS hardware with a premium price tag. Most enterprise SDS solutions (VMware vSAN, Nutanix, Hedvig, Scality…etc.) have a very similar software and hardware architecture to Dropbox's, and carry a lower cost price point compared to expensive hardware-centric storage solutions from the big vendors like EMC, NetApp, HPe, IBM…etc. So why not look into an SDS solution if your SAN / NAS is up for renewal? You can very likely save significant costs and, at the same time, benefit from software defined innovation, which tends to come quicker when there's no proprietary hardware baggage.

Given Dropbox’s unique scale and storage size, they’ve made a conscious decision to move away for the majority of their storage requirements from AWS (S3 storage) as it they’ve gone past the point where using cloud storage was not economical nor performant enough. But it is also important to remember that they only got to that point through the growth of their business which at the beginning, was only enabled by the agility provided by the very same AWS S3 cloud storage platform they decided to move away from. Most organisations out there are nowhere near the level of scale like Dropbox and therefore its important to remember that for your typical requirements, you can benefit significantly through the clever use of cloud technologies, especially PaaS technologies such as AWS S3, AWS Lambda, Microsoft O365, Azure SQL that provide a ready to use technology solutions platform without you having to build it all from the scratch. In most cases, that freedom and the speed of access can be a worthy trade-off for a slightly higher cost.

Keen to get your thoughts – get involved via comments button below!

Image credit goes to Dropbox!

Chan

VMworld 2017 – vSAN New Announcements & Updates

During VMworld 2017 Vegas, a number of vSAN related product announcements were made, and I was privy to some of those a little earlier than the general public, due to being a vSAN vExpert. I've summarised them below. The embargo on disclosing the details lifts at 3pm PST, which is when this blog post is scheduled to go live automatically. So enjoy! 🙂

vSAN Customer Adoption

As some of you may know, the popularity of vSAN has been growing for a while now as a preferred alternative to legacy SAN vendors when it comes to storing vSphere workloads. The below stats somewhat confirm this growth. I can testify to this personally too, as I've seen a similar increase in the number of our own customers that now consider vSAN the default choice for storage.

Key new Announcements

New vSAN based HCI Acceleration kit availability

This is a new ready node program announced with some OEM HW vendors to provide distributed data center services for edge computing platforms. Consider this to sit somewhere between the vSAN RoBo solution and a full blown main data center vSAN solution. Highlights of the offering are as follows:

  • 3 x Single socket servers
  • Include vSphere STD + vSAN STD (vCenter is excluded)
  • Launch HW partners limited to Fujitsu, Lenovo, Dell & Super Micro only
  • 25% default discount on list price (on both HW & SW)
  • $25K starting price


  • My thoughts: Potentially a good move and an interesting option for those customers who have a main DC elsewhere or are primarily cloud based (including VMware Cloud on AWS). The practicality of vSAN RoBo was always hampered by the fact that it's limited to 25 VMs on 2 nodes. This should slightly increase market adoption; however, the key decision factor will be the pricing. Noticeably, HPe is absent from the initial launch, but I'm guessing they will eventually sign up. Note that you have to have an existing vCenter license elsewhere, as it's not included by default.

vSAN Native Snapshots Announced

A tech preview of native vSAN data protection capabilities through snapshots has been announced, and it will likely be generally available in FY18. vSAN native snapshots will have the following characteristics.

  • Snapshots are all policy driven
  • 5 minute RPO
  • 100 snapshots per VM (see the quick retention maths below)
  • Support for data efficiency services such as dedupe, as well as protection services such as encryption
  • Archival of snapshots will be available to secondary object or NAS storage (no specific vendor support required) or even Cloud (S3?)
  • Replication of snapshots will be available to a DR site.

  • My thoughts: This was a hot request and something that was a long time coming. Most vSAN solutions need a 3rd party data center backup product today, and often SAN vendors used to provide this type of snapshot-based backup solution from the array (the NetApp SnapManager suite, for example) that vSAN couldn't match. Well, it can now; and since it's done at the SW layer, it's array independent and you can replicate or archive snapshots anywhere, even in the cloud. This would be more than sufficient for lots of customers with a smaller or point use case to not bother buying backup licenses elsewhere to protect that vSphere workload. This is likely going to be popular. I will be testing this out in our lab as soon as the beta code is available, to ensure the snaps don't have a performance penalty.
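For context, here's the quick retention maths on those announced limits (my arithmetic, not a VMware statement):

```python
# 100 snapshots at a 5-minute RPO covers ~8.3 hours at finest granularity,
# so longer retention would rely on tiered schedules or archival to
# secondary / cloud storage as described above.
rpo_minutes = 5
max_snapshots_per_vm = 100
window_hours = rpo_minutes * max_snapshots_per_vm / 60
print(f"{window_hours:.1f} hours of 5-minute point-in-time history")  # 8.3
```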


vSAN on VMware Cloud on AWS Announced

Well, this is not massively new, but vSAN is a key part of VMware Cloud on AWS; the vSAN storage layer provides all the on-premises vSAN goodness while also providing DR to VMware Cloud (using snapshot replication), with orchestration via SRM.


vSAN Storage Platform for Containers Announced

Similar to the NSX-T announcement with K8s (Kubernetes) support, vSAN also provides persistent storage to both K8s and Docker container instances in order to run stateful containers.

This capability came from the VMware open-source project code-named Project Hatchway, and it's freely available via GitHub (https://vmware.github.io/hatchway/) now.

  • My thoughts: I really like this one and the approach VMware is taking to make the product set more and more microservices (container-based application) friendly. This will likely be popular with many.


So, all in all, not many large or significant announcements for vSAN at VMworld 2017 Vegas (yet), but this is to be expected, as the latest version, vSAN 6.6.1, was only recently released with a ton of updates. The key takeaway for me is that the popularity of vSAN is clearly growing (well, I knew this already anyway), and that the current and future announcements are making vSAN a fully fledged SAN / NAS replacement for vSphere storage, with more and more native security, efficiency and availability services, which is great for customers.

Cheers

Chan