Apple WWDC 2017 – Artificial Intelligence, Virtual Reality & Mixed Reality

Introduction

As a technologist, I like to stay close to key new developments and trends in the world of digital technology to understand how they can help users address common day-to-day problems more efficiently. Digital disruption and the technologies behind it, such as Artificial Intelligence (AI), the Internet of Things (IoT), Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR), are hot topics because they have the potential to significantly reshape how consumers will consume products and services going forward. I am a keen follower of these disruptive technologies because, in my view, the potential impact they can have on traditional businesses in an increasingly digital, connected world is huge.

What I heard today from Apple, the largest tech vendor on the planet, about how they intend to use various AI technologies, along with VR and AR, in their next product upgrades across the iPhone, iPad, Apple Watch, App Store, Mac etc. made me want to summarise those announcements and add my thoughts on how Apple will potentially lead the way to mass adoption of such digital technologies by many organisations of tomorrow.

Apple’s WWDC 2017 announcements

I’ve been an Apple fan since the first iPhone launch, as they have been the prime example of a tech vendor that utilises cutting-edge technologies to provide elegant solutions to day-to-day requirements in a simple, effective manner that delivers a rich user experience. I practically live on my iPhone every day for work and non-work activities, and I also appreciate their other ecosystem products such as the MacBook, Apple Watch, Apple TV and the iPad. This is typically not because they are technologically so advanced, but simply because they provide a simple, seamless user experience that increases my productivity during day-to-day activities.

So naturally I was keen to find out about the latest announcements from Apple’s Worldwide Developers Conference, held earlier today in San Jose. Having listened to the event and the announcements, I was excited by the new product and software upgrades, but more than that, I was super excited about a couple of related technology integrations Apple are coming out with, which include a mix of AI, VR and AR, to provide an even better user experience by integrating these technology advancements into their product offerings.

Now before I go any further, I want to highlight that this is NOT a summary of their new product announcements. What interested me was not so much the new Apple products, but how Apple, as a pioneer in using cutting-edge technologies to create positive user experiences like no other technology vendor on the planet, is going to use these potentially revolutionary digital technologies. This is relevant to every business that manufactures a product or provides a service or solution to its customers, as anyone can potentially incorporate the same capabilities in a similar, or even more creative and innovative, manner than Apple to provide an equally positive user experience.

Use of Artificial Intelligence

Today Apple announced the increased use of various AI technologies across future Apple products, as summarised below:

  • Increased use of Artificial Intelligence technologies by the personal assistant Siri, to provide a more positive and more personalised user experience
    • In the upcoming watchOS 4 for Apple Watch, AI technologies such as machine learning will power the new Siri watch face, so that Siri can provide you with dynamic updates that are specifically relevant to you and what you do (context awareness)
    • The new iOS 11 will include a new voice for Siri, which now uses deep learning behind the scenes to offer a more natural, expressive voice that sounds less machine and more human
    • Siri will also use machine learning on each device (“on-device learning”) to understand what is most relevant to you based on what you do on your device, so that Siri can make more personalised interactions. In other words, Siri is becoming more context aware thanks to machine learning, providing a truly personal assistant service unique to each user, including predictive tips based on what you are likely to want to do or use next
    • Siri will use machine learning to automatically memorise new words from the content you read (i.e. News), so these words are automatically included in the dictionary and predictive text if you want to type them

 

  • Use of machine learning in iOS 11 within the Photos app to enable various new capabilities that make life easier with your photos
    • The next version of macOS, code-named High Sierra, will support additional features in the Photos app, including advanced face recognition capabilities that utilise AI technologies such as advanced convolutional neural networks to let you group and filter your photos based on who is in them
    • Machine learning capabilities will also be used to automatically understand the context of each photo based on its content, to identify photos from events such as sporting events, weddings etc. and automatically group them into events and memories
    • Computer vision capabilities will create seamless loops on Live Photos
    • Machine learning will drive palm rejection on the iPad when writing with the Apple Pencil
    • Most machine learning capabilities are now available to third-party developers via iOS APIs, such as the Vision API (enabling iOS app developers to harness machine learning for face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking and image registration) and the Natural Language API (providing language identification, tokenisation, lemmatisation, part-of-speech tagging and named entity recognition)
    • Introduction of a machine learning model converter, so that third-party ML models can be converted to native iOS 11 Core ML functions

 

  • Use of machine learning to improve graphics on iOS 11
    • The macOS High Sierra updates will include Metal 2 (the Apple API that gives app developers near-direct access to GPU capabilities), which now integrates machine learning into graphics processing to provide advanced capabilities such as Metal performance shaders, recurrent neural network kernels, binary convolution, dilated convolution, L2-norm pooling and dilated pooling (https://developer.apple.com/metal/)
    • The newly announced iMac Pro graphics, powered by AMD Radeon Vega, can provide up to 22 teraflops of half-precision compute power, which is specifically relevant for machine learning workloads
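As a side note, “dilated convolution” is one of those Metal 2 primitives whose meaning is easy to miss. Here is a minimal, pure-Python sketch of a 1-D dilated convolution, purely for illustration (Metal 2 implements this as GPU kernels, not Python):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D dilated convolution (valid padding): kernel taps are spaced
    `dilation` samples apart, widening the receptive field without
    adding any extra weights."""
    span = (len(kernel) - 1) * dilation + 1  # receptive field width
    out = []
    for i in range(len(signal) - span + 1):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * signal[i + k * dilation]
        out.append(acc)
    return out

# dilation=1 is an ordinary convolution; dilation=2 skips every other sample
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1], dilation=2))  # [4.0, 6.0, 8.0, 10.0]
```

The same trick in two dimensions is what lets neural networks see a wider area of an image per layer, which is why it appears alongside the other machine learning kernels in the Metal 2 list.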

 

Use of Virtual Reality & Augmented Reality

  • Announcement of the Metal API for Virtual Reality, to be used by developers. This includes VR integration in the macOS High Sierra Metal 2 API, enabling features such as a VR-optimised display pipeline for video editing in VR, along with related updates such as viewport arrays, system trace stereo timelines, GPU queue priorities and frame debugger stereoscopic visualisation
  • Availability of ARKit for iOS 11 to create Augmented Reality straight from the iPhone, using its camera and built-in machine learning to identify content in live video in real time

 

Use of IoT capabilities

  • Apple Watch integration for bi-directional information synchronisation between the Apple Watch and ordinary gym equipment: your Apple Watch will now act as an IoT gateway to typical gym equipment such as a treadmill or cross trainer, so the watch provides more accurate measurements and the gym equipment can adjust the workout based on those readings
  • watchOS 4 will also provide Core Bluetooth connectivity to other devices, such as various healthcare tools, opening up connectivity to those devices through the Apple Watch

 

My thoughts

The potential for digital technologies such as AI, AR and VR in a typical corporate or enterprise environment to create a better product, service or solution offering, as Apple has done, is immense, and is often limited only by one’s creativity and imagination. Many organisations around the world, from other tech or product vendors to Independent Software Vendors, down to an ordinary organisation like a high street shop or a supermarket, can benefit from the creative application of new digital technologies such as Artificial Intelligence, Augmented Reality and the Internet of Things in their product, service or solution offerings to provide customers with a richer user experience as well as exciting new solutions. AI and AR are hot topics in the industry: some organisations are already evaluating their use, while others already benefit from these technologies being made easily accessible to the enterprise through public cloud platforms (for example, the Microsoft Cortana Analytics and Azure Machine Learning capabilities available on Microsoft Azure). But there are also a large number of organisations that are not yet fully investigating how these technologies could make their business more innovative, more differentiated or, at the very least, more efficient.

If you belong to the latter group, I would highly encourage you to start thinking about how these technologies can be creatively adopted by your business. This applies to any organisation of any size in the increasingly digitally connected world of today. If you have a trusted IT partner, I’d encourage you to talk to them about this too, as the chances are they will have more collective experience in helping similar businesses adopt such technologies, which is more beneficial than trying to get there on your own, especially if you are new to it all.

Digital disruption is here to stay, and Apple have just shown how the advanced technologies behind it can be used to create better products and solutions for everyday customers. Pretty soon, capabilities backed by AI, AR and IoT will become the norm rather than the exception, and how will your business compete if you are not adequately prepared to embrace them?

Keen to get your thoughts?

 

You can watch the recorded version of the Apple WWDC 2017 event here.

Image credit goes to #Apple

 

#Apple #WWDC #2017 #AI #DeepLearning #MachineLearning #ComputerVision #IoT #AR #AugmentedReality #VR #VirtualReality #DigitalDisruption #Azure #Microsoft

 

Impact from Public Cloud on the storage industry – An insight from SNIA at #SFD12

As a part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel storage software team talk about the future of storage from an Intel perspective (you can read all about it here). While that was great, just before that session we were treated to another similarly interesting session by SNIA, the Storage Networking Industry Association, and I wanted to brief everyone on what I learnt during that session, which I thought was very relevant to everyone with a vested interest in the field of IT today.

The presenters were Michael Oros, Executive Director at SNIA, along with Mark Carlson, who co-chairs the SNIA Technical Council.

Introduction to SNIA

SNIA is a non-profit organisation that was formed 20 years ago to deal with the interoperability challenges of network storage from different tech vendors. Today there are over 160 active member organisations (tech vendors) who work together behind closed doors to set standards and improve interoperability of their often competing tech solutions in the real world. The alphabetical list of all SNIA members is available here, and it includes key network and storage vendors such as Cisco, Broadcom, Brocade, Dell, Hitachi, HPE, IBM, Intel, Microsoft, NetApp, Samsung and VMware. Effectively, anyone using most enterprise datacenter technologies has likely benefited from SNIA-defined industry standards and interoperability.

Some of the existing storage-related initiatives SNIA are working on include the following.

 

 

Hyperscaler (Public Cloud) Storage Platforms

According to SNIA, public cloud platforms, AKA hyperscalers, such as AWS, Azure, Facebook, Google, Alibaba etc., are now starting to make an impact on how disk drives are designed and manufactured, given their large consumption of storage drives and therefore their vast buying power. In order to understand the impact of this on the rest of the storage industry, let’s first clarify a few basic points about these hyperscaler cloud platforms (for those who didn’t know):

  • Public cloud providers DO NOT buy enterprise hardware components like the average enterprise customer
    • They DO NOT buy enterprise storage systems (sales people, please read “no EMC, no NetApp, no HPE 3PAR etc.”)
    • They DO NOT buy enterprise networking gear (sales people, please read “no Cisco switches, no Brocade switches, no HPE switches etc.”)
    • They DO NOT buy enterprise servers from server manufacturers (sales people, please read “no HPE/Dell/Cisco UCS servers etc.”)
  • They build most things in-house
    • This often includes servers, network switches etc.
  • They DO bulk-buy disk drives direct from the disk manufacturers and use home-grown Software Defined Storage techniques to provision that storage

Now if you think about it, large enterprise storage vendors like Dell and NetApp, who normally bulk-buy disk drives from manufacturers such as Samsung, Hitachi and Seagate, have always had a certain level of influence over how those drives are made, given their economies of scale (bulk purchasing power). Now, however, public cloud providers, who also bulk-buy, often in quantities far bigger than those storage vendors, have become hugely influential over how these drives are made, to the point that their influence exceeds that of the legacy storage vendors. It has grown to the level that public cloud providers now have a direct input into the initial design of these components (i.e. disk drives etc.) and how they are manufactured, simply due to their enormous bulk purchasing power, as well as their ability, given their global data center footprint, to test drive performance at a scale that was not previously possible even for the drive manufacturers.

 

The hyperscalers’ focus on Software Defined Storage technologies to aggregate the disparate disk drives found in their data center servers is inevitably leading to architectural changes in how disk drives need to be made going forward. For example, most legacy enterprise storage arrays rely on old RAID technology to rebuild data during drive failures, and there are various background tasks implemented in disk drive firmware, such as ECC and I/O retry operations during failures, which add to the overall latency of the drive. However, with modern SDS technologies (in use within public cloud platforms as well as in some new enterprise SDS vendors’ tech), multiple copies of data are held on multiple drives automatically as part of the software-defined architecture (e.g. erasure coding), which means those drive-level background tasks such as ECC and retry mechanisms are no longer required.
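To see why the drive-level redundancy becomes unnecessary, here is a toy sketch of the simplest form of erasure coding: a single XOR parity block. Real SDS platforms use more sophisticated schemes (Reed–Solomon and the like), but the principle is the same: the higher software layer, not the drive firmware, owns data recovery.

```python
def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of the data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving_blocks, parity):
    """Rebuild a single lost block: XOR the survivors with the parity."""
    return xor_parity(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripe spread across three drives
p = xor_parity(data)                 # parity stored on a fourth drive
# the drive holding b"BBBB" fails; software rebuilds it from the rest:
assert rebuild([data[0], data[2]], p) == b"BBBB"
```

Because any single lost block can be recomputed this way at the software layer, a slow or failing drive can simply be bypassed rather than waiting for its firmware to retry the read.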

For example, SNIA highlighted Eric Brewer, VP of Infrastructure at Google, who described the key metrics for a modern disk drive as:

  • IOPS
  • Capacity
  • Lower tail latency (the long tail of latencies on a drive, arguably caused by various background tasks, typically produces a 2-10x slower response from a disk in a RAID group, which causes disk- and SSD-based RAID stripes to experience at least one slow drive 1.5%-2.2% of the time)
  • Security
  • Lower TCO
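The tail latency figure above is easy to reproduce with a back-of-the-envelope calculation. Assuming, purely for illustration, that each drive is in its slow “long tail” about 0.15% of the time and that slowdowns are independent, a full-stripe I/O across 10 drives hits at least one slow drive roughly 1.5% of the time:

```python
def stripe_slow_probability(per_drive_slow, drives):
    """Probability that a full-stripe I/O touches at least one slow
    drive, assuming independent slowdowns: 1 - (1 - p)^n."""
    return 1 - (1 - per_drive_slow) ** drives

# Illustrative (assumed) figures: p = 0.15% per drive, 10-wide stripe
print(f"{stripe_slow_probability(0.0015, 10):.2%}")  # ~1.5%
```

Since the whole stripe runs at the speed of its slowest member, that one laggard drags down the entire I/O, which is exactly why the hyperscalers care so much about taming the tail.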

So in a nutshell, public cloud platform providers are now mandating various storage standards that disk drive manufacturers have to factor into their drive designs, such that drives are engineered from the ground up to work with the software-defined architectures in use on these cloud platforms. This means most native disk firmware operations are made redundant; instead, the drive manufacturer provides APIs through which the cloud platform provider’s own software logic controls those background operations itself, based on its software-defined storage architecture.

Some of the key results of this approach include the following architectural designs for public cloud storage drives:

  • Higher layer software handles data availability and is resilient to component failure so the drive firmware itself doesn’t have to.
    • Reduces latency
  • Custom Data Center monitoring (telemetry), and management (configuration) software monitors the hardware and software health of the storage infrastructure so the drive firmware doesn’t have to
    • The Data Center monitoring software may detect these slow drives and mark them as failed (ref Microsoft Azure) to eliminate the latency issue.
    • The Software Defined Storage then automatically finds new places to replicate the data and protection information that was on that drive
  • Primary model has been Direct Attached Storage (DAS) with CPU (memory, I/O) sized to the servicing needs of however many drives of what type can fit in a rack’s tray (or two) – See the OCP Honey Badger
  • With the advent of higher speed interfaces (PCI NVMe) SSDs are moving off of the motherboard onto an extended PCIe bus shared with multiple hosts and JBOF enclosure trays – See the OCP Lightning proposal
  • Remove the drive’s ability to schedule advanced background operations such as garbage collection, scrubbing, remapping, cache flushes, continuous self-tests etc. on its own, and allow the host to schedule these latency-increasing drive maintenance operations when it sees fit – effectively removing the drive control plane and moving it up to the control of the public cloud platform (SAS = SBC-4 Background Operation Control, SATA = ACS-4 Advanced Background Operations feature set, NVMe = available through NVMe Sets)
    • Reduces unpredictable latency fluctuations & tail latency

The result of all this is that public cloud platform providers such as Microsoft, Google and Amazon are now also involved in setting industry standards through organisations such as SNIA, a task previously done only by hardware manufacturers. An example is the DePop standard, now approved at T13, which essentially defines a standard where the storage host, rather than the disk firmware, shrinks the usable size of the drive by removing the poor-performing (slow) physical elements, such as drive sectors, from the LBA address space. The most interesting part is that drive manufacturers are now required to supply replacement drives once enough usable space has shrunk to match the capacity of a full drive, without necessarily getting the old drives back (i.e. public cloud providers only pay for usable capacity, and any unusable capacity is replenished with new drives), which is a totally different operational and commercial model from that of legacy storage vendors who consume drives from drive manufacturers.
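As a toy illustration of the DePop idea (a conceptual model only, not the actual T13 command set), the host removes slow physical elements from the LBA address space and the drive’s usable capacity shrinks while it stays in service:

```python
class DepopulatedDrive:
    """Toy model of DePop-style depopulation: the host removes
    poor-performing physical elements from the address space, and
    usable capacity shrinks while the drive keeps serving I/O."""

    def __init__(self, total_sectors):
        self.usable = set(range(total_sectors))

    def depopulate(self, slow_sectors):
        # Host-driven: firmware no longer remaps these on its own
        self.usable -= set(slow_sectors)

    @property
    def usable_capacity(self):
        return len(self.usable)

drive = DepopulatedDrive(total_sectors=1000)
drive.depopulate([17, 99, 640])       # host marks three slow sectors
assert drive.usable_capacity == 997   # drive stays in service, smaller
```

Once the aggregate shrinkage across a fleet equals a full drive’s worth of capacity, the manufacturer ships a new drive, which is the commercial twist described above.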

Another concept pioneered by the public cloud providers is called Streams, which maps lower-level drive blocks to an upper-level object, such as a file, in such a way that all the blocks making up the file object are stored contiguously. This simplifies the effect of a TRIM or SCSI UNMAP command (executed when the file is deleted from the file system), reducing the delete penalty and causing the least amount of damage to SSD drives, extending their durability.
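A minimal sketch of the Streams idea (conceptual only, not an actual NVMe Streams implementation): because each file’s blocks are allocated contiguously within its stream, deleting the file yields one contiguous TRIM range rather than scattered blocks.

```python
class StreamAllocator:
    """Toy sketch of Streams: blocks belonging to one file (stream)
    are placed contiguously, so deleting the file TRIMs a single
    contiguous range instead of scattered blocks."""

    def __init__(self):
        self.next_block = 0
        self.streams = {}                 # file name -> (start, length)

    def write(self, name, num_blocks):
        self.streams[name] = (self.next_block, num_blocks)
        self.next_block += num_blocks

    def delete(self, name):
        start, length = self.streams.pop(name)
        return list(range(start, start + length))  # one contiguous TRIM range

alloc = StreamAllocator()
alloc.write("a.log", 4)
alloc.write("b.log", 3)
assert alloc.delete("a.log") == [0, 1, 2, 3]  # contiguous, SSD-friendly
```

On a real SSD, a contiguous invalidated range lines up with whole flash erase blocks, so garbage collection has far less live data to copy around, which is where the durability win comes from.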

 

Future proposals from Public Cloud platforms

SNIA also mentioned future focus areas from these public cloud providers, such as:

  • Hyperscalers (Google, Microsoft Azure, Facebook, Amazon) are trying to get SSD vendors to expose more information about internal organization of the drives
    • Goal to have 200 µs Read response and 99.9999% guarantee for NAND devices
  • I/O Determinism means the host can control the order of writes and reads to the device to get predictable response times:
    • Window of reading – deterministic responses
    • Window of writing and background – non-deterministic responses
  • The birth of the ODM – Original Design Manufacturer
    • There is a new category of storage vendor, the Original Design Manufacturer (ODM), who package up best-in-class commodity storage devices into racks according to customer specifications and who operate at much lower margins
    • They may leverage hardware/software designs from the Open Compute Project (OCP), or a similar effort in China called Scorpio, now under an organisation called the Open Data Center Committee (ODCC), as well as other available hardware/software designs
    • SNIA also mentioned a few examples of large global enterprise organisations, such as a large bank, taking the approach of using ODMs to build a custom storage platform, achieving over 50% cost savings compared to traditional enterprise storage

 

My Thoughts

All of these changes introduced by the public cloud platforms are set to change the rest of the storage industry, and how it fundamentally operates, which I believe will be good for end customers. Public cloud providers are often software vendors who approach everything with a software-centric solution, typically a highly cost-efficient architecture using the cheapest commodity hardware underpinned by intelligent software. This will likely reshape the legacy storage industry too, and we are already seeing the early signs today in the sudden growth of enterprise-focused Software Defined Storage vendors and in legacy storage vendors struggling with their storage revenue. All public cloud computing and storage platforms evolve continuously for cost efficiency, and each of their innovations in how storage is designed, built and consumed will trickle down to enterprise data centers in some shape or form to increase overall efficiency, which surely is only a good thing, at least in my view. Smart, software-focused enterprise storage vendors will take note of such trends and adapt accordingly (SNIA mentioned that NetApp, for example, implemented the Streams commands on the front end of their arrays to increase the life of the SSD media), whereas legacy storage and hardware vendors who are effectively still hugging their tin will likely find future survival more than challenging.

Also, the concept of ODMs really interests me, and I can see the use of ODMs increasing further as more and more customers wake up to the fact that they have been overpaying for their data center storage for the past 10-20 years due to the historically exclusive capabilities within the storage industry. With more focus on a Software Defined approach, there are potentially large cost savings to be had by following the ODM approach, especially if you are an organisation of a size that would benefit from the substantial savings.

I would be glad to get your thoughts through the comments below.

 

If you are interested in the full SNIA session, a recording of the video stream is available here and I’d recommend you watch it, especially if you are in the storage industry.

 

Thanks

Chan

P.S. Slide credit goes to SNIA and TFD

VMware & DataGravity Solution – Data Management For the Digital Enterprise

 

 

Yesterday, I had the privilege of being invited to an exclusive VMware #vExpert-only webinar organised by the vExpert community manager, Corey Romero, and DataGravity, one of VMware’s partner ISVs, to get a closer look at the DataGravity solution and its integration with VMware. My initial impression was that it’s a good solution and a good match with VMware technology, and I kind of liked what I saw. So I decided to write a quick post to share what I’ve learned.

DataGravity Introduction

The DataGravity (DG from now on) solution appears to be all about data management, and in particular data management in a virtualised data center. In a nutshell, DG is about providing a simple, virtualisation-friendly data management solution that, amongst many other things, focuses on the following key requirements, which are of primary importance to me.

  • Data awareness – Understand the different types of data available within VMs, structured or unstructured, along with various metadata about that data. It automatically keeps track of data locations, status changes and various other metadata, including any sensitive content (i.e. credit card information), in the form of an easy-to-read, dashboard-style interface
  • Data protection & security – DG tracks sensitive data and provides a complete audit trail, including access history, to help remediate any potential loss or compromise of data

The DG solution is currently specific to VMware vSphere virtual datacenter platforms only and serves four key use cases, as shown below.

Talking about the data visualisation itself, DG claims to provide a 360-degree view of all the data that resides within your virtualised datacenter (on VMware vSphere), and having seen the UI in the live demo, I like the visualisation, which very much resembles the interface of VMware’s own vRealize Operations.

The unified, tile-based view of all the data in your datacenter, with a vROps-like context-aware UI, makes navigating through the information pretty self-explanatory.

Some of the information that DG automatically tracks on all the data residing in the VMware datacenter is shown below.

Some of the cool capabilities DG has when it comes to data protection include behaviour-based data protection, where it proactively monitors user and file activity and mitigates potential attacks by sensing anomalous behaviour and taking preventive measures, from orchestrating protection points and alerting administrators to even blocking user access automatically.

During a recovery scenario, DG claims to assemble the forensic information needed to perform a quick recovery, such as a catalogue of files and incremental version information, user activity information and other important metadata such as the known good state of various files, enabling recovery in a few clicks.

Some Details

During the presentation, Dave Stevens (Technical Evangelist) took all the vExperts through the DG solution in some detail, including its integration with VMware vSphere, which I intend to share below for the benefit of others (sales people: feel free to skip this section and read the next).

The whole DG solution is deployed as a simple OVA into vCenter and typically requires connecting the appliance to Microsoft Active Directory (for user access tracking) as a one-off initial task. It will then perform an automated initial discovery of data; the important thing to note here is that it DOES NOT use an agent in each VM, but simply uses VMware VADP (now known as the vSphere Storage APIs – Data Protection) to silently interrogate the data that lives inside the VMs in the data center, with minimal overhead. Some specifics around this overhead are as follows:

  • File indexing is done at a DiscoveryPoint (snapshot), either on a schedule or user-driven (no impact to real-time access from a performance point of view)
  • Real-time access tracking overhead is minimal to non-existent
    • Real-time user activity tracking uses about 200 KB of memory
    • Network bandwidth is about 50 kbps per VM
    • Less than 1% of CPU

From an integration point of view, while the DG solution integrates with vSphere VMs as above irrespective of the underlying storage platform, it also has the ability to integrate with specific storage vendors (licensing prerequisites apply).

Once the initial data discovery is complete, further discoveries are done on an incremental basis, and the management UI is a simple web interface that looks pretty neat.

Similar to the VMware vROps UI, for example, the whole UI is context aware, so depending on what object you select, you are presented with stats in the context of the selected object(s).

The usage tracking is quite granular and keeps track of all types of user access to data in the inventory, which is handy.


 

Searching for files is simple, and you can also search using tags, which are simple binary expressions. Tags can be grouped into profiles to search against, which looks pretty simple and efficient.
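As a purely hypothetical illustration of how tag-based search might work conceptually (this is not DataGravity’s actual API; the field names and profile are invented), tags can be modelled as binary predicates over file metadata, and a profile as a group of tags that must all match:

```python
# Hypothetical sketch of tag/profile search: a tag is a binary
# (true/false) predicate over file metadata; a profile is a group
# of tags that must all hold for a file to match.
def make_tag(field, value):
    return lambda meta: meta.get(field) == value

def matches_profile(file_meta, profile):
    """A file matches a profile when every tag in the profile is true."""
    return all(tag(file_meta) for tag in profile)

# Example profile: unencrypted files that contain card numbers
pci_profile = [make_tag("contains_card_numbers", True),
               make_tag("encrypted", False)]

files = [
    {"name": "orders.csv", "contains_card_numbers": True, "encrypted": False},
    {"name": "notes.txt", "contains_card_numbers": False, "encrypted": False},
]
hits = [f["name"] for f in files if matches_profile(f, pci_profile)]
assert hits == ["orders.csv"]
```

The appeal of this model is that profiles compose: new tags can be added to a profile without changing the search machinery at all.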

I know I’ve mentioned this already, but the simple, intuitive user interface makes consuming the information about all your data in a single-pane-of-glass manner very attractive.

Current Limitations

There are some current limitations to be aware of however and some of the important ones include,

  • Currently it doesn’t look inside structured data files (i.e. database files)
    • Covers about 600 various file types
  • File content analytics is available for Windows VMs only at present
    • Linux may follow soon?
  • VMC (VMware Cloud on AWS) & VCF (Vmware Cloud Foundation) support is not there (yet)
    • Is this to be announced during a potential big event?
  • No current availability on other public cloud platforms such as AWS or Azure (yet)

 

My Thoughts

I like the solution and its capabilities for various reasons. Primarily, it’s because focusing on the data that resides in your data center is more important now than it has ever been. Most organisations simply do not have a clue about the type of data they hold in the datacenter, typically scattered around various servers, systems, applications etc., often duplicated and, most importantly, left untracked in terms of its current relevance or even its actual usage and who accesses what. Often, most data generated by an organisation serves its initial purpose within a certain initial period, and that data is then simply kept on the systems forever, intentionally or unintentionally. This is a costly exercise, especially on the storage front, and you are typically filling your SAN storage with stale data. With a simple yet intelligent data management solution like DG, you now have the ability to automatically track data and its ageing across the datacenter, and use that awareness to potentially move stale data to a different, cheaper tier, such as a public cloud storage platform.
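As a hypothetical sketch of that stale-data idea (the file names and retention window here are invented purely for illustration), once last-access metadata is tracked per file, identifying candidates for a cheaper tier is a simple filter:

```python
from datetime import datetime, timedelta

def stale_files(catalog, days=365, now=None):
    """Return files whose last access falls outside the retention
    window: candidates for moving to a cheaper storage tier."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [name for name, last_access in catalog.items()
            if last_access < cutoff]

now = datetime(2017, 6, 1)
catalog = {
    "q1-report.xlsx": datetime(2017, 4, 12),   # recently used: keep on SAN
    "2009-archive.zip": datetime(2010, 1, 3),  # stale: tier to cheap storage
}
assert stale_files(catalog, days=365, now=now) == ["2009-archive.zip"]
```

The hard part in practice is, of course, collecting that last-access metadata reliably across an entire datacenter, which is exactly the gap a data-awareness tool is meant to fill.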

Furthermore, a lack of understanding of data governance, specifically not monitoring data access across the datacenter, is another issue: many organisations do not collectively know what type of data is available where within the datacenter, or how secure that data is, including its access and usage history. Data security is probably the most important topic in the industry today, as organisations are increasingly becoming digital thanks to the digital revolution / Digital Enterprise phenomenon (in other words, every organisation is now becoming digital), and a guaranteed by-product of this is more and more DATA being generated, which often includes all, if not most, of an organisation’s intellectual property. If there’s no credible way of providing a security-focused data management solution for such data, you are risking losing the livelihood of your organisation and its potential survival in a fiercely competitive global economy.

It is important to note that some regulatory compliance regimes have always enforced the use of data management and governance solutions, such as DG, to track such information about data and its security for certain types of data platform (i.e. PCI for credit card information etc.). But no such requirement existed for all the types of data that live in your datacenter. This is about to change, at least here in Europe, thanks to the European GDPR (General Data Protection Regulation), which now legally obliges every organisation to be able to provide an auditable history of all the types of data they hold, and most organisations I know do not have a credible solution covering the whole datacenter to meet such demands regarding their data today.

A simple, easily integratable solution with little overhead like DataGravity that, for the most part, harnesses the capabilities of the underlying infrastructure to track and manage the data that lives on it could be extremely attractive to many customers. Most customers out there today use VMware vSphere as their preferred virtualisation platform, and the obvious integration with vSphere will likely work in favour of DG. I have already signed up for an NFR download so I can deploy this software in my own lab and understand in detail how things work, and I will aim to publish a detailed deep-dive post on that soon. But in the meantime, I’d encourage anyone that runs a VMware vSphere based datacenter and is concerned about data management & security to check the DG solution out!!

Keen to get your thoughts if you are already using this in your organisation!

 

Cheers

Chan

Slide credit to VMware & DataGravity!

Storage Futures With Intel Software From #SFD12

 

As a part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel Storage software team on the future of storage from an Intel perspective. This was a great session presented by Jonathan Stern (Intel Solutions Architect) and Tony Luck (Principal Engineer), and this post summarises a few things I’ve learnt during those sessions that I thought were quite interesting for everyone. (Prior to this session we also had a session from SNIA on the future of storage industry standards, but I think that deserves a dedicated post so I won’t mention those here – stay tuned for a SNIA event specific post soon!)

First session from Intel was on the future of storage by Jonathan. It’s probably fair to say Jonathan was by far the most engaging presenter out of all the SFD12 presenters and he covered somewhat of a deep dive on the Intel plans for storage, specifically on the software side of things and the main focus was around the Intel Storage Performance Development Kit (SPDK) which Intel seem to think is going to be a key part of the future of storage efficiency enhancements.

The second session with Tony was about Intel Resource Director Technology (addresses shared resource contention that happens inside an Intel processor in processor cache) which, in all honesty was not something most of us storage or infrastructure guys need to know in detail. So my post below is more focused on Jonathan’s session only.

Future Of Storage

As far as Intel is concerned, there are 3 key areas when it comes to the future of storage that need to be looked at carefully.

  • Hyper-Scale Cloud
  • Hyper-Convergence
  • Non-Volatile memory

To put this into some context, see the below revenue projections from the Wikibon Server SAN research project 2015, comparing:

  1. Traditional Enterprise storage such as SAN, NAS, DAS (Read “EMC, Dell, NetApp, HPe”)
  2. Enterprise server SAN storage (Read “Software Defined Storage OR Hyper-Converged with commodity hardware “)
  3. Hyperscale server SAN (Read “Public cloud”)

It is a known fact within the storage industry that public cloud storage platforms underpinned by cheap, commodity hardware and intelligent software provide users with an easy to consume, easily available and most importantly non-CAPEX storage platform that most legacy storage vendors find hard to compete with. As such, the net new growth in global storage revenue as a whole from around 2012 has been predominantly within the public cloud (hyperscaler) space, while the rest of the storage market (non-public cloud enterprise storage) as a whole has somewhat stagnated.

This somewhat stagnated market was traditionally dominated by a few storage stalwarts such as EMC, NetApp, Dell, HPe…etc. However, the rise of server based SAN solutions, where commodity servers with local drives are combined with intelligent software to make a virtual SAN / storage pool (SDS/HCI technologies), has made matters worse for these legacy storage vendors, and such storage solutions are projected to eat further into the traditional enterprise storage landscape within the next 4 years. This is already evident in the recent popularity & growth of SDS/HCI solutions such as VMware VSAN, Nutanix, Scality and Hedvig while, at the same time, traditional storage vendors are reporting shrinking storage revenue. So much so that even some of the legacy enterprise storage vendors like EMC & HPe have come up with their own SDS / HCI offerings (EMC ViPR, HPe StoreVirtual, the announcement around a SolidFire based HCI solution…etc.) or partnered up with SDS/HCI vendors (EMC VxRail…etc.) to hedge their bets against a losing backdrop of traditional enterprise storage.

 

If you study the forecast into the future, around 2020-2022, it is estimated that the traditional enterprise storage market’s revenue & market share will be squeezed even further by even more rapid growth of server based SAN solutions such as SDS and HCI. (Good luck to legacy storage folks)

An estimate from EMC suggests that by 2020, all primary storage for production applications will sit on flash based drives, which precisely coincides with the timelines in the above forecast where the growth of Enterprise server SAN storage is set to accelerate between 2019-2022. According to Intel, one of the main reasons behind this forecasted revenue growth for enterprise server SAN solutions is the development of Non-Volatile Memory (NVMe) based technologies, which make it possible to achieve very low latency through direct attached (read “locally attached”) NVMe drives along with clever & efficient software designed to harness that low latency. In other words, the drop in drive access latency will make Enterprise server SAN solutions more appealing to customers, who will favour Software Defined, Hyper-Converged storage solutions over external, array based storage solutions in the immediate future, and the legacy storage market will continue to shrink further and further.

I can relate to this prediction somewhat as I work for a channel partner of most of these legacy storage vendors, and I too have seen first hand the drop in legacy storage revenue from our own customers, which reasonably backs this theory.

 

Challenges?

With the increasing push for Hyper-Convergence with data locality, latency becomes an important consideration. As such, Intel’s (& the rest of the storage industry’s) main focus going into the future is primarily around reducing the latency penalty incurred during a storage IO cycle as much as possible. The imminent release of next gen storage media from Intel as a better alternative to NAND (which comes with inherent challenges, such as tail latency issues, that are difficult to get around) was mentioned without any specific details. I’m sure that was a reference to the Intel 3D XPoint drives (only just this week announced officially by Intel http://www.intel.com/content/www/us/en/solid-state-drives/optane-solid-state-drives-dc-p4800x-series.html) and, based on the published stats, the projected drive latencies are in the region of < 10μs (sequential IO) and < 200μs (random IO), which is super impressive compared to today’s ordinary NAND based NVMe SSD drives.

This however presents a concern: the current storage software stack, which processes IO through the CPU via costly context switching, also needs to be optimised in order to truly benefit from this massive drop in drive latency. In other words, the level of dependency on the CPU for IO processing needs to be removed or minimised through clever software optimisation (the CPU has long been the main IO bottleneck due to how MSI-X interrupts are handled by the CPU during IO operations, for example). Without this, the software induced latency would be much higher than the drive media latency during an IO processing cycle, which will still contribute to an overall higher latency. (My friend & fellow #SFD12 delegate Glenn Dekhayser described this in his blog as “the media we’re working with now has become so responsive and performant that the storage doesn’t want to wait for the CPU anymore!” which is very true.)
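
To see why the software stack suddenly matters so much, here is some back-of-the-envelope arithmetic (the media and software figures are my own illustrative assumptions, not Intel’s published numbers):

```python
def software_share(media_us, software_us):
    """Fraction of total IO latency spent in the software stack."""
    return software_us / (media_us + software_us)

# Illustrative figures only: ~80us for a NAND SSD random read vs
# ~10us for a next-gen (3D XPoint class) read, with a fixed ~20us
# of kernel / context-switch / interrupt-handling overhead.
nand_share = software_share(media_us=80, software_us=20)    # 20% of total
xpoint_share = software_share(media_us=10, software_us=20)  # ~67% of total
```

With NAND the software stack is roughly a fifth of the total latency; keep the same stack but drop the media to 10µs and the software suddenly dominates at around two thirds, which is exactly the “storage doesn’t want to wait for the CPU” problem.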


Storage Performance Development Kit (SPDK)

Some companies such as Excelero are also addressing this CPU dependency of the IO processing software stack by using NVMe drives and clever software to offload processing from the CPU to NVMe drives through technologies such as RDDA (refer to the post I did on how Excelero is getting around this CPU dependency by reprogramming the MSI-X interrupts to not go to the CPU). SPDK is Intel’s answer to this problem, and whereas Excelero’s RDDA architecture primarily avoids CPU dependency by bypassing the CPU for IO, Intel SPDK minimises the impact on CPU & memory bus cycles during IO processing by running storage applications in user mode rather than kernel mode, thereby removing the need for costly context switching and the related interrupt handling overhead. According to http://www.spdk.io/, “The bedrock of the SPDK is a user space, polled mode, asynchronous, lockless NVMe driver that provides highly parallel access to an SSD from a user space application.”
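
The polled-mode idea can be sketched with a toy queue-pair model (a simulation of the concept only; real SPDK is a C framework driving actual NVMe hardware from user space): instead of sleeping until an interrupt arrives, the application busy-polls the completion queue itself, so no context switch is ever needed.

```python
class FakeNVMeQueuePair:
    """Toy stand-in for an NVMe submission/completion queue pair."""
    def __init__(self):
        self.submitted = []
        self.completions = []

    def submit(self, cmd):
        self.submitted.append(cmd)

    def process(self):
        # Real hardware consumes the submission queue asynchronously;
        # here we simply complete everything that has been submitted.
        self.completions.extend(self.submitted)
        self.submitted.clear()

def polled_io(qp, cmds):
    """SPDK-style polled mode: submit all commands, then busy-poll the
    completion queue instead of blocking on an interrupt + context switch."""
    for cmd in cmds:
        qp.submit(cmd)
    done = []
    while len(done) < len(cmds):
        qp.process()              # stand-in for hardware making progress
        while qp.completions:     # drain completions as they appear
            done.append(qp.completions.pop(0))
    return done
```

The trade-off is that a core spent polling is a core burned even when idle, which is why SPDK dedicates specific cores to IO rather than sharing them with the application.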

With SPDK, Intel claims that you can reach up to around 3.6 million IOPS per single Xeon CPU core before running out of PCI lane bandwidth, which is pretty impressive. Below is an IO performance benchmark comparing plain CentOS Linux kernel IO performance (running across 2 x Xeon E5-2965 2.10 GHz CPUs, each with 18 cores, + 1-8 x Intel P3700 NVMe SSD drives) vs SPDK with a single dedicated 2.10 GHz core allocated out of the 2 x Xeon E5-2965 for IO. You can clearly see the significantly better IO performance with SPDK which, despite having just a single core, linearly scales IO throughput in line with the number of NVMe SSD drives thanks to the lack of context switching and the related overhead.

(In addition to this testing, Jonathan also mentioned that they’ve done another test with off the shelf Supermicro HW, and with SPDK & 2 dedicated cores for IO they were able to get 5.6 million IOPS before running out of PCI bandwidth, which was impressive)
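
A quick sanity check on why PCI lane bandwidth becomes the ceiling (my own arithmetic, assuming 4KiB IOs and roughly 15-16 GB/s of usable throughput on a PCIe 3.0 x16 link):

```python
def iops_to_gbps(iops, io_bytes=4096):
    """Convert an IOPS figure at a given IO size into GB/s of throughput."""
    return iops * io_bytes / 1e9

one_core = iops_to_gbps(3.6e6)   # ~14.7 GB/s from a single dedicated core
two_cores = iops_to_gbps(5.6e6)  # ~22.9 GB/s from two dedicated cores
```

3.6 million 4KiB IOPS is already ~14.7 GB/s, so a single PCIe 3.0 x16 link is close to saturated well before the dedicated core is, which is consistent with Intel’s “ran out of PCI lane bandwidth” claim.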

 

SPDK Applications & My Thoughts

SPDK is an end-to-end reference storage architecture & a set of drivers (C libraries & executables) to be used by OEMs and ISVs when integrating disk hardware. According to Intel’s SPDK introduction page, the goal of SPDK is to highlight the outstanding efficiency and performance enabled by using Intel’s networking, processing and storage technologies together. SPDK is freely available as an open source product through GitHub. It also provides NVMeF (NVMe over Fabrics) and iSCSI servers built on the SPDK architecture, on top of the user space drivers, that are even capable of servicing disks over the network.

Now this can potentially revolutionise how the storage industry builds its next generation storage platforms. Consider, for example, any SDS or even a legacy SAN manufacturer using this architecture to optimise the CPU on their next generation All Flash storage array. Take the NetApp All Flash FAS platform for example: it is known to have a ton of software based data management services available within ONTAP that currently compete for CPU cycles with IO, and often have to scale down data management tasks during heavy IO processing. With the Intel SPDK architecture, ONTAP could free up more CPU cycles to be used by more data management services, and even double up on various other additional services too, without any impact on critical disk IO. I mean, it’s all hypothetical of course as I’m just thinking out loud here. Of course it would require NetApp to run ONTAP on Intel CPUs and Intel NVMe drives…etc, but it’s doable & makes sense, right? I mean, imagine the day where you can run “reallocate -p” during peak IO times without grinding the whole SAN to a halt? :-)
I’m probably exaggerating its potential here, but the point is that SPDK driven IO efficiencies can apply equally to all storage array manufacturers (especially all flash arrays), who can potentially start creating some super efficient, ultra low latency, NVMe drive based storage arrays and also include a ton of data management services that would previously have been too taxing on the CPU (think inline dedupe, inline compression, inline encryption, everything inline…etc.), on 24×7 by default rather than just during off peak times, thanks to zero impact on disk IO.

Another great place to apply SPDK is within virtualisation, for VM IO efficiency. Using SPDK with QEMU as follows has resulted in some good IO performance for VMs.

 

I mean, imagine for example a VMware VSAN driver built using the Intel SPDK architecture, running inside user space on a dedicated CPU core that performs all IO: what would the possible IO performance be? VMware currently performs IO virtualisation in the kernel, but imagine if SPDK was used and IO virtualisation for VSAN was moved to software running inside user space: would it be worth the performance gain and reduced latency? (I did ask the question, and Intel confirmed there is no joint engineering currently taking place on this front between the two companies.) What about other VSA based HCI solutions? Take someone like Nutanix Acropolis, where Nutanix could happily re-write the IO virtualisation to happen within user space using SPDK for superior IO performance.

An Intel & Alibaba Cloud case study in which the use of SPDK was benchmarked has given the below IOPS and latency improvements.

NVMe over Fabrics is also supported with SPDK, and some use cases were discussed, specifically relating to virtualisation where VMs tend to move between hosts; a unified NVMe-oF API that talks to local and remote NVMe drives is available now (with some parts of the SPDK stack becoming available in Q2 FY17).

Using SPDK seems quite beneficial for existing NAND media based NVMe storage, but most importantly for newer generation non-NAND media, to bring the total overall latency down. However, that does mean changing the architecture significantly to process IO in user mode as opposed to kernel mode, which I presume is how almost all storage systems, Software Defined or otherwise, work today, and I am unsure whether changing them to user mode with SPDK is going to be a straightforward process. It would be good to see some joint engineering, or other storage vendors evaluating the use of SPDK, to see if the said latency & IO improvements are realistic in complex storage solution systems.

I like the fact that Intel has made SPDK open source to encourage others to freely utilise (& contribute back to) the framework, but I guess what I’m not sure about is whether it’s tied to Intel NVMe drives & Intel processors.

If anyone wants to watch the recorded video of our session from #SFD12, the links are as follows

  1. Jonathan’s session on SPDK
  2. Tony’s session on RDT

Cheers

Chan

#SFD12 #TechFieldDay @IntelStorage

Excelero – The Latest Software Defined Storage Startup

 

As a part of the recently concluded Storage Field Day 12 (#SFD12), I had the privilege to sit in front of the engineering & CxO team of Silicon Valley’s newest (and should I say one of the hottest) storage start-ups, Excelero, to find out more about their solution offering on the launch day of the company itself. This post summarises what I’ve learnt, and my honest thoughts on what I saw and heard about their solution.

Apologies about the length of the post – Excelero has some really cool tech and I wanted to provide my thoughts in detail, in a proper context! 🙂

Enterprise Storage Market

So let’s start with a bit of context first. If you look at the total storage market as a whole, it has been growing due to the vast amounts of data being generated by everything we do (consumer as well as enterprise activities). This growth in storage requirements will, I believe, likely accelerate even faster in the future as we are going to be generating even more data, and most of that data is likely going to end up on megascalers’ cloud storage platforms (public cloud). Due to this impact from public cloud platforms such as AWS, Azure, Google Cloud…etc. that suck up most of those storage requirements, the traditional enterprise storage market (where customers typically used to own their own storage) has been very competitive and is perceived to be going through a bit of a downward trend. This has prompted a number of consolidation activities across the storage tech industry; the obvious elephant in the room is the Dell acquisition of EMC, while similar events include HPe’s recent acquisition of Nimble Storage and Dell killing off its DSSD array plans etc. So, especially in this supposedly dwindling (non cloud) enterprise storage market, continuous innovation is critical for these enterprise storage vendors in order to compete with public cloud storage platforms and demand a larger portion of this dwindling enterprise storage pie. This constant innovation, often software & hardware led rather than just hardware led, gives them the ability to provide their customers with faster, larger storage solutions and, most importantly, a differentiated storage solution offering, together with added data management technologies to meet various 21st century business requirements.

Now, as someone that works in an organisation that partners with almost all of these storage tech companies (legacy and start-ups), I know fairly well that almost every single one of the storage tech vendors is prioritising the use of NVMe flash drives (in their various form factors) as a key focus area for innovation (in most cases, NVMe in itself is their only option, without any surrounding innovation, which is poor). This was also evident during the #SFD12 event I attended, as almost all the storage vendors that presented to us touted NVMe based flash drives as their key future road map items. Even SNIA – the body for defining standards in the storage and networking industry – themselves included a number of new NVMe focused standards in their immediate and future focus areas. (Separate blog post to cover the SNIA presentation content from #SFD12 – stay tuned!)

Talking about NVMe in particular, the forecast on the NVMe technology roadmap going into the future looks somewhat like the below (image credit to www.nvmexpress.org), where the future is all focused around NVMe over Fabrics technologies that can integrate multiple NVMe drives across various hosts via a high bandwidth NVMe fabric (read “low latency, high bandwidth network”).

 

Introduction to Excelero

-Company Overview-


Excelero is a brand new Israeli tech startup, founded in 2014 in Tel Aviv with a base in Silicon Valley, that just came out of stealth on the 8th of March 2017. Despite only just coming out of stealth, as you can see they already have an impressive list of high end customers.

Their offering is primarily an SDS solution with some real innovation in the form of an efficient (& patented) software stack, engineered to exploit the latest developments in storage hardware technologies such as NVMe drives, NVMe over Fabrics (NVMesh) and RDMA technology, all working in harmony to boost NVMesh performance unlike any other storage vendor that I know of (at this point in time). They have built a patented software stack focused around RDDA (Remote Direct Data Access, which is similar to RDMA in how it operates – more details below) which bypasses the CPU (& memory, to a level) on the storage controllers / nodes / servers when it comes to storage IO processing, such that it requires zero CPU power on the storage node to serve IO. If you are a storage person, you’d know the biggest problem in the storage industry for scaling performance up is / has been the CPU power on the storage nodes / controllers, especially when you use NAND technologies such as SSD; this is why, if you look at every All Flash Array, you’ll see a ton of CPU cores operating at a very high frequency in every controller node. Excelero is conveniently getting around this issue by de-coupling storage management & data and, most importantly, using RDDA technology to bypass the storage server CPU for disk IO (which is now offloaded to a dedicated RDMA capable network card (RoCE / RNIC) such as a Mellanox ConnectX-3/4/5 card).

In a nutshell (and this is for the sales people that read this) – Excelero is a scale out, innovative, Software Defined Storage solution that is unlike any other in the market right now. They use next generation storage & networking hardware technologies to provide a low cost, extremely low latency, extremely high bandwidth storage solution for specific enterprise use cases that demand such a solution. Excelero claims that they can produce 100 million IOPS along with some obscene bandwidth throughput with little to no CPU usage on the server side (storage nodes), and I believe them, especially after having seen a scaled down demo in action. I would say they are probably one of the best, if not the best, solutions of this kind available in the market right now, and that is my honest view. Please read on to understand why!

-Use Cases-

Typical use cases Excelero initially aims to address are any enterprise business requirements looking for a high bandwidth, ultra low latency, block protocol storage solution. Some of the typical examples are as follows.


 

Technical Overview

Advanced warning:

  • If you are a SALES GUY, FORGET THIS SECTION! I’m certain it ain’t gonna work for you! Honestly, don’t waste your time. Just skip this section and read the next section 🙂
  • If you are a techie however, do carry on.

It is important first of all to understand some key concepts used within the Excelero solution and their architectural overview.

-NVMesh Architecture-

The NVMesh server SAN Architecture that Excelero has built is a key part of the Excelero software stack & given below is a high level overview.

 

NVMesh is the technology with which Excelero aggregates various remote NVMe drives into a single pool of drives that are accessible by all participating storage nodes over the NVMe fabric network, and most importantly as local drives (with local drive characteristics such as latency).

  • NVMesh design goals:
    • Performance, Scalability, Integration, Flexibility, Efficiency, Ease of use
  • Components involved
    • Control path
      • Centralised management for provisioning, management & all control activities. (All intelligence reside at this layer)
      • Runs as a Node.js application on top of MongoDB
      • Pools drives, provisions volumes and monitor
      • Transforms drives from raw storage into a pool
      • Also includes topology manager which
        • Runs on all the nodes as a distributed service
        • Implements the cluster management and HA
        • Performs volume lifecycle management
        • Uses Multi-RAFT protocols to avoid split brain, and RAIN data protection across a large node cluster
        • Communicates with the target software that runs on the storage nodes
      • RESTful API for integration with automation and external orchestration platforms such as Mesos (Kubernetes support is on the road map)
      • Docker support with a persistent storage plugin available
    • Data path
      • Kernel module that manages drives and act as a storage server for clients
      • Provides true convergence, which removes the target module from the data path (CPU) such that storage nodes can run applications and other data services on the server nodes without CPU conflict / impact (note that this is a new definition of hyper-converged compared to other HCI vendors such as VMware / Nutanix…etc.)
      • Point to point communication with other storage nodes, management & clients
      • NVMesh client
        • Intelligent client block driver (Linux) where client side storage services are implemented
        • Kernel module that presents logical volumes via the above block driver API
        • No need for iSCSI or FC as the Excelero solution is not using SCSI protocol for communication
      • NVMesh target
    • Hardware components
      • Standard X86 servers with NVMe drives (more details on HW below) & RNIC cards
      • Next generation switches
    • Software components (Excelero Intellectual Property)

Note that the current version of NVMesh (1.1) is supported on Linux only and lacks specific data services such as de-duplication & compression etc., but these services are included in the roadmap. Based on what was disclosed to us, some of these future improvements include QoS capabilities, additional drive media support, additional hardware architecture support, non-Linux OS support, additional deployment methods and reduced power configurations, as well as integration with Software Defined Networking solutions, which sounds very promising as a total solution and was very good to hear.

-RDDA (Remote Direct Data Access)-

RDDA is the patented secret sauce that Excelero has developed which is the other most important part of their solution stack. It works hand in hand with the NVMesh software stack (described above) and its primary purpose is to avoid the use of CPU on the storage nodes, when processing client IO.

The key points to note about the RDDA technology are:

  • Developed a while ago by Excelero to fill a gap that was in the market. (A new replacement for this is coming soon)
  • No CPU utilisation on the target side – This is achieved through the RDDA technology which bypasses the target side CPU
  • RDDA only works with NVMe & is highly optimised for NVMe & remote NVMe access (so if you use non NVMe drives, which is also possible, there are no RDDA capabilities)
  • However, if used in converged mode (centralised storage), RDDA is not used, so there will be an impact on the target side CPU

Now, I’m not an expert on how a typical NVMe read / write occurs and all the typical sub-protocol level steps involved. But from what I gathered based on their architect’s description, given below are the high level steps involved in Excelero’s RDDA technology when it comes to local and remote NVMe writes and reads. I’ve included this in order to explain at a high level how this technology differs from others and why the CPU is no longer necessary on the target side.

Pre-requisite knowledge: if you are unfamiliar with certain NVMe operations & concepts such as the submission queue, completion queue and doorbell, which are typically used in an NVMe I/O operation, refer to this article for a basic understanding first in order to make better sense of the below. The below image is from that article.

Now that you supposedly understand the NVMe command process, let’s have a look at Excelero’s high level implementation of RDDA (based on my understanding – the actual process may slightly differ or have many more steps / validations…etc.)

  • Each client side read or write will always result in a single RDMA write sent from the client side directly into the destination storage node. There are 3 pieces of this NVMe write that occur on the storage node.
    • A local write (if to a local NVMe disk) or a remote write (if reading or writing to a remote NVMe drive) into the NVMe drive’s submission queue. This includes any data to be written if it’s a write operation: the write data is put into local or remote memory and referenced within this submission queue entry.
    • It also writes into the RNIC’s queue (memory) a message (a memory buffer called a “bounce buffer”, which is made to point to the completion queue of the NVMe drive) to be sent back to the client (imagine a sort of pre-paid envelope). This is effectively a pre-prepared message to be sent back to the client (to be used if it’s a read request).
    • Ring the doorbell on the NVMe drive (to start executing), after which the NVMe drive on the storage node will start executing the IO operation. (Note that so far there have been no CPU operations on the storage node, as no storage side SW has been used.)
      • If it’s a read op, it will then fill the bounce buffer that was pre-prepared in the step 2 above with the data that is read from the NVMe disk.
      • If it’s a write op, it will write the data that was put in to memory (above) to the disk.
  • Once the above NVMe operation is complete, it will generate 2 things
    1. Small completion (less important part)
    2. Most importantly, it then generates an MSI-X interrupt. Typically, MSI-X interrupts are targeted at the CPU, which is why CPU utilisation on storage controllers goes up during IO cycles. Unlike a typical MSI-X interrupt, with Excelero RDDA the MSI-X interrupt is pre-programmed to not go to the CPU but instead to ring the doorbell of the RNIC. Upon this doorbell, the RNIC sends the pre-prepared bounce buffer (step 2 above), which was pointing at the completion queue of the NVMe drive (and the read data if it was a read operation; if it was a write op, it sends the completion queue details along with some other data), back to the client via the NVMe fabric, involving ZERO CPU on the storage node. This bit is Excelero’s patented technology that allows them to NOT use any CPU on the storage server side during any IO operation, no matter how big the IOPS / bandwidth.

You’ll see from this high level operation flow that during the whole IO process, no server side CPU was ever required & all the IO processing was carried out by the RNICs and the NVMe drives, with the traffic transported across the NVMe fabric (network). Note however that on the client side, CPU is used as normal (similar to any other client accessing storage). Also note that there is no concept of caching writes to memory (no NVRAM), and therefore the acknowledgement is only sent back to the client once the data is written to the NVMe drive.
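
The steps above can be condensed into a toy walk-through (purely my illustration of the flow as it was described to us; the names and data structures are hypothetical, not Excelero’s implementation):

```python
def rdda_read(request, drive):
    """Toy model of the RDDA read path: a single client-side RDMA write
    sets up the submission entry, the pre-paid 'bounce buffer' envelope
    and the doorbell; the drive and the RNIC do the rest with zero
    storage-node CPU involvement (tracked here as storage_cpu_ops)."""
    storage_cpu_ops = 0  # we expect this to stay 0 throughout

    # 1. The client's single RDMA write lands three things on the target:
    submission_queue = [request]                     # a) submission entry
    bounce_buffer = {"status": None, "data": None}   # b) pre-paid envelope
    doorbell_rung = True                             # c) drive doorbell

    # 2. The drive (not the CPU) services the submission queue and,
    #    for a read, fills the bounce buffer with the data it read.
    if doorbell_rung:
        cmd = submission_queue.pop(0)
        bounce_buffer["data"] = drive[cmd["lba"]]
        bounce_buffer["status"] = "OK"

    # 3. Completion raises an MSI-X interrupt that is pre-programmed to
    #    ring the RNIC's doorbell instead of targeting the CPU; the RNIC
    #    then ships the bounce buffer back to the client over the fabric.
    reply_to_client = bounce_buffer
    return reply_to_client, storage_cpu_ops
```

The point of the sketch is simply that every step is serviced by the drive or the RNIC; the `storage_cpu_ops` counter never moves.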

The current RNICs used within the Excelero solution are Mellanox (to be specific, Mellanox ConnectX-3/4/5), but they’ve mentioned that it could also work on QLogic RNIC cards. Excelero also indicated that they are already working with another large networking vendor for additional RNIC card support, though they didn’t mention exactly who that is (Cisco or Broadcom??)

-RAIN Architecture-

Excelero uses a concept of RAIN (similar to RAID) when it comes to provisioning volumes and providing high availability. Key details are as follows.

  • A logical volume is a RAIN data protected volume which consists of one or more partial drives (i.e. a high performance volume may include a number of drives, but only a part of each drive’s capacity may be used)
  • The key differentiator here is that, as opposed to RAID which is across local disks, RAIN is across disks from multiple hosts (similar to network RAID)
  • The current version of the solution supports RAIN10 (imagine RAID 10 over the network, but this time over the NVMe fabric)
  • Erasure coding would be available soon
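
As a rough illustration of the RAIN10 idea (my own sketch of “network RAID 10”, not Excelero’s actual placement algorithm), each chunk is striped across hosts with its mirror forced onto a different host, so losing any single host never loses both copies:

```python
from itertools import cycle

def rain10_layout(data_chunks, hosts):
    """Toy RAIN-10 placement: stripe chunks round-robin across hosts and
    mirror each chunk on the *next* host, so primary and mirror never
    share a host. Illustrative only."""
    if len(hosts) < 2:
        raise ValueError("RAIN-10 needs at least two hosts")
    layout = []
    ring = cycle(range(len(hosts)))
    for chunk in data_chunks:
        primary = next(ring)
        mirror = (primary + 1) % len(hosts)   # neighbour holds the copy
        layout.append((chunk, hosts[primary], hosts[mirror]))
    return layout
```

Because the mirror is always on a different host, this behaves like RAID 10 stretched over the NVMe fabric rather than over local disks.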

-Deployment Architecture-

The Excelero solution is supported in both local storage in application server mode (similar to Hyper-Converged, but without a hypervisor) and converged (centralised storage) mode, and each method has certain limitations (such as RDDA not being applicable in a converged deployment). You can supposedly deploy them in mixed mode too, though I don’t quite remember how that works.

 

 

Performance

Excelero boasts some obscene performance levels: 100 million IOPS with little to no CPU usage on the storage nodes, which totally seems believable. It is important to understand that Excelero doesn’t necessarily promise to deliver more IO / bandwidth than an NVMe disk manufacturer claims is possible from a raw NVMe disk, but they ensure that the maximum possible capability can be extracted from an NVMe drive with NAND storage when used with their solution. In order to do that, the Excelero NVMesh & RDDA technologies combine to present remote NVMe drives as local drives, which makes a significant difference to the latency that is possible, with no CPU penalty on the server side. No other storage vendor can provide such capability as far as I know. (Perhaps except for HPe’s prototype “Machine” project, which is supposedly looking at the use of photonics to reduce the distance between processors and persistent local and remote memory used as storage. However, this is not a shipping product and it’s doubtful whether it will ever ship; if it does, it’s likely going to cost an obscenely high amount compared to the low cost option available with Excelero.)

In order to see what’s possible, we were treated to a live demo of their platform running on a bunch of Supermicro commodity server hardware with lower-end Intel Xeon CPUs and a mix of Intel and Samsung NVMe drives, interlinked with Mellanox ConnectX-5 100Gb/s Ethernet RoCE adaptors and a Dell Z9100-ON network switch.

It’s fair to say the demo results blew our socks off!!! We had some storage industry stalwarts in our delegate panel at #SFD12 who have been in the storage industry since its enterprise inception, and every one of those guys (and myself) was grinning from ear to ear watching this demo and the stats that came out (that’s a “GOOD” thing in case it wasn’t clear :-)).

When 4 x Intel 400GB NVMe drives from 4 different hosts were used in a single storage pool and a random read operation with 4K IOs was performed, the stats we saw were around 4.9 million IOPS @ 25GB/s bandwidth, with consistently less than or around 200µs of latency – performance levels that were previously somewhat unseen!

When the write test was shown (on the same disk pool across 4 servers) with 4K writes, the results were around 2.4 million IOPS @ < 200µs latency.
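
As a quick sanity check on figures like these, bandwidth is simply IOPS multiplied by the IO size. A trivial back-of-the-envelope helper (my own, purely for checking the arithmetic):

```python
def bandwidth_gbps(iops, io_size_bytes):
    """Bandwidth in GB/s implied by a given IOPS rate and IO size."""
    return iops * io_size_bytes / 1e9

# 4.9 million 4K random reads per second:
print(round(bandwidth_gbps(4.9e6, 4096), 1))  # 20.1 GB/s
# 2.4 million 4K random writes per second:
print(round(bandwidth_gbps(2.4e6, 4096), 1))  # 9.8 GB/s
```

Pure 4K IO at 4.9 million IOPS works out to roughly 20GB/s, so the quoted 25GB/s is in the same ballpark; either way, these are numbers no host-CPU-bound architecture gets anywhere near.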

We were also shown the CPU stats during the IO operations: CPU utilisation on each storage node hovered around 0.84% throughout the entire demo, proving that the RDDA technology clearly bypasses the server side CPU, which was super impressive. These IOPS & latency figures were more than good numbers, and while I couldn’t verify this, the Excelero team mentioned to us that the server configs used for this demo only cost them $13,000 each which, if true, represents a massive potential cost saving for all future Excelero customers.

With RDDA, IO processing is now offloaded from the storage node CPU to the RNIC & the Ethernet fabric, and while their saturation points are much higher than in a traditional architecture that relies on the host CPU, the capabilities of the fabric and the RNIC cards are now likely to become the performance bottleneck. So if someone is architecting a solution using Excelero, it is pretty important to make the appropriate design choices on the fabric and the RNIC cards to make the solution future proof.
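
To illustrate that sizing point: a 100Gb/s link tops out at roughly 12.5GB/s of raw bandwidth, so you can roughly estimate how many links a target workload needs. A small sketch with an assumed protocol-efficiency factor (the 0.9 is my own illustrative assumption, not a vendor figure):

```python
import math

def links_needed(target_gb_per_s, link_gbit_per_s=100, efficiency=0.9):
    """Rough count of Ethernet links needed to carry a target bandwidth.

    `efficiency` is an assumed usable fraction after protocol overheads.
    """
    usable_gb_per_s = link_gbit_per_s / 8 * efficiency
    return math.ceil(target_gb_per_s / usable_gb_per_s)

print(links_needed(25))   # the ~25GB/s demo over 100GbE -> 3 links
print(links_needed(140))  # a 140GB/s aggregate -> 13 links (spread across nodes)
```

The point being: once the CPU is out of the IO path, the fabric maths above is what your design choices have to satisfy.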

 

Solution Licensing & Costs

We didn’t discuss much around the costs and we were not privy to list pricing, however its very likely that that the licensing & costs would be,

  • Flexible pricing, likely similar to that of a matching All Flash Array, but with Excelero providing around 20-30% more performance
  • Can be licensed per NVMe drive or per server (you are not penalised for the capacity of the drive, which is good)
    • Hyper-Converged: Storage nodes where the storage is brought local to application servers (no hypervisor involved)
    • Centralized (Converged): Separate price for storage nodes and client nodes

 

Customer Case Studies

Despite being a fresh start-up, they have some impressive customers already on board, such as NASA, PayPal, Dell, Micron, Broadcom, HPe and Intel, and most importantly they are an integral part of LinkedIn’s Open19 project (a new open standard for servers, storage & networking – I will produce a separate article on Open19 in the future; much like the OCP project, it is going to help define the datacenter of tomorrow).

In the case of NASA’s deployment of Excelero, we were made to understand that the solution is capable of doing around 140GB/s of write throughput across 128 servers, which is astonishing.

Some popular customer case studies that are publicly referenceable are as follows.

 

My Thoughts

Put simply, I like their solution offering….! A lot…!! – No wait… that’s an understatement. I absolutely love their solution offering and the level of innovation they seem to have put into it. For certain workloads that are purely performance centric and care less about advanced data services, I think the Excelero solution would be the one to beat in the industry, at least in the short term.

As of right now, if you look at all the hardware and software defined storage solutions that are generally available to purchase in the market, in my view Excelero has a unique offering when it comes to its target market. Some of that uniqueness comes from the points below:

  • They have the only virtual SAN (SDS) solution in the market today that harnesses shared NVMe (NVMe over fabrics)
  • Unified NVMe, which enables sharing storage across a network while still being accessed at local speeds and latencies
  • No CPU impact for storage IO on the storage node
  • A flexible architecture that provides hyper-converged as well as converged (disaggregated) deployments

There are a ton of SDS solutions out there in the market, some from the legacy storage vendors such as Cisco, HPe and EMC, as well as from dedicated SDS vendor start-ups (some no longer start-ups, such as Nutanix, VMware vSAN, HedVig…etc.). Typically, most of these solutions use industry standard disk drives with industry standard (x86) server hardware plus their own storage software stack on top, deployed either as a dedicated (read “centralised”) storage node or as a hyper-converged (read “de-centralised”) offering. Excelero is no different from an architectural perspective to most of these SDS vendors. However, due to its additional unique capabilities such as RDDA & NVMesh, their SDS solution stack is likely to be very attractive for most high end, ultra low latency storage requirements, where no other SDS, or even a purpose built All Flash Array, will be able to compete. This is precisely Excelero’s initial target market, and I would presume they will do very well within that segment, provided that they get their marketing message right.

However, given the current lack of advanced data services, it is unlikely that they’d replace more mature HCI or SDS offerings that are much richer in advanced data services, such as VMware VSAN and Nutanix, as well as All Flash Array offerings from the likes of NetApp (All Flash FAS or SolidFire), HPe 3Par All Flash and EMC XtremIO, when it comes to more general purpose mixed use cases such as virtualisation platforms or VDI. Having said that, you can argue that Excelero’s offering is a version 1 product right now, and future versions adding these missing advanced data services would make it equally competitive or even better. Due to the lack of CPU dependency for storage IO processing, Excelero can afford to load its storage nodes with many advanced data services, all running inline and always on, without any impact on IO performance, which is a big headache other All Flash Storage Array or SDS vendors cannot avoid. So in theory, Excelero’s storage platform could in time be even superior to what it is today.

At present, Excelero has 2 Go To Market (GTM) routes.

  1. The obvious one is direct to customers in order to address their key use case (ultra low latency, high IOPS / Throughput use cases)
  2. The other route to market is working with OEM manufacturers

While Excelero will continue to enhance their core offering available under the 1st GTM route above, I can see many other OEM vendors, such as the legacy storage vendors, wanting to license Excelero’s patented technology such as RDDA for use in their own storage systems, and this could well be quite a popular revenue stream for Excelero. In my view, having an awesome technology doesn’t necessarily ensure the survival of a tech start-up, and this 2nd GTM route may well be the obvious starting point for them, though it potentially limits the technical advantage they have in GTM 1.

Either way, they have an awesome, credible and, based on current customers, popular storage technology, and I sincerely hope the business leaders within the company will make the strategically best decisions on how to monetize this exciting technology. Given the lack of similar technology from other vendors at the moment, Excelero may have a slight advantage, but the competition is unlikely to sit and wait, as they too will likely work on similar but different architectures. Intel already hinted to us during our session at #SFD12 that there may well be a newer replacement for NAND based drives coming out soon, and NVMesh & RDDA could well be a thing of the past before you know it, so I hope my friends at Excelero act fast.

Finally, a group photo of the Excelero team that presented to us along with the #SFD12 delegates panel 🙂

Additional Reading

If you are keen to explore further into Excelero and their solutions, the obvious place to start is their web site, but if you would like to watch the recording of the presentation they gave to the #SFD12 team, it’s available here.

 

If you have any questions or thoughts, please feel free to submit a comment below

Slide credit goes to Excelero and Tech Field Day

Thanks

Chan

#SFD12 #Excelero


Storage Field Day (#SFD12) – Vendor line up

Following on from my previous post with a quick intro to Storage Field Day (#SFD12), which I was invited to attend in San Jose this week as an independent thought leader, I wanted to get a quick post out on the list of vendors we are supposed to be seeing. If you are new to what Tech Field Day / Storage Field Day events are, you’ll also find an intro in my post above.

The event starts tomorrow and I am currently at LHR waiting for my flight to SJC, and it’s fair to say I am really looking forward to attending. Part of that excitement is due to being given the chance to meet a bunch of other key independent thought leaders, community contributors and technology evangelists from around the world, as well as the chance to meet Stephen Foskett (@SFoskett) and the rest of the #TFD crew from Gestalt IT (GestaltIT.com) at the event. But most of that excitement for me is simply due to the awesome (did I say aaawwwesommmmmmeee?) list of vendors that we are supposed to be meeting with to discuss their technologies.

The full list & event agenda is as follows.

Wednesday the 8th

  • Watch the live streaming of the event @ https://livestream.com/accounts/1542415/events/6861449/player?width=460&height=259&enableInfoAndActivity=false&defaultDrawer=&autoPlay=false&mute=false
  • 09:00 – MoSMB presentation
    • MoSMB is a fully compliant, lightweight adaptation of SMB3, made available as a proprietary offering by Ryussi Technologies. In effect, it’s an SMB3 server for Linux & Unix systems. They are not a technology I had come across before, so I’m really looking forward to getting to know more about them, their offerings, their partnership with Microsoft…etc.
  • 10:00 – StarWind Presents
    • Again, a technology new to me personally, which appears to be a Hyper-Converged appliance that unifies commodity server disks and flash across multiple hypervisors. Hyper-Converged platforms are very much of interest to me and I know the industry leading offerings on this front, such as VMware VSAN & Nutanix, fairly well. So it’s good to get to know these guys too and understand what their Unique Selling Points / differentiators are compared to the big boys.
  • 13:00 – Elastifile Presents
    • The Elastic Cloud File System from Elastifile is supposed to be able to provide an application level distributed file / object system spanning private and public cloud, to provide a hybrid cloud data infrastructure. This one is again new to me, so I’m keen to understand what makes them different to other similar distributed object / storage solutions such as HedVig / Scality. Expect my analysis blog post with my initial take after I’ve met up with them!
  • 16:00 – Excelero Presents (hosted at Excelero office in the Silicon Valley)
    • These guys are a new vendor that is literally due to launch on the same day as we speak to them; effectively, they don’t quite exist yet. So it’s quite exciting to find out who they are and what they’ve got to offer in this increasingly growing, rapidly changing world of enterprise IT.
  • 19:00 – Dinner and Reception (Storage Cocktails?) with presenters and friends at Loft Bar and Bistro in San Jose
    • Good networking event with the presenters from the day for peer to peer networking and further questioning on what we’ve heard from them during the day.

Thursday the 9th of March

  • 08:00 (4pm UK time) – Nimble Storage Presents
    • Nimble are a SAN vendor that I am fairly familiar with and have known for a fairly long time, and I also have a few friends that work at Nimble UK. To be fair, I was never a very big fan of Nimble personally as a hybrid SAN vendor, as I was more of a NetApp, EMC, HPe 3Par kinda person for hybrid SAN offerings, which I’ve always thought offer the same if not better tech for roughly a similar price point, with the added benefit of being large established vendors. Perhaps I can use this session to understand where Nimble is heading now as an organisation, what differentiators / USPs they may have compared to the big boys, and how they plan to stay relevant in an industry which is generally in decline as a whole.
  • 10:45 – NetApp Presents (At NetApp head office in Silicon Valley)
    • Now I know a lot about NetApp :-). NetApp was my main storage skill in the past (still is, to a good level) and I have always been very close to most NetApp technologies from both presales and delivery perspectives, and was also awarded NetApp partner System Engineer of the Year (2013) for UK & Ireland by NetApp. However, since the proper introduction of cDOT to their portfolio, I’ve always felt like they’ve lost a little market traction. I’m very keen to listen to NetApp’s current messaging and understand where their heads are at, and how their new technology stack, including SolidFire, is going to be positioned against larger vendors such as Dell EMC and HPe 3Par, as well as all the disruption from Software Defined Storage vendors.
  • 12:45 (20:45 UK time) – Lunch at NetApp with Dave Hitz
    • Dave Hitz (@DaveHitz), one of the NetApp founders, is a legend… Nuff said!
  • 14:00 – Datera Presents
    • Datera is a high performance elastic block storage vendor and is again quite new to me. So looking forward to understanding more about what they have to offer.
  • 19:30 – San Jose Sharks hockey game at SAP Center
    • Yes, it’s an evening watching a bit of ice hockey, which I’ve never done before. To be clear, ice hockey is not one of my favourite sports, but I’m happy to take part in the event :o).

Friday the 10th of March

  • 09:00 (17:00 UK time) – SNIA Presents (@Intel Head office)
    • The Storage Networking Industry Association is a non-profit organisation made up of various technology vendor companies.
  • 10:30 (18:30 UK time) – Intel Presents (@Intel Head office)
    • I don’t think I need to explain / introduce Intel to anyone. If I must, they kinda make some processors :-). Looking forward to visiting the Intel office in the valley.

All in all, it’s an exciting line-up of some old and some new vendors, which I’m looking forward to meeting.

Exciting stuff, can’t wait…! Now off to board the flight. See you on the other side!

Chan

 

The impact of digital revolution on software licensing – Or is that the other way around?

I happened to come across the below post which, after reading, got me thinking about a few things, so I thought it would be a good idea to write a quick post and get everyone else’s thoughts too.

http://diginomica.com/2017/02/20/sap-v-diageo-important-ruling-customers-indirect-access-issues/

The article was effectively about a court battle between SAP (an enterprise SW vendor that is, by their own admission, “the market leader in enterprise application software”) and Diageo (the drinks manufacturing giant), where SAP was suing Diageo to secure additional licensing revenue for indirect use of the data produced by SAP software. If you didn’t read the full abstract via the above link, what that essentially means is that when the data SAP generates (for the legal, fee paying customer that is Diageo) is accessed by a 3rd party, presumably for providing Diageo with a service, SAP needs to be paid additional licensing revenue for that indirect usage, and that is the responsibility of Diageo.

In this case, SAP’s ability to claim additional licensing revenue from Diageo was in its contract, which is why the judge ruled in SAP’s favour (according to the article). While admitting that I am NOT a legal eagle, this brings a question to my mind: if this in fact is the final verdict on this case (and I’m sure it will be challenged in the appeals court…etc.), is this approach fair, especially in a world facing a massive digital revolution where everything, from a small electronic device, to a large multi-node machine, to a piece of software, is connected through digital technologies to relay data from one to another, with the intention of processing and re-processing that data as it is passed through each piece of software (disparate consumption)?

In my line of work (IT), almost all IT systems are interconnected, and that interconnection is typically there for one system to consume the data produced by another system; the number of hops involved in this interconnection chain can go from a couple of systems up to a few dozen, depending on how digital each customer’s environment is. In a pre “Digital Enterprise” world, all these connection hops (i.e. IT systems) typically belonged to one department, one business unit or, at worst, one organisation, and were therefore typically licensed to be used by that department / business unit / organisation (which covers all of its users).

But the digital revolution currently sweeping across all industries will extend such interconnectivity of software systems beyond a single organisation, as multiple organisations collaborate through data sharing, often in real time and across various platforms, in order to create a truly digital enterprise. Some of these types of digital integration are already commonplace, especially amongst finance sector customers…etc. Such digital connectivity of software platforms across organisations will now likely be relevant to many other organisations which previously would have thought their business operations mutually exclusive. So I guess my question is: if the software that underpins those key digital connections happened to have the same attitude to licensing that SAP did in the instance above, what would be the implications for true digital connectivity across multiple software platforms? Are we fully aware of the exact small print of each and every piece of software we use within our business, to fully understand how each one defines its permitted usage, who is classed as its users, and where we can connect it to other systems versus where we cannot? How do we know precisely that we are not violating such draconian licensing terms during this multi-platform, API driven, digital interconnectivity?

What do you think? I’m just curious to get people’s views, as I’m sure there’s no right or wrong answer. Do you think such draconian licensing rules are wrong, or would you argue that, given a dwindling market for Independent Software Vendors (courtesy of public cloud), they should be allowed to benefit from not just direct but also indirect (2nd and 3rd level) interaction with data initially produced by their software systems? If you would, do you then have sufficient licensing expertise in house to ensure that you are not violating software licensing agreements that you often sign up to without reading? If you don’t have in house knowledge, do you have a trusted partner who can advise on such licensing matters in an increasingly complex, digitally interconnected world, including cloud platforms (PaaS, SaaS), and help you achieve a truly digitally connected enterprise without paying over the odds for software licenses?

Keen to see what others think!

Cheers

Chan

#DigitalEnterprise #Connected #IoT #DigitalRevolution #Licensing

IP Expo Europe 2015 – My Take….!!

OK, it’s been a while since my last blog post…. I’ve been inundated with many things…. Anyway, I thought I’d re-awaken the blogging monster by mentioning a few words on what I thought of the opening day of the IP EXPO Europe 2015 event that I attended today (for the first time, I might add).

IP EXPO Europe is a 2 day, free to attend IT exhibition event, and the 2015 edition started today at ExCeL, London. According to their website (http://www.ipexpo.co.uk/), they claim it is “Europe’s Number ONE Enterprise IT event”. I’ve always wanted to attend this event in previous years, but couldn’t due to various other commitments. I normally attend key IT events like these throughout the year, including VMworld, NetApp Insight, Cisco Live and of course the great Insight Technology Show that is hosted by my employer Insight annually (the link to the Manchester version, which is due soon, is here), mainly to keep a tab on new tech from various vendors as well as to network with people in the industry. During previous attendances I have found some little hidden gems in terms of technologies that I never knew existed, or had heard about but never really had a chance to look into in detail. IP EXPO was missing off my list, until today that is, as I made an effort to go and see what it was about.

IP EXPO Europe this year was held at ExCeL in London (Royal Victoria Dock), and all the information about the event can be found on their website. There were different tracks you could follow (Cloud & Infrastructure, Cyber Security, Datacenter, Data Analytics, DevOps & Unified Communications), and while there were some heavyweight keynote speakers around to present sessions (which I didn’t attend, by the way), there were also the typical exhibition booths from various vendors covering these tracks. I spent all my time there going to each and every booth to find out about their technology offerings, to see if anything stood out (to me) based on what they can offer or how they meet the complex business requirements of today’s world.

Vendors

Once you walk into the exhibition hall, you are faced with a fair few vendors and their exhibition booths right in front of you (there were also some resellers and other 3rd parties too). The full list of exhibitors can be found on their website here. Though some of the big ones were there, this being labelled an Enterprise IT event (the number one at that, supposedly), a number of key enterprise tech vendors that you’d normally expect to see at neutral IT events like these (for example, the Insight Technology Show) were unfortunately absent. I normally focus on server, storage & virtualisation technologies in my day job, and key players in that arena like Cisco, Microsoft, VMware, Oracle, EMC…etc. were not present (at least they didn’t have their own stands, which they usually do). If I may say so (and this is entirely my view, btw), most if not all of the vendors present were small to medium size tech vendors (with very few exceptions) rather than the big corporate stalwarts that tend to shape technology trends. To me personally, that was a bit of a disappointment.

My Key Interests

I literally went to every exhibition stand to have a look at their tech offering, and spoke to most to see if there was any new / emerging technology, or a vendor with a technology of real interest / uniqueness, that I should take notes of. Usually, when I attend VMware’s VMworld, Cisco Live or even the Insight Technology Show every year, I come away with a number of key, previously unknown (to me) vendors that offer some key technology that can address those weird, uncommon, yet critical business or IT requirements. Unfortunately, there weren’t many of them that I came across at IP EXPO Europe today (I have to say, I was a little biased towards server, storage and virtualisation technologies rather than security or unified comms in my assessment).

The few interesting ones that were new to me, and therefore worth an individual mention, are listed below FYI.

  • Kroll Ontrack
    • URL: http://www.krollontrack.co.uk/
    • What they do: Kroll is a market leader in data recovery and seems to have some really cool capability when it comes to recovering data, be that typical consumer data from a failed or broken hard disk, or enterprise data recovery from an enterprise SAN storage platform like NetApp or even VMware VSAN. I was really keen on their experience in being able to recover data from a failed VMware VSAN cluster, which apparently was something even the VMware VSAN support team were not able to do. You can read all about it here. So much so that Kroll Ontrack is now a VMware partner, and VMware support back off extreme data recovery work to them (this was something I was led to believe by their Systems Engineer; I’m unsure how true it is). Also, they claim to be able to do metadata repairs on failed NetApp disks, and you can read about that here, which I thought was really cool. It goes without saying that you need to architect your enterprise storage systems properly such that, in the case of a planned or unplanned failure, you have the ability to restore data from backup copies, but in the unlikely event of needing to recover data from source, Kroll Ontrack seems to come in real handy.
  • Delphix
    • URL: http://www.delphix.com/
    • What they do: Provide data as a service by using software to create virtual, zero space copies of your databases, applications and file systems. This is really cool, as they’d take a copy of your database, for example (most major database types are supported, such as Microsoft SQL Server, Oracle, MySQL…etc.), onto the Delphix engine and then present virtual copies of those database files to multiple clients (think test & dev copies of the same data) without consuming additional storage underneath (if you are familiar with NetApp FlexClone technology, this is kind of the same). Not only that, but such copy provisioning can be self orchestrated through a self service portal, which could be a great use case for test & development environments (I know a number of customers off the top of my head that would benefit from this straight away). My first question to their System Engineer (yeah, I don’t waste time talking to sales people at these events, only to technical people who say it as it is) was: well, can you not use this, for example, to create VDI clones using a gold image (which would work better than linked clones, for example)? Apparently there are talks underway between the 2 companies already. Again, really interesting technology that I’d keep an eye on, which can help solve many corporate IT challenges. More details on their web site.

Other than these 2, a few others caught my eye too: Druva, a converged data protection technology for mobile and endpoint devices; A10 Networks, who offer server load balancing, global server load balancing, Application Delivery Controllers, DDoS mitigation and network management solutions; and Bomgar, who provide remote support software and Privileged Access Management solutions.

Summary – My thoughts….!

IP EXPO Europe is not of the same standard as VMworld or Cisco Live or, dare I say, even the Insight Technology Show (I might be biased on the last one), as the lack of major vendors and their stands was a little disappointing. However, attending the event could still be of value to the average IT consumer who wants to get a feel for some new and some medium to large scale technology offerings, especially if you are not a seasoned IT professional who generally tends to stay on top of most things relating to IT. The DevOps track in particular was pretty good, I thought, as they had most of the key DevOps vendors out, such as Puppet, Chef…etc., and those DevOps technologies are very relevant to the current IT industry, where automation and orchestration are in high demand right now. So you’d benefit from visiting those stands and talking to the SEs available. It is also a free to attend event (similar to the Insight Technology Show), which must be mentioned here, as not many folks can afford to send their IT staff to expensive paid-for events such as VMworld / Microsoft TechEd / Cisco Live. It would also be good if you are focused on IT security, as the event had far more security vendors (or someone with a technology relating to IT security in some shape or form) than any other vendor type, including some of the big players such as F5, Palo Alto & BAE. (I guess the Edward Snowden factor is still playing out in full swing in people’s minds….!)

Would I come back to attend next year? Well, I’m not sure…. I personally didn’t think I learned or achieved anywhere near what I expected (based on what I’m used to getting from attending other similar events). I guess I’ll have to see which vendors make up the exhibitors list next year and then decide. Based on today’s attendance, I definitely do not agree with their self proclaimed status as “Europe’s number 1 IT event”.

If you do attend this year’s IP EXPO Europe and find anything interesting, please feel free to post a comment…

Cheers

Chan