VMworld 2017 US – VMware Strategy & My Thoughts

This is a quick post to summarise the key announcements from the VMworld 2017 US event and to share my thoughts on VMware's strategy and direction, the way I see it.

Key Announcements

A number of product and solution announcements were made during the week; below is a high-level list to recap.

  • Announced the launch of VMware Cloud Services, which consists of 2 main components
    • VMware Cloud on AWS (VMC)
      • Consists of VMware vSphere + vSAN + NSX
      • Running on AWS data centers (bare metal)
      • A complete Public Cloud platform consisting of VMware Software Defined Data Center components
      • Available as an on-demand service, sold, operated & supported by VMware
    • A complete hybrid cloud infrastructure security, management, monitoring & automation solution made available through a Software as a Service (SaaS) platform
      • Works natively with VMware Cloud on AWS
      • Also works with legacy, on-premises VMware data centers
      • Also works with native AWS, Azure and Google public cloud platforms
  • Next-generation network virtualisation solution, NSX-T (aka NSX Multi-hypervisor)
    • Version 2.0 announced
    • Supports vSphere & KVM
    • Likely going to be strategically more important to VMware than NSX-v (the vSphere-specific NSX commonly used today by vSphere customers). Think of what ESXi was to VMware in the early days, when ESX was still around!

  • Next version of vRealize Network Insight (version 3.5) released
    • Various cloud platform integrations
    • Additional on-premises 3rd party integrations (Check Point FW, HP OneView, Brocade MLX)
    • Support for additional NSX component integration (IPFIX, Edge dashboard, NSX-v DFW PCI dashboard)

 

  • VMware AppDefense
    • A brand new application security solution, available via a VMware Cloud Services subscription

 

  • VMware Pivotal Container Service (PKS) announced as a joint collaboration between VMware, Pivotal & Google (Kubernetes)
    • Kubernetes support across the full VMware stack including NSX & vSAN
    • Support for serverless capabilities using Functions as a Service (similar to AWS Lambda or Azure Functions)
    • Persistent storage for stateful applications via the vSphere Cloud Provider, which provides access to vSphere storage powered by vSAN or traditional SAN and NAS storage (see the sketch after this list)
    • Automation and governance via vRealize Automation, and provisioning of service provider clouds with vCloud Director
    • Monitoring and troubleshooting of virtual infrastructure via VMware vRealize Operations
    • Metrics monitoring of containerized applications via Wavefront
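
To make the persistent-storage point above concrete, here is a minimal sketch (my own illustration, not official PKS code) of requesting vSphere-backed persistent storage for a stateful container through the Kubernetes API, assuming a cluster where the vSphere Cloud Provider is configured and a StorageClass (here named "vsan-default", a hypothetical name) maps to a vSAN storage policy:

```python
# Minimal sketch: dynamically provisioning vSphere/vSAN-backed storage for a
# stateful container via the Kubernetes Python client. The StorageClass name
# "vsan-default" is a hypothetical example.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsan-default",  # assumed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

# The vSphere Cloud Provider satisfies the claim by provisioning a VMDK on
# vSAN (or on traditional SAN/NAS-backed datastores).
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```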

 

  • Workspace One enhancements and updates
    • Single UEM platform for Windows, macOS, Chrome OS, iOS and Android
    • Integration with unique 3rd party endpoint platform APIs
    • Offers cloud-based peer-to-peer software distribution to deploy large apps at scale
    • Support for managing Chrome devices
    • Provides customers the ability to enforce & manage O365 security policies and DLP alongside all of their applications and devices
    • Workspace ONE Intelligence to provide insights and automation to enhance user experience (GA Q4 FY18)
  • VMware Integrated OpenStack 4.0 announced
    • OpenStack Ocata integration
    • Additional features include
      • Containerized apps alongside traditional apps in production on OpenStack
      • vRealize Automation integration to enable OpenStack users to use vRealize Automation-based policies and to consume OpenStack components within vRealize Automation blueprints
      • Increased scale and isolation for OpenStack clouds enabled through new multi-VMware vCenter support
    • New pricing & packaging tier (no longer free)
  • VMware Skyline
    • A new proactive support offering aligned to global support services
    • Available to Premier support customers (North America initially)
    • Requires an appliance deployment on-premises
    • Quicker time to incident resolution

Cross Cloud Architecture Strategy & My Thoughts

VMware announced the Cross Cloud Architecture (CCA) back at VMworld 2016, setting out a vision of enabling customers to run & manage any application, on any cloud, using any device. This was ambitious and was seen as the first step towards VMware recognising that running vSphere on-premises should no longer be its main focus, and that it wants to provide customers with choice.

The platform options offering that choice were to be:

  • Continue to run vSphere on-premises if that is what you want to do
  • OR, run the same vSphere-based SDDC stack on the cloud, spun up in minutes in a fully automated way (IaaS)
  • OR, run the workloads that used to run on a VMware SDDC platform on a native public cloud platform such as AWS, Azure, Google Cloud or IBM Cloud

During that VMworld, VMware also demoed NSX bridging these various private and public cloud platforms by cleverly using NSX to extend networks across all of them. VMworld 2017 has shown the additional steps VMware have taken to make this cross cloud architecture even more of a reality. VMware Cloud on AWS (VMC) now lets you spin up a complete VMware Software Defined Data Center, running vSphere on vSAN connected by NSX, through a simple web page, much like the way the native Azure and AWS infrastructure platforms let you provision VM-based infrastructure on demand. Based on some initial articles, this could even be cheaper than running vSphere on-premises, which is great news for customers. When you factor in the rest of the Total Cost of Ownership, such as no longer needing to maintain on-premises skills to set up and manage infrastructure platforms, the VMC platform is likely going to be extremely interesting to most customers. Most importantly, most customers will NOT need to go through costly re-architecting of their monolithic application estate to fit a native cloud IaaS platform, which simplifies cloud migration of that application stack. And if that is not enough, you can also carry on managing & securing those workloads with the same VMware management and security toolset, even in the cloud.

When you then consider the announcement of the VMware Cloud Services (VCS) offering as a SaaS solution, it now becomes possible to integrate a complete VMware hybrid cloud management toolset into various platforms and workloads, irrespective of where they reside. VCS enables the discovery, monitoring, management and securing of those workloads across different platforms, all through a single pane of glass – a pretty powerful message that no other public cloud provider can claim to match in such a heterogeneous manner. This holistic management and security platform allows customers to provision, manage and secure any workload (monolithic or microservices-based) on any platform (vSphere on-premises, VMC on AWS, native AWS, native Azure, native Google Cloud), accessed from any device (workstation, laptop, tablet or mobile). That, to me, is the true Cross Cloud vision becoming a reality, and my guess is that once the platform matures and its capabilities increase, it is going to be very popular amongst almost all customers.

In addition to these CCA capabilities, VMware appear to be shifting their focus from the infrastructure layer (read "virtual machine") to the actual application layer, concentrating more on enabling application transformation and application security, which is great to see. Like many others, VMware are embracing containers, not only as a better application architecture but also as the best way to decouple the application from the underlying infrastructure, using containers as a shipping mechanism to move applications to the public cloud (& back). The announcement of various integrations between their infrastructure stack and the container ecosystem, such as Kubernetes, testifies to this and will likely be welcomed by customers. I'd expect such integration to continue to improve across all of VMware's SDDC infrastructure stack. With VMware solutions, you can now deploy container-based applications on on-premises vSphere using VIC or Photon (or on VMC, or a native public cloud platform), store them on vSAN with volume plugins on-premises or in the cloud, extend the network to the container instance via NSX (on-premises or in the cloud), extend visibility into the container instance via vRNI and vROps (on-premises or in the cloud), and automate provisioning or, most importantly, migration of these container apps across on-premises and public cloud platforms as you see fit.

NSX Cloud, for example, will let you extend all the unique capabilities of software defined networking, such as micro-segmentation, security groups and overlay network extensions, not just within private data centers but also to native public cloud platforms such as AWS & Azure (roadmap), which enriches the capabilities of a public cloud platform and increases the security available within the network.

My Thoughts

All in all, it was a great VMworld where VMware genuinely showcased their Hybrid Cloud and Cross Cloud Architecture strategy. As a technologist who has been working with VMware for a while, it was pretty obvious that a software-centric organisation like VMware, similar to the likes of Microsoft, was always going to embrace changes, especially changes driven by software such as the public cloud. However, many people, especially sales people in the industry I work in, as well as some customers, were starting to worry about the future of VMware and its relevance in the increasingly cloudy world ahead. This VMworld has shown all of them that VMware has a very good working strategy to embrace software-defined cloud adoption and to empower customers with the choice to do the same, without being tied to a specific cloud platform. The soaring, all-time-high VMware share price is testament that analysts and industry experts agree.

If I was a customer, I would want nothing more!

Keen to get your thoughts – please share them via the comments below.

Other Minor VMworld 2017 (Vegas) Announcements

  • New VMware & HPe partnership for DaaS
    • Adds Workspace ONE to HPe DaaS
    • Adds Unified Endpoint Management through AirWatch
  • Dell EMC to offer data protection to VMC (VMware Cloud on AWS)
    • Include Data Domain & Data protection app suite
    • Self-service capability
  • VCF related announcements
    • CenturyLink, Fujitsu & Rackspace to offer VCF + Services
    • New HCI and CI platforms (VxRack SDDC, HDS UCP-RS, Fujitsu PRIMEFLEX, QCT QxStack)
    • New VCF HW partners
      • Cisco
      • HDS
      • Fujitsu
      • Lenovo
  • vCloud Director v9 announced
    • GA Q3 FY18
  • New vSphere scale-out edition
    • Aimed at Big data and HPC workloads
    • Attractive price point
    • Big data specific features and resource optimisation within vSphere
    • Includes vDS
  • VMware Validated Design (VVD) 4.1 released
    • Include a new optional consolidated DC architecture for small deployments
  • New VMware and Fujitsu partnerships
    • Fujitsu Cloud Services to deliver VMware Cloud Services
  • DXC Technology partnership
    • Managed Cloud service with VMC
    • Workload portability between VMC, DXC DCs and customer’s own DCs
  • Re-announced VMware Pulse IoT Center, with further integration into the VMware solution stack to manage IoT components

 

Cheers

Chan

Introduction To VMware AppDefense – Application Security as a Service

Yesterday at VMworld 2017 US, VMware announced the launch of AppDefense. This post is a quick introduction, taking a closer look at what it is, along with my initial thoughts on it.

AppDefense – What is it?

AppDefense is a solution that uses the hypervisor to introspect guest VM application behaviour. It involves analysing the application's behaviour (within the guest VM) to establish its normal operational behaviour (the intended state) and, once that is verified to be accurate, constantly measuring the future state of the application against the intended state & least-privilege posture, controlling / remediating its behaviour should non-conformance be detected. The aim is to increase application security by detecting infiltrations at the application layer and automatically preventing the propagation of those infiltrations until remediation.

AppDefense is a cloud-hosted, managed solution (SaaS) from VMware, hosted on AWS (https://appdefense.vmware.com) and managed by VMware, rather than an on-premises monitoring & management solution. It is a key part of the SaaS solution stack VMware also announced yesterday, VMware Cloud Services. (A separate, detailed post to follow about VMware Cloud Services.)

If you know VMware NSX, you know that NSX provides a least-privilege execution environment that prevents attacks, or the propagation of security attacks, by enforcing least privilege at the network level (micro-segmentation). AppDefense adds an additional layer by enforcing the same least-privilege model at the actual application layer within the VM's guest OS.

AppDefense – How does it work?

The stages employed by AppDefense in identifying and providing application security consist of the following high-level steps (based on what I understand as of now); a toy sketch of the detection idea follows the list.

  1. Application baselining (intended state): Automatically identifying the normal behaviour of an application and producing a baseline for it based on its "normal" behavioural patterns (the intended state). This intended state can come from analysing normal, un-infected application behaviour within the guest, or even from external application state definition platforms such as Puppet etc. Pretty cool, I think!
  2. Detection: AppDefense then constantly monitors the application behaviour against this baseline to see if there are any deviations that could amount to potentially malicious behaviour. If any are detected, AppDefense will either block those alien application activities or automatically isolate the application using hypervisor constructs, in a similar manner to how NSX & 3rd party AV tools auto-isolate guests using guest introspection and heuristic analysis. AppDefense uses an in-memory process anomaly detector rather than taking a hash of the VM file set (which is often how 3rd party security vendors work), which is going to be a unique selling point in comparison to typical AV tools. An example demo shown by VMware featured an application server that ordinarily talks to a DB server over a SQL Server ODBC connection; once protected by AppDefense, any other form of direct connectivity from that app server to the DB server (say a PowerShell query or a script running on the app server) is automatically blocked, even if it happens to use the same port that is already permitted. That was pretty cool if you ask me.
  3. Automated remediation: Similar to the above, it can then take remediation action to automatically prevent propagation.
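
As a toy illustration of the detection stage described above (my own sketch, not VMware's implementation), the "intended state" model boils down to learning a whitelist of allowed behaviours and flagging anything outside it, even on an already-permitted port:

```python
# Toy whitelist-style anomaly detection: learn a baseline of allowed
# (process, destination, port) behaviours, then flag any deviation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Behaviour:
    process: str
    destination: str
    port: int


# Intended state captured during the learning phase (hypothetical values)
baseline = {
    Behaviour("app_server.exe", "db01.corp.local", 1433),
    Behaviour("w3wp.exe", "db01.corp.local", 1433),
}


def check(observed: Behaviour) -> str:
    """Return an action for an observed behaviour, measured against the baseline."""
    if observed in baseline:
        return "allow"
    # Same destination and port as a permitted flow, but a different process:
    # still blocked, mirroring the PowerShell-to-DB example in the demo above.
    return "block-and-alert"


# PowerShell talking to the DB on the already-permitted SQL port gets flagged.
print(check(Behaviour("powershell.exe", "db01.corp.local", 1433)))  # block-and-alert
```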

 

AppDefense Architecture

AppDefense, despite being a SaaS application, will work with cloud (VMware Cloud on AWS) as well as on-premises environments. An on-premises proxy appliance will act as the broker. Future roadmap items include extending capabilities to non-vSphere as well as bare-metal workloads on-premises. There will be an agent deployed into the VMs (a guest agent) that runs inside a secure memory space to ensure its authenticity.

For the on-premises version, vCenter is the only mandatory prerequisite, whereas NSX Manager and vRA are optional and only required for remediation and provisioning. (No current plans for the Security Manager to be available on-site, yet.)

AppDefense Integration with 3rd parties*

  • IBM Security:
    • AppDefense plans to integrate with IBM’s QRadar security analytics platform, enabling security teams to understand and respond to advanced and insider threats that cut across both on-premises and cloud environments like IBM Cloud. IBM Security and VMware will collaborate to build this integrated offering as an app delivered via the IBM Security App Exchange, providing mutual customers with greater visibility and control across virtualized workloads without having to switch between disparate security tools, helping organizations secure their critical data and remain compliant.
  • RSA:
    • RSA NetWitness Suite will be interoperable with AppDefense, leveraging it for deeper application context within an enterprise’s virtual datacenter, response automation/orchestration, and visibility into application attacks. RSA NetWitness Endpoint will be interoperable with AppDefense to inspect unique processes for suspicious behaviors and enable either a Security Analyst or AppDefense Administrators to block malicious behaviors before they can impact the broader datacenter.
  • Carbon Black:
    • AppDefense will leverage Carbon Black reputation feeds to help secure virtual environments. Using Carbon Black’s reputation classification, security teams can triage alerts faster by automatically determining which behaviors require additional verification and which behaviors can be pre-approved. Reputation data will also allow for auto-updates to the manifest when upgrading software to drastically reduce the number of false positives that can be common in whitelisting.
  • SecureWorks:
    • SecureWorks is developing a new solution that leverages AppDefense. The new solution will be part of the SecureWorks Cloud Guardian™ portfolio and will deliver security detection, validation, and response capabilities across a client’s virtual environment. This solution will leverage SecureWorks’ global Threat Intelligence, and will enable organizations to hand off the challenge of developing, tuning and enforcing the security policies that protect their virtual environments to a team of experts with nearly two decades of experience in managed services.
  • Puppet:
    • Puppet Enterprise is integrated with AppDefense, providing visibility and insight into the desired configuration of VMs, assisting in distinguishing between authorized changes and malicious behavior.

*Credit: VMware AppDefense release news

Having spoken to the product managers, my guess is that these partnerships will grow to include many more security vendors as the product evolves.

 

Comparison to competition

In comparison to 3rd party AV tools with heuristic analysis that do similar anomaly detection within guests, VMware AppDefense is supposed to have a number of unique selling points: the ability to understand distributed application behaviour better than the competition to reduce false positives, the ability to not just detect but also orchestrate remediation (through the use of vRA and NSX), as well as the near-future roadmap of using machine learning to enhance anomaly detection within the guest, which is pretty cool.

Understanding the “Intended state”

The intended state can come from information collected from various data center state definition tools such as vCenter, Puppet, vRealize Automation & other configuration management solutions, as well as developer workflows such as Ansible, Jenkins etc.

The AppDefense agent (which runs in the guest OS) runs in a protected memory space within the guest (via the hypervisor) to store the security controls in a tamper-proof manner (a secure runtime). Any attempts to intrude into this space are detected and actioned upon automatically. While this is secure, it's not guaranteed at the HW layer (think HyTrust, which uses Intel CPU capabilities such as TXT to achieve a HW root of trust), though I suspect this will inevitably come down the line.

 

AppDefense – My (initial) Thoughts

I like the sound of it and its capabilities based on what I've seen today. Obviously it's a SaaS-based application, and some people may not like that for monitoring and enforcing security, especially on an on-premises environment, but if you can get over that mindset, this could potentially be quite good. If you use VMware Cloud Services, especially VMware Cloud on AWS, this has direct integration with that platform to enforce application-level security, which could be quite handy. As with all products, however, the devil is normally in the detail, and this version has only just been released, so the details available are too scarce to form a detailed & accurate opinion. I will be aiming to test this out in detail soon, both with VMware Cloud on AWS and with the on-premises VMware SDDC stack, and provide some detailed insights. Furthermore, it's a version 1.0 product and, realistically, most production customers will likely wait until it is battle-hardened and richer in capabilities (such as hardware root-of-trust support) before using it for key production workloads.

Until then, it's great to see VMware focusing more on security in general and building in native, differentiated security capabilities at the application layer, which is equally as important as security at the infrastructure layer. I'm sure the product will evolve to incorporate things such as AI & machine learning to provide more sophisticated preventive measures in the future. The ability to take static application / VM state definitions from external platforms like Puppet is really useful, and I suspect that is probably where this will be popular with customers, at least initially.

Slide credits go to VMware!

Cheers

Chan

VMworld 2017 – vSAN New Announcements & Updates

During VMworld 2017 Vegas, a number of vSAN-related product announcements have been made, and I was privy to some of them a little earlier than the general public due to being a vSAN vExpert. I've summarised them below. The embargo on disclosing the details lifts at 3pm PST, which is when this blog post is scheduled to go live automatically. So enjoy! 🙂

vSAN Customer Adoption

As some of you may know, the popularity of vSAN has been growing for a while now as a preferred alternative to legacy SAN vendors when it comes to storing vSphere workloads. The adoption stats VMware shared (not reproduced here) somewhat confirm this growth. I can testify to this personally too, as I've seen a similar increase in the number of our own customers that consider vSAN the default choice for storage now.

Key new Announcements

New vSAN based HCI Acceleration kit availability

This is a new ready-node program announced with some OEM HW vendors to provide distributed data center services for edge computing platforms. Consider it to sit somewhere between the vSAN RoBo solution and a full-blown main data center vSAN solution. Highlights of the offering are as follows.

  • 3 x Single socket servers
  • Include vSphere STD + vSAN STD (vCenter is excluded)
  • Launch HW partners limited to Fujitsu, Lenovo, Dell & Super Micro only
  • 25% default discount on list price (on both HW & SW)
  • $25K starting price

  • My thoughts: Potentially a good move and an interesting option for those customers who have a main DC elsewhere or are primarily cloud-based (including VMware Cloud on AWS). The practicality of vSAN RoBo was always hampered by the fact that it's limited to 25 VMs on 2 nodes. This should slightly increase that market adoption; however, the key decision will be the pricing. Noticeably, HPe are absent from the initial launch, but I'm guessing they will eventually sign up. Note that you have to have an existing vCenter license elsewhere, as it's not included by default.

vSAN Native Snapshots Announced

A tech preview of native vSAN data protection capabilities through snapshots has been announced, with general availability likely in FY18. vSAN native snapshots will have the following characteristics (a quick bit of arithmetic on the RPO and snapshot cap follows the list).

  • Snapshots are all policy-driven
  • 5-minute RPO
  • 100 snapshots per VM
  • Support for data efficiency services such as dedupe, as well as protection services such as encryption
  • Archival of snapshots to secondary object or NAS storage (no specific vendor support required) or even cloud (S3?)
  • Replication of snapshots to a DR site.
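
A quick bit of arithmetic (my own, not from VMware) on what those two numbers imply together, and why the archival option above matters:

```python
# At the maximum 5-minute RPO, the 100-snapshot-per-VM cap bounds how far
# back you can roll before older snapshots must be archived or thinned out.
rpo_minutes = 5
max_snapshots_per_vm = 100

history_hours = rpo_minutes * max_snapshots_per_vm / 60
print(f"~{history_hours:.1f} hours of point-in-time history")  # ~8.3 hours
```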

  • My thoughts: This was a hot request and something that was a long time coming. Most vSAN solutions need a 3rd party data center backup product today, and often SAN vendors used to provide this type of snapshot-based backup solution from the array (the NetApp SnapManager suite, for example) that vSAN couldn't match. Well, it can now, and since it's done at the SW layer it's array-independent: you can replicate or archive the snapshots anywhere, even in the cloud. For lots of customers with a smaller or point use case, this would be more than sufficient to avoid buying backup licenses elsewhere to protect that vSphere workload. This is likely going to be popular. I will be testing it in our lab as soon as the beta code is available, to ensure the snaps don't carry a performance penalty.

 

vSAN on VMware Cloud on AWS Announced

Well, this is not massively new, but vSAN is a key part of VMware Cloud on AWS: the vSAN storage layer provides all the on-premises vSAN goodness while also enabling DR to VMware Cloud (using snapshot replication) with orchestration via SRM.

 

vSAN Storage Platform for Containers Announced

Similar to the NSX-T announcement with K8s (Kubernetes) support, vSAN also provides persistent storage to both Kubernetes and Docker container instances in order to run stateful containers.

 
This capability came from the VMware open-source project code-named Project Hatchway, and it is freely available via GitHub (https://vmware.github.io/hatchway/) now.
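
As a rough sketch of what Hatchway enables on the Docker side (my illustration; the plugin driver name and options here are assumptions, so check the Hatchway docs), a stateful container can be given a vSphere-backed volume like this:

```python
# Sketch: creating a persistent, vSphere-backed Docker volume with the Docker
# SDK for Python. Assumes the vSphere Docker Volume Service plugin from
# Project Hatchway is installed on the Docker host and ESXi; the driver name
# and options are illustrative.
import docker

client = docker.from_env()

# Behind this volume, the plugin provisions a VMDK on vSAN (or another datastore).
vol = client.volumes.create(
    name="db-data",
    driver="vsphere",              # assumed plugin driver name
    driver_opts={"size": "10gb"},  # illustrative option
)

# Run a database container whose state survives container restarts and moves.
client.containers.run(
    "postgres:9.6",
    detach=True,
    mounts=[docker.types.Mount(target="/var/lib/postgresql/data",
                               source=vol.name, type="volume")],
)
```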

  • My thoughts: I really like this one, and the approach VMware are taking to make the product set more and more microservices (container-based application) friendly. This capability, coming from the open-source Project Hatchway mentioned above, will likely be popular with many. The code was supposed to be on GitHub, as this is an open-source project, but I have not been able to see anything within the VMware repos on GitHub yet.

 

So, all in all, not very many large or significant announcements for vSAN from VMworld 2017 Vegas (yet), but this is to be expected, as the latest version, vSAN 6.6.1, was only recently released with a ton of updates. The key takeaway for me is that the popularity of vSAN is clearly growing (well, I knew this already anyway), and the current and future announcements are going to make vSAN a fully fledged SAN / NAS replacement for vSphere storage, with more and more native security, efficiency and availability services, which is great for customers.

Cheers

Chan

 

VMworld 2017 – Network & Security Announcements

As an NSX vExpert, I was privy to an early-access preview of some of the NSX updates due to be announced during VMworld 2017 Vegas. This is a summary of those announcements; while I prepped the post before they were made public, by the time this post is published and you read it, it will all be public knowledge.

Growing NSX market

  • NSX customer momentum is very good, with around 2,600+ new NSX customers in Q2, twice that of a year ago.
  • Typical customers come from all sectors: service providers, healthcare, finance, technology, the public sector etc.
  • Typical use cases fall into Security (micro-segmentation, EUC & DMZ anywhere), Automation and App Continuity (DR, cross-cloud)

Since I work for a VMware partner (reseller), I can relate to these points first hand, as I see the same kinds of customers and use cases being explored by our own customers, so these appear to be valid, credible market observations.

Network & Security New Announcements (VMworld 2017)

Given below are some of the key NSX-related announcements made today at VMworld 2017 Vegas.

  • NSX-T 2.0 release
    • On-premises network virtualisation for non-vSphere workloads
    • Cloud Native App framework (K8 integration)
    • Cloud integration (Native AWS platform)
  • VMware Cloud Services
    • A number of new SaaS solution offerings available from VMware
    • The list of initial solutions available as SaaS offerings at launch includes
      • Discovery
      • Cost insights
      • Wavefront
      • Network insight
      • NSX Cloud & VMware Cloud on AWS
        • Well, this is not news, as it has been pre-announced already
      • VMware AppDefense
        • A brand new data center endpoint security product
      • Workspace One (Cloud)
  • vRealize Network Insight 3.5
    • vRNI finally expands to the cloud, with additional on-premises 3rd party integrations

NSX-T 2.0 release (For secure networking in the cloud)

So this is VMware’s next big bet. Simply because it is NSX-T that will enable the future of VMware’s cross cloud capabilities by being compatible with all other none vSphere platforms including public cloud platforms to extend the NSX networking constructs in to those environments. Given below are the highlights of NSX-T 2.0 announced today.

  • On-premise automation and networking & Security (Support for KVM)
    • Multi-Domain networking
    • Automation with OpenStack
    • Micro-Segmentation
  • Cloud Native Application Frameworks
    • VMs and containers
    • CNI plugin integration for Kubernetes
    • Micro-segmentation for containers / microservices via NSX-T 2.0 (roadmap)
    • Monitoring & analytics for containers / microservices via NSX-T 2.0 (roadmap)
  • Public Cloud Integration
    • On-prem, Remote, Public & Hybrid DCs
    • Native AWS with VMware secure networking – Extends out NSX security constructs to legacy AWS network

NSX-T integrates with Container as a Service (Kubernetes, for example) and Platform as a Service (AWS native networking) components via an NSX container plugin (the architecture diagram is not reproduced here).

VMware Cloud Services (VCS)

This is a much-anticipated announcement: NSX secure networking capabilities will be offered as a hosted service (PaaS) from VMware, known as the "VMware Cloud Services portal". This will integrate with various platforms, such as your on-premises private cloud environment, or even public cloud platforms such as native AWS (available now) or Azure (roadmap), by automatically deploying the NSX-T components required on each cloud platform (NSX Manager, NSX Controllers & the Cloud Services Manager appliance etc.). This is TOTALLY COOL and unlike any other solution available from anyone else.

As a whole, VCS will provide the following capabilities on day 1 as a SaaS / PaaS offering, across the full hybrid cloud stack.

  • Discovery
  • Networking Insights
  • Cost management / insight
  • Secure Networking (via NSX Cloud)

The beauty of this offering is that all the native NSX capabilities, such as distributed routing, micro-segmentation, distributed switching, data encryption and deep visibility into network traffic, will be extended out to any part of the hybrid cloud platform. This enables you to manage, monitor and define & enforce new controls through a single set of security policies, defined on a single pane of glass via the Cloud Services portal. For example, a distributed firewall policy that blocks app servers from talking directly to DB servers, once defined, will apply to the VMs whether they are on-premises on vSphere, moved to KVM, or residing on AWS (VMware Cloud or the native AWS platform) – a rough sketch of how such a policy might be pushed via the API follows below. All of this becomes possible through the integration of NSX-T with the cloud platforms, which enables additional services using a network overlay. In the case of public cloud, this provides capabilities that are not natively available on the cloud platform, which is awesome!
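
As a rough sketch of that "define once, enforce anywhere" idea (the endpoint and payload shape follow the NSX-T 2.x REST API as I understand it; the field values are illustrative, not authoritative), a distributed firewall section could be pushed to the NSX-T manager like this:

```python
# Illustrative only: pushing a distributed firewall section to an NSX-T
# manager over its REST API. Hostname, credentials and payload fields are
# placeholder assumptions.
import requests

NSX_MANAGER = "https://nsx-mgr.corp.local"  # hypothetical manager address
AUTH = ("admin", "example-password")        # use real credentials / certs

section = {
    "display_name": "App-to-DB",
    "section_type": "LAYER3",
    "stateful": True,
    "rules": [
        {
            "display_name": "Block app servers talking direct to DB",
            "action": "DROP",
            # Source/destination would reference grouping objects (NSGroups)
            # containing the app and DB VMs; IDs omitted for brevity.
        }
    ],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/firewall/sections",
    params={"action": "create_with_rules"},
    json=section,
    auth=AUTH,
    verify=False,  # lab only; validate certificates in production
)
resp.raise_for_status()
print(resp.json()["id"])  # id of the newly created firewall section
```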

I cannot wait for VCS to have Azure integration too, which I believe is on the roadmap.

I will be doing a detailed post on VCS soon.

VMware Cloud on AWS

Well, this is NOT news anymore, as it was announced last year during VMworld 2016, and since then technical previews and a public beta of the platform have been available. So I'm not going to cover it in detail, as there are plenty of overview posts out there.

However, the main announcement today on this front is that it is NOW (FINALLY) GENERALLY AVAILABLE for customers to subscribe to. Yay!! The minimum subscription is based on 4 nodes (a vSAN requirement).

 

VMware AppDefense

A brand new offering announced today that extends the security layer to the applications. This is pretty cool too, and I'm not going to cover the details here; instead, I will be producing a dedicated post on it later this week.

Summary

All of these announcements are implementation milestones of the VMware Cross Cloud Architecture (CCA) initiative that VMware announced during last year's VMworld, in which VMware will enable workload mobility between on-premises and various public cloud platforms, primarily through the use of NSX to extend the network fabric across all platforms. Customers can build these components out themselves, or they can simply consume them via a hosted PaaS. I am very excited for all my customers, as these capabilities will help you harness the best of each cloud platform and new technology innovations such as containers, without losing the end-to-end visibility, management capabilities and security of your multi-cloud infrastructure platform.

Cheers

Chan

Heading to VMworld 2017 in Vegas

For the 2nd year running, I've been extremely lucky to be able to attend VMware's premier technology roadshow, VMworld, in the city that never sleeps. This is my 6th consecutive VMworld, having attended the 2012-2015 events in Barcelona and the 2016 event in Vegas. Similar to last year, I've been extremely lucky to be selected and invited by VMware, thanks to my vExpert status, to attend the event free of charge as an official VMworld blogger. (Also, well done to my fellow Insight teammate & vExpert Kyle Jenner for being picked to attend VMworld 2017 Europe as an official blogger too.) Obviously we are both very lucky to have an employer who values our attendance at such industry events and is happy to foot the bill for other expenses such as logistics, which is also appreciated. So thanks, VMware & Insight UK.

I attended VMworld 2016 in Vegas too and, to be honest, in hindsight that was probably not the best event to attend that year, as all the new announcements were reserved for the European edition a month later. This year, however, the word on the street is that VMworld US will carry the majority of the new announcements, so I am very excited to find out about them before anyone else!

VMworld 2017 Itineraries

Most people attending VMworld, or any similar tech conference overseas, would typically travel a few days early or stay behind a few days after the event to explore the surroundings. Unfortunately for me and my inexplicable dedication to playing league cricket between April and September, I am only able to travel out on Sunday the 27th, after the game on Saturday. Similarly, I have to get back immediately after the event, in time for the following Saturday's game. Silly, you might think! I'd tend to agree.

  • Travel out: Sunday the 27th of August from Manchester to Las Vegas (Thomas Cook – direct flight)
  • Accommodation: Delano Las Vegas (next door to event venue which is Mandalay Bay Hotel)
  • Travel back: Thursday the 31st of August from Las Vegas to Manchester (Thomas Cook – direct flight)

 

Session planning

One of the most important things anyone planning on attending VMworld should do (if you genuinely want to learn something at the event, that is) is to plan the breakout sessions you want to attend in advance, using the schedule builder. This year, I was very lucky to be able to get this booked in almost as soon as the schedule builder went live. Even then, some of the popular sessions were already fully booked, which shows how popular this event is.

Given below is a list of my planned sessions

  • Sunday the 27th of August
    • 4-4:30pm – How to Use CloudFormations in vRealize Automation to Build Hybrid Applications That Span and Reside On-Premises & on VMware Cloud on AWS and AWS Cloud [MMC1464QU]

 

  • Monday the 28th of August
    • 9am-10:30am – General session (you can find me at the specialist blogger seats right at the front of the hall)
    • 12:30-1:30pm – Accelerate the Hybrid Cloud with VMware Cloud on AWS [LHC3159SU]
    • 2:30-3:30pm – Addressing your General Data Protection Regulation (GDPR) Challenges with Security and Compliance Automation Based on VMware Cloud Foundation [GRC3386BUS]
    • 3:30-4:30pm – Big Data for the 99% (of Enterprises) [FUT2634PU]
    • 5:30-6:30pm – VMC Hybrid Cloud Architectural Deep Dive: Networking and Storage Best Practices [LHC3375BUS]

 

  • Tuesday the 29th of August
    • 9am-10:30am – General session (you can find me at the specialist blogger seats right at the front of the hall)
    • 1-2pm – A Two-Day VMware vRealize Operations Manager Customer Success Workshop in 60 Minutes [MGT2768GU]
    • 2-3pm – AWS Native Services Integration with VMware Cloud on AWS: Technical Deep Dive [LHC3376BUS]
    • 3-6:30pm – VMware NSX Community Leaders (vExperts) Summit at Luxor hotel
    • 7-10pm – vExpert Reception – VMworld U.S. 2017 at Pinball Hall of Fame
    • 10pm-12am – Rubrik VMworld Party (Featuring none other than Ice Cube) at Marquee @ Cosmopolitan

 

  • Wednesday the 30th of August
    • 10-11am – Automating vSAN Deployments at Any Scale [STO1119GU]
    • 11am-12pm – Creating Your VMware Cloud on AWS Data Center: VMware Cloud on AWS Fundamentals [LHC1547BU]
    • 12:30-1:30pm – 3 Ways to Use VMware’s New Cloud Services for Operations to Efficiently Run Workloads Across AWS, Azure and vSphere: VMware and Customer Technical Session [MMC3074BU]
    • 3:30-4:30pm – Intriguing Integrations with VMware Cloud on AWS, EC2, S3, Lambda, and More [LHC2281BU]
    • 7-10pm – VMworld Customer Appreciation Party

 

  • Thursday the 31st of August
    • 10:30-11:30am – NSX and VMware Cloud on AWS: Deep Dive [LHC2103BU]

 

I have left some time between sessions for blogging activities, various meetings, networking sessions and the hall crawl, which are all equally important as attending breakout sessions (if anything, they're more important, as the breakout session content will always be available online afterwards).

Thoughts & Predictions

VMworld is always a good event to attend and, going by past experience, it's a great event for finding out about new VMware initiatives and announcements, as well as all the related partner ecosystem solutions, from the established big boys as well as relatively new or up-and-coming start-ups that work with VMware technologies to offer new ways of solving today's business problems. I don't see this year's event being any different, and my guess would be that a lot of focus will be given to VMware's Cross Cloud Architecture (announced last year) and everything related to it. This could include the availability of VMware Cloud on AWS and potentially some NSX-related announcements that facilitate the cross cloud architecture for customers. We will have to wait and see, obviously.

I will be aiming to get a daily summary blog out covering the key announcements from each day and any new or exciting solutions I come across. You can also follow me on Twitter for some live commentary throughout the day.

If you are a VMware customer or partner, I would highly encourage you to attend VMworld at least once. It is a great event for learning new things but, most importantly, it's a great place to meet and gain access to back-end VMware engineering staff that average people never get to see or interact with. This is very valuable if you are a techie. If you are a business person, you can network with key VMware executives and product managers to understand the future strategy of their product lines and, collectively, that of VMware.

 

Apple WWDC 2017 – Artificial Intelligence, Virtual Reality & Mixed Reality

Introduction

As a technologist, I like to stay close to key new developments & trends in the world of digital technology, to understand how they can help users address common day-to-day problems more efficiently. Digital disruption and the technologies behind it, such as Artificial Intelligence (AI), IoT, Virtual Reality (VR), Augmented Reality (AR) & Mixed Reality (MR), are hot topics, as they have the potential to significantly reshape how consumers will consume products and services going forward. I am a keen follower of these disruptive technologies because, in my view, the potential impact they can have on traditional businesses in an increasingly digital, connected world is huge.

Something I’ve heard today coming out of Apple, the largest tech vendor on the planet about how they intend on using various AI technologies along with VR and AR technologies in their next product upgrades across the iPhone, iPad, Apple Watch, App store, Mac…etc made me want to summarise those announcements and add my thoughts on how Apple will potentially lead the way to mass adoption of such digital technologies by many organisations of tomorrow.

Apple’s WWDC 2017 announcements

I’ve been an Apple fan since the first iPhone launch as they have been the prime example when it comes to tech vendors who utilizes cutting edge IT technologies to provide an elegant solution to address day to day requirements in a simple and effective manner that providers a rich user experience. I practically live on my iPhone every day for work and non-work related activities and also appreciate their other ecosystem products such as the MacBook, Apple watch, Apple TV and the iPad. This is typically not because they are technologically so advanced, but simply because they provide a simple, seamless user experience when it comes to using them to increase my productivity during day to day activities.

So naturally I was keen to find out about the latest announcements from Apple's World Wide Developer Conference, held earlier today in San Jose. Having listened to the event and the announcements, I was excited by the new product and software upgrades, but more than that, I was super excited about a couple of related technology integrations Apple are coming out with, which mix AI, VR & AR to provide an even better user experience by integrating these technology advancements into their product offerings.

Now, before I go any further, I want to highlight that this is NOT a summary of their new product announcements. What interested me was not so much the new Apple products, but how Apple, as a pioneer in using cutting-edge technologies to create positive user experiences like no other technology vendor on the planet, is going to use these potentially revolutionary digital technologies. This is relevant to every single business out there that manufactures a product or provides a service or solution to its customers, as anyone can potentially incorporate the same capabilities in a similar, or even more creative and innovative, manner than Apple to provide a positive user experience.

Use of Artificial Intelligence

Today Apple announced the increased use of various AI technologies across future Apple products, as summarised below.

  • Increased use of Artificial Intelligence technologies by the personal assistant "Siri" to provide a more positive & more personalised user experience
    • In the upcoming version of watchOS 4 for Apple Watch, AI technologies such as machine learning are going to be used to power the new Siri watch face, so that Siri can now provide you with dynamic updates that are specifically relevant to you and what you do (context awareness)
    • The new iOS 11 will include a new voice for Siri, which now uses deep learning technologies (AI) behind the scenes to offer a more natural and expressive voice that sounds less machine and more human
    • Siri will also use machine learning on each device ("on-device learning") to understand specifically what's most relevant to you, based on what you do on your device, so that Siri can make more personalised interactions – in other words, Siri is becoming more context-aware thanks to machine learning, providing a truly personal assistant service unique to each user, including predictive tips based on what you are likely to want to do / use next
    • Siri will use machine learning to automatically memorise new words from the content you read (i.e. News), so these words are automatically included in the dictionary & predictive text if you want to type them

 

  • Use of machine learning in iOS 11 within the Photos app to enable various new capabilities that make life easier with your photos
    • The next version of the Apple Mac OS, code-named High Sierra, will support additional features in the Photos app, including advanced face recognition capabilities that utilise AI technologies such as advanced convolutional neural networks to let you group / filter your photos based on who is actually in them
    • Machine learning capabilities will also be used to automatically understand the context of each photo, based on its content, to identify photos from events such as sporting events, weddings etc. and automatically group them / create events / memories
    • Using computer vision capabilities to create seamless loops on Live Photos
    • Use of machine learning to activate palm rejection on the iPad when writing with the Apple Pencil
    • Most machine learning capabilities are now available to 3rd party programmers via iOS APIs, such as the Vision API (enabling iOS app developers to harness machine learning for face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking and image registration) and the Natural Language API (providing language identification, tokenisation, lemmatisation, part of speech tagging and named entity recognition)
    • Introduction of a machine learning model converter, so that 3rd party ML models can be converted to native iOS 11 Core ML models (a quick sketch of this follows below)
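
As a quick sketch of that last point (assuming the 2017-era coremltools Python package and a trained Keras model on disk; the file and layer names here are made up for illustration):

```python
# Convert a trained 3rd-party (Keras) model into a Core ML model that an
# iOS 11 app can embed. Requires: pip install coremltools keras
import coremltools

coreml_model = coremltools.converters.keras.convert(
    "digit_classifier.h5",                 # hypothetical trained model file
    input_names=["image"],
    output_names=["digit_probabilities"],
)

coreml_model.short_description = "Handwritten digit classifier"
coreml_model.save("DigitClassifier.mlmodel")  # drag the result into Xcode
```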

 

  • Use of machine learning to improve graphics on iOS 11
    • Other macOS High Sierra updates will include Metal 2 (the Apple API that provides app developers near-direct access to GPU capabilities), which will now integrate machine learning into graphics processing to provide advanced capabilities such as Metal performance shaders, recurrent neural network kernels, binary convolution, dilated convolution, L-2 norm pooling, dilated pooling etc. (https://developer.apple.com/metal/)
    • The newly announced iMac Pro graphics, powered by AMD Radeon Vega, can provide up to 22 teraflops of half-precision compute power, which is specifically relevant for machine learning content development

 

Use of Virtual Reality & Augmented Reality

  • Announcement of the Metal API for Virtual Reality for use by developers – this includes Virtual Reality integration in the macOS High Sierra Metal 2 API to enable features such as a VR-optimised display pipeline for video editing in VR, plus related updates such as viewport arrays, system trace stereo timelines, GPU queue priorities and frame debugger stereoscopic visualisation.
  • Availability of ARKit for iOS 11, to create Augmented Reality straight from an iPhone using its camera and built-in machine learning to identify content in live video, in real time.

 

Use of IoT capabilities

  • Apple Watch integration for bi-directional information synchronisation between the Apple Watch and ordinary gym equipment, so that your Apple Watch will now act as an IoT gateway to typical gym equipment like the treadmill or the cross trainer: you get more accurate measurements from the Apple Watch, and the gym equipment adjusts the workout based on those readings.
  • watchOS 4 will also provide Core Bluetooth connectivity to other devices, such as various healthcare tools, opening up the connectivity of those devices through the Apple Watch

 

My thoughts

The potential for digital technologies such as AI, AR & VR to create a better product / service / solution offering in a typical corporate or enterprise environment, the way Apple has used them, is immense, and is often only limited by one's creativity & imagination. Many organisations around the world, from tech or product vendors and Independent Software Vendors to ordinary organisations like a high street shop or a supermarket, can all benefit from creatively applying new digital technologies such as Artificial Intelligence, Augmented Reality and the Internet of Things to their product / service / solution offerings to provide their customers with a richer user experience as well as exciting new solutions. Topics like AI and AR are hot in the industry: some organisations are already evaluating their use, while others already benefit from these technologies being made easily accessible to the enterprise through platforms such as the public cloud (the Microsoft Cortana Analytics and Azure Machine Learning capabilities available on Microsoft Azure, for example). But there are also a large number of organisations that are not yet fully investigating how these technologies could make their business more innovative, differentiated or, at the very least, more efficient.

If you belong to the latter group, I would highly encourage you to start thinking about how these technologies can be adopted by your business creatively. This applies to any organisation of any size in the increasingly digitally connected world of today. If you have a trusted IT partner, I'd encourage you to talk to them about it too, as the chances are they will have more collective experience in helping similar businesses adopt such technologies, which is more beneficial than trying to get there on your own, especially if you are new to it all.

Digital disruption is here to stay, and Apple have just shown how the advanced technologies that come out of digital disruption can be used to create better products / solutions for everyday customers. Pretty soon, AI / AR / IoT-backed capabilities will become the norm rather than the exception, so how will your business compete if you are not adequately prepared to embrace them?

Keen to get your thoughts!

 

You can watch the recorded version of the Apple WWDC 2017 event here.

Image credit goes to #Apple

 

#Apple #WWDC #2017 #AI #DeepLearning #MachineLearning #ComputerVision #IoT #AR #AugmentedReality #VR #VirtualReality #DigitalDisruption #Azure #Microsoft

 

VMware vSAN 6.6 Release – Whats New

VMware has just announced the general availability of the latest version of vSAN, the backbone of their native Hyper-Converged Infrastructure offering with vSphere. vSAN has had a number of significant upgrades since its very first launch back in 2014 as version 5.5 (with vSphere 5.5), and each upgrade has added some very cool, innovative features that have driven customer adoption of vSAN significantly. The latest version, vSAN 6.6, is no different, and it appears to have by far the highest number of new features announced in an upgrade release.

Given below is a simple list of some of the key features of vSAN 6.6, the 6th generation of the product.

Additional native security features

  • HW-independent data-at-rest encryption (Software Defined Encryption)
    • Software-defined AES-256 encryption
    • Supported on all-flash and hybrid
    • Data is written already encrypted
    • Works with 3rd party KMS systems (a conceptual sketch of the key-wrapping pattern follows this list)
  • Built-in compliance with two-factor authentication (RSA SecurID and smart-card authentication)
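
As a conceptual sketch of the data-at-rest pattern above (my own illustration, not vSAN's actual implementation): a data encryption key (DEK) encrypts the disk data, and a key encryption key (KEK) obtained from the external KMS wraps the DEK, so keys can be rotated without re-encrypting all the data:

```python
# Envelope-encryption sketch using the 'cryptography' package. Fernet
# (AES-128) stands in for vSAN's AES-256 purely for illustration.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()   # stand-in for a key fetched from a 3rd party KMS
dek = Fernet.generate_key()   # per-disk-group data encryption key

wrapped_dek = Fernet(kek).encrypt(dek)              # only the wrapped DEK is persisted
ciphertext = Fernet(dek).encrypt(b"VM block data")  # data is written already encrypted

# On read: unwrap the DEK with the KEK, then decrypt the data.
plaintext = Fernet(Fernet(kek).decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"VM block data"
```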

Stretched clusters with local failure protection

With vSAN 6.6, if a site fails, the surviving site will still have local host and disk group protection (not the case with previous versions). A quick capacity example follows the list below.

  • RAID 1 over RAID 1/5/6 is supported on All Flash vSAN only.
  • RAID 1 over RAID 1 is supported on Hybrid vSAN only
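
A quick back-of-the-envelope capacity example of what local protection adds (my own arithmetic): RAID 1 across the two sites combined with RAID 5 within each site multiplies the raw capacity needed per VM.

```python
usable_gb = 100              # VM data
cross_site_copies = 2        # RAID 1 between the two sites
local_raid5_overhead = 4 / 3 # 3+1 RAID 5 within each site

raw_gb = usable_gb * cross_site_copies * local_raid5_overhead
print(f"~{raw_gb:.0f} GB raw for {usable_gb} GB usable")  # ~267 GB
```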

Proactive cloud analytics

This sounds kind of similar to Nimble's cloud analytics platform, which is popular with customers. Proactive cloud analytics uses vSAN support data collected globally to provide analytics through the vSAN health UI, along with performance optimisation advice for resolving performance issues.

Intelligent & Simpler operations

Simpler setup and post set up operations are achieved through a number of new features and capabilities. Some of the key features include,

  • Automated setup with 1 click installer & lifecycle management
  • Automated configuration & compliance checks for vSAN cluster (this was somewhat already available through vSAN health UI). Additions include,
    • Networking & cluster configurations assistance
    • New health checks for encryption, networking, iSCSI, re-sync operations
  • Automated controller firmware & driver upgrades
    • This automates the download and installation of VMware-supported drivers and firmware for various hard drives and RAID controllers (for the entire cluster), which is significant.
    • I think this is pretty key, as vSAN performance issues due to firmware mismatches (especially on Dell server HW) have been an issue for a while now.
  • Proactive data evacuation from failing drives
  • Rapid recovery with smart, efficient rebuild
  • Expanded Automation through vSAN SDK and PowerCLI

High availability

vSAN 6.6 now includes a highly available control plane, which means resilient management is now possible independently of vCenter.

Other key features

  • Increased performance
    • Optimized for latest flash technologies involving 1.6TB flash (Intel Optane drives anyone??)
    • Optimize performance with actionable insights
    • 30% faster sequential write performance
    • Optimized checksum and dedupe for flash
  • Certified file service and data protection (through 3rd party partners)
  • Native vRealize Operations integrations
  • Simple networking with Unicast
  • Real time support notification and recommendations
  • Simple vCenter install and upgrade
  • Support for Photon 1.1
  • Expanded caching tier choices

There you go: another key set of features added to vSAN with the 6.6 upgrade, which is great to see. If you are a VMware vSphere customer looking at a storage refresh for your vSphere cluster, or at a new vSphere / Photon / VIC requirement, it would be silly not to look into vSAN as opposed to legacy hardware SAN technologies from a legacy vendor (unless you have non-VMware requirements in the data center).

If you have any questions or thoughts, please feel free to comment / reach out

Additional details of what's new with VMware vSAN 6.6 are available at https://blogs.vmware.com/virtualblocks/2017/04/11/whats-new-vmware-vsan-6-6/

Cheers

Chan

 

Impact from Public Cloud on the storage industry – An insight from SNIA at #SFD12

As a part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel storage software team on the future of storage from an Intel perspective (you can read all about it here). While that was great, just before it we were treated to another similarly interesting session by SNIA – the Storage Networking Industry Association – and I wanted to brief everyone on what I learnt during that session, which I thought was very relevant to everyone with a vested interest in the field of IT today.

The presenters were Michael Oros, Executive Director at SNIA, along with Mark Carlson, who co-chairs the SNIA Technical Council.

Introduction to SNIA

SNIA is a non-profit organisation formed 20 years ago to deal with the interoperability challenges of network storage across different tech vendors. Today there are over 160 active member organisations (tech vendors) who work together behind closed doors to set standards and improve the interoperability of their often-competing tech solutions out in the real world. The alphabetical list of all SNIA members is available here, and it includes key network and storage vendors such as Cisco, Broadcom, Brocade, Dell, Hitachi, HPe, IBM, Intel, Microsoft, NetApp, Samsung & VMware. Effectively, anyone using most enterprise data center technologies has likely benefited from SNIA-defined industry standards and interoperability.

Some of the existing storage-related initiatives SNIA is working on were also covered (see the session recording linked at the end of this post).

Hyperscaler (Public Cloud) Storage Platforms

According to SNIA, Public cloud platforms, AKA Hyperscalers, such as AWS, Azure, Facebook, Google, Alibaba…etc are now starting to make an impact on how disk drives are designed and manufactured, given their large consumption of storage drives and therefore their vast buying power. In order to understand the impact of this on the rest of the storage industry, let's first clarify a few key basic points about these Hyperscaler cloud platforms (for those who didn't know).

  • Public Cloud providers DO NOT buy enterprise hardware components like the average enterprise customer
    • They DO NOT buy enterprise storage systems (Sales people please read “no EMC, no NetApp, no HPe 3PAR…etc.”)
    • They DO NOT buy enterprise networking gear (Sales people please read “no Cisco switches, no Brocade switches, no HPe switches…etc.”)
    • They DO NOT buy enterprise servers from server manufacturers (Sales people please read “no HPe/Dell/Cisco UCS servers…etc.”)
  • They build most things in-house
    • Often this includes servers, network switches…etc
  • They DO bulk-buy disk drives direct from the disk manufacturers & use home-grown Software Defined Storage techniques to provision that storage.

Now if you think about it, large enterprise storage vendors like Dell and NetApp, who normally bulk-buy disk drives from manufacturers such as Samsung, Hitachi, Seagate…etc, have always had a certain level of influence over how those drives are made, given their economies of scale (bulk purchasing power). However, the Public cloud providers, who also bulk-buy drives, often in quantities far bigger than those storage vendors ever did, have now become hugely influential over how these drives are made, to the point where their influence exceeds that of the legacy storage vendors. Their influence has grown such that Public Cloud providers now have a direct input into the initial design of components such as disk drives and how they are manufactured, simply due to their enormous bulk purchasing power, as well as their ability, given their global data center footprint, to test drive performance at a scale that was not previously possible even for the drive manufacturers.

 

The focus these platforms have on Software Defined Storage technologies to aggregate the disparate disk drives found in their data center servers is inevitably leading to various architectural changes in how disk drives need to be made going forward. For example, most legacy enterprise storage arrays rely on old RAID technology to rebuild data during drive failures, and disk drive firmware implements various background tasks such as ECC and IO re-try operations during failures, which add to the overall latency of the drive. However, with modern SDS technologies (in use within Public Cloud platforms as well as in some newer enterprise SDS vendors' tech), multiple copies of data are held on multiple drives automatically as part of the Software Defined Architecture (i.e. Erasure Coding), which means those drive-level background tasks such as ECC and re-try mechanisms are no longer required.
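As a toy illustration of why the drive-level recovery machinery becomes redundant, here is a minimal single-parity (RAID-4 style XOR) sketch in Python, my own simplification rather than any vendor's actual erasure coding, where the software layer rebuilds a lost drive's data from the survivors:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the parity calculation)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Data striped across three "drives", parity written to a fourth
data_drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_drives)

# Drive 1 fails: the SDS layer rebuilds its contents from survivors + parity,
# so the drive firmware never needs its own ECC / re-try heroics
lost = 1
survivors = [blk for i, blk in enumerate(data_drives) if i != lost]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_drives[lost]
print("rebuilt drive", lost, ":", rebuilt)
```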

For example, SNIA highlighted Eric Brewer, the VP of Infrastructure at Google, who talked about the key metrics for a modern disk drive being,

  • IOPS
  • Capacity
  • Lower tail latency (the long tail of latencies on a drive, arguably caused by various background tasks, typically means a 2-10x slower response from a disk in a RAID group, causing disk & SSD based RAID stripes to experience at least a single slow drive 1.5%-2.2% of the time; see the quick calculation after this list)
  • Security
  • Lower TCO
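To see how a small per-drive slowdown probability compounds across a stripe, here is a quick back-of-the-envelope calculation (my own illustration with made-up numbers, not SNIA's):

```python
def p_stripe_hits_slow_drive(p_drive_slow: float, stripe_width: int) -> float:
    """Probability that at least one drive in a stripe is in its slow tail,
    assuming the drives behave independently."""
    return 1 - (1 - p_drive_slow) ** stripe_width

# A drive that is slow just 0.2% of the time still taints wide stripes often:
for width in (4, 8, 16):
    pct = p_stripe_hits_slow_drive(0.002, width) * 100
    print(f"{width}-drive stripe: ~{pct:.2f}% of reads see tail latency")
# 4 -> ~0.80%, 8 -> ~1.59%, 16 -> ~3.15%
```

With these made-up numbers, an 8-wide stripe already sits in the same ballpark as the 1.5%-2.2% figure quoted above.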

So in a nutshell, Public cloud platform providers are now mandating various storage standards that disk drive manufacturers have to factor into their drive designs, such that drives are engineered from the ground up to work with the Software Defined architectures in use at these cloud provider platforms. What this means is that most native disk firmware operations are now made redundant; instead, the drive manufacturer provides APIs through which the cloud platform provider's own software logic controls those background operations itself, based on its software defined storage architecture.

Some of the key results of this approach include the following architectural designs for Public Cloud storage drives,

  • Higher layer software handles data availability and is resilient to component failure so the drive firmware itself doesn’t have to.
    • Reduces latency
  • Custom Data Center monitoring (telemetry), and management (configuration) software monitors the hardware and software health of the storage infrastructure so the drive firmware doesn’t have to
    • The Data Center monitoring software may detect these slow drives and mark them as failed (ref Microsoft Azure) to eliminate the latency issue.
    • The Software Defined Storage then automatically finds new places to replicate the data and protection information that was on that drive
  • Primary model has been Direct Attached Storage (DAS) with CPU (memory, I/O) sized to the servicing needs of however many drives of what type can fit in a rack’s tray (or two) – See the OCP Honey Badger
  • With the advent of higher speed interfaces (PCI NVMe) SSDs are moving off of the motherboard onto an extended PCIe bus shared with multiple hosts and JBOF enclosure trays – See the OCP Lightning proposal
  • Remove the drive's ability to schedule advanced background operations such as garbage collection, scrubbing, remapping, cache flushes, continuous self tests…etc on its own, and allow the host to affect the scheduling of these latency-increasing drive maintenance operations when it sees fit – effectively removing the drive control plane and moving it up to the control of the Public Cloud platform (SAS = SBC-4 background Operation Control, SATA = ACS-4 advanced background operations feature set, NVMe = available through NVMe sets)
    • Reduces unpredictable latency fluctuations & tail latency

The result of all this is that Public Cloud platform providers such as Microsoft, Google and Amazon are now also involved in setting industry standards through organisations such as SNIA, a task previously only done by hardware manufacturers. An example is the DePop standard, now approved at T13, which essentially defines a standard where the storage host, rather than the disk firmware, shrinks the usable size of the drive by removing the poor performing (slow) physical elements such as drive sectors from the LBA address space. The most interesting part is that drive manufacturers are now required to replace drives once enough usable space has shrunk to match the capacity of a full drive, without necessarily having the old drive back (i.e. Public cloud providers only pay for usable capacity and any unusable capacity is replenished with new drives), which is a totally different operational and commercial model to that of legacy storage vendors who consume drives from drive manufacturers.

Another concept pioneered by the Public cloud providers is called Streams, which maps lower level drive blocks to the upper level object (such as a file) that resides on them, so that all the blocks making up the file object are stored contiguously. This simplifies the effect of a TRIM or a SCSI UNMAP command (executed when the file is deleted from the file system), which reduces the delete penalty and causes the least amount of damage to SSD drives, extending their durability.
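A toy model of the idea (my own illustration, not the actual NVMe Streams directive API): tagging all writes for one file with the same stream keeps that file's blocks contiguous, so deleting the file becomes one contiguous TRIM range rather than scattered hole-punching.

```python
class StreamAwareSSD:
    """Toy allocator: each stream (e.g. one file) gets its own contiguous extent."""
    def __init__(self, capacity_blocks: int):
        self.capacity_blocks = capacity_blocks
        self.next_free = 0
        self.extents = {}  # stream_id -> (start_lba, length)

    def write(self, stream_id: str, num_blocks: int):
        # All blocks belonging to one stream are placed contiguously
        start = self.next_free
        self.next_free += num_blocks
        self.extents[stream_id] = (start, num_blocks)

    def delete(self, stream_id: str):
        # The whole file maps to one extent, so a single TRIM covers it:
        # minimal garbage-collection work for the SSD
        start, length = self.extents.pop(stream_id)
        print(f"TRIM LBAs {start}..{start + length - 1} in a single command")

ssd = StreamAwareSSD(1 << 20)
ssd.write("fileA", 256)
ssd.write("fileB", 512)
ssd.delete("fileA")  # -> TRIM LBAs 0..255 in a single command
```

The real mechanism lives in the drive and the NVMe/SCSI command sets; the point is simply that co-locating a file's blocks turns a delete into one cheap, contiguous deallocation.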

 

Future proposals from Public Cloud platforms

SNIA also mentioned future focus areas from these public cloud providers, such as,

  • Hyperscalers (Google, Microsoft Azure, Facebook, Amazon) are trying to get SSD vendors to expose more information about internal organization of the drives
    • Goal to have 200 µs Read response and 99.9999% guarantee for NAND devices
  • I/O Determinism means the host can control the order of writes and reads to the device to get predictable response times
    • Window of reading – deterministic responses
    • Window of writing and background – non-deterministic responses
  • The birth of ODM – Original Design Manufacturers
    • There is a new category of storage vendor called Original Design Manufacturer (ODM) direct, who package up best-in-class commodity storage devices into racks according to customer specifications and who operate at much lower margins.
    • They may leverage hardware/software designs from the Open Compute Project (OCP) or a similar effort in China called Scorpio, now under an organization called the Open Data Center Committee (ODCC), as well as from other available hardware/software designs.
    • SNIA also mentioned a few examples of large global enterprise organisations, such as a large bank, taking the approach of using ODMs to build a custom storage platform, achieving over 50% cost savings compared to traditional enterprise storage

 

My Thoughts

All of these changes introduced by the Public Cloud platforms are set to collectively change how the rest of the storage industry fundamentally operates, which I believe will be good for the end customers. Public cloud providers are often software vendors who approach every problem with a software-centric solution, typically a highly cost-efficient architecture using the cheapest commodity hardware underpinned by intelligent software. This will likely re-shape the legacy storage industry too, and we are already starting to see the early signs of it today in the sudden growth of enterprise-focused Software Defined Storage vendors while legacy storage vendors struggle with their storage revenue. All public cloud compute and storage platforms evolve continuously for cost efficiency, and each of their innovations in how storage is designed, built & consumed will trickle down to enterprise data centers in some shape or form to increase overall efficiencies, which surely is only a good thing, at least in my view. Smart enterprise storage vendors that are software focused will take note of such trends and adapt accordingly (i.e. SNIA mentioned that NetApp, for example, implemented the Stream commands on the front end of their arrays to increase the life of the SSD media), whereas legacy storage / hardware vendors who are effectively still hugging their tin will likely find future survival more than challenging.

Also, the concept of ODMs really interests me, and I can see the use of ODMs increasing further as more and more customers wake up to the fact that they have been overpaying for storage in the data center for the past 10-20 years due to the historically exclusive capabilities within the storage industry. With more of a focus on a Software Defined approach, there are potentially large cost savings to be had by following the ODM approach, especially if you are an organisation of a size that would benefit from those substantial savings.

I would be glad to get your thoughts through the comments below.

 

If you are interested in the full SNIA session, a recording of the video stream is available here and I'd recommend you watch it, especially if you are in the storage industry.

 

Thanks

Chan

P.S. Slide credit goes to SNIA and TFD

VMware & DataGravity Solution – Data Management For the Digital Enterprise

 

 

Yesterday, I had the privilege of being invited to an exclusive VMware #vExpert-only webinar organised by the vExpert community manager, Corey Romero, and DataGravity, one of VMware's partner ISVs, to get a closer look at the DataGravity solution and its integration with VMware. My initial impression was that it's a good solution and a good match with VMware technology too, and I kinda liked what I saw. So I decided to write a quick post about it to share what I've learned.

DataGravity Introduction

The DataGravity (DG from now on) solution appears to be all about data management, and in particular data management in a virtualised data center. In a nutshell, DG is about providing a simple, virtualisation-friendly data management solution that, amongst many other things, focuses on the following key requirements which are of primary importance to me.

  • Data awareness – Understand the different types of data available within VMs, structured or unstructured, along with various metadata about all that data. It automatically keeps track of data locations, status changes and various other metadata, including any sensitive content (i.e. credit card information), in the form of an easy-to-read, dashboard-style interface
  • Data protection & security – DG tracks sensitive data and provides a complete audit trail, including access history, to help remediate any potential loss or compromise of data

The DG solution is currently specific to VMware vSphere virtual datacenter platforms only and serves 4 key use cases (covered in the webinar).

Talking about the data visualisation itself, DG claims to provide a 360 degree view of all the data residing within your virtualised datacenter (on VMware vSphere), and having seen the UI in the live demo, I like the visualisation, which very much resembles the interface of VMware's own vRealize Operations screens.

The unified, tile-based view of all the data in your datacenter, with a vROPS-like context-aware UI, makes navigating through the information about your data pretty self-explanatory.

DG automatically tracks a wide range of information about all the data residing in the VMware datacenter (demonstrated during the webinar).

Some of the cool capabilities DG has when it comes to data protection itself include behaviour-based data protection, where it proactively monitors user and file activities and mitigates potential attacks by sensing anomalous behaviours and taking preventive measures, such as orchestrating protection points, alerting administrators and even blocking user access automatically.

During a recovery scenario, DG claims to assemble the forensic information needed to perform a quick recovery, such as catalogued files and incremental version information, user activity information and other key metadata such as the known good state of various files, which enables recovery within a few clicks.

Some Details

During the presentation, Dave Stevens (Technical Evangelist) took all the vExperts through the DG solution in some detail, including its integration with VMware vSphere, which I intend to share below for the benefit of all others (sales people: feel free to skip this section and read the next).

The whole DG solution is deployed as a simple OVA into vCenter and typically requires connecting the appliance to Microsoft Active Directory (for user access tracking) as a one-off initial task. It will then perform an automated initial discovery of data, and the important thing to note here is that it DOES NOT use an agent in each VM, but simply uses VMware VADP (now known as the vSphere Storage APIs) to silently interrogate the data that lives inside the VMs in the data center with minimal overhead. Some of the specifics around this overhead are as follows (see the sketch after the list).

  • File indexing is done at a DiscoveryPoint (snapshot), either on a schedule or user driven (no impact to real-time access from a performance point of view).
  • Real-time access tracking overhead is minimal to non-existent
    • Real-time user activity tracking uses about 200k of memory
    • Network bandwidth is about 50kbps per VM
    • Less than 1% of CPU
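For context, a DiscoveryPoint rides on an ordinary vSphere VM snapshot. Below is a minimal, hypothetical pyVmomi sketch of how any VADP-style tool triggers one; the host, credentials and VM name are illustrative, and this is my sketch of the general pattern, not DataGravity's actual code:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (lab-style: certificate verification disabled)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

# Find the target VM by name
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Take a quiesced, memory-less snapshot: the "DiscoveryPoint" that an
# agentless tool can then index out-of-band via the storage APIs
task = vm.CreateSnapshot_Task(name="DiscoveryPoint",
                              description="scheduled discovery point",
                              memory=False, quiesce=True)

Disconnect(si)
```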

From an integration point of view, while the DG solution integrates with vSphere VMs as above irrespective of the underlying storage platform, it also has the ability to integrate with specific storage vendors too (licensing prerequisites apply).

Once the initial data discovery is complete, further discoveries are done on an incremental basis. The management UI is a simple web interface which looks pretty neat.

Similar to VMware vROPS UI for example, the whole UI is context aware so depending on what object you select, you are presented with stats in the context of the selected object(s).

The usage tracking is quite granular and keeps track of all types of user access to the data in the inventory, which is handy.


 

Searching for files is simple, and you can also search using tags, which are simple binary expressions. Tags can be grouped together into profiles to search against, which looks pretty simple and efficient (a toy illustration follows below).
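To make the tag idea concrete, here is a toy model of tag-based search with binary (AND/OR) expressions grouped into a profile; the tag names and syntax are entirely my illustration, not DG's:

```python
# Hypothetical inventory: each file carries a set of tags
files = [
    {"name": "cards.xlsx",  "tags": {"sensitive", "finance"}},
    {"name": "notes.txt",   "tags": {"personal"}},
    {"name": "payroll.csv", "tags": {"finance", "hr"}},
]

def profile(f):
    """A 'profile': simple binary expressions over tags, grouped together."""
    return "finance" in f["tags"] and ("sensitive" in f["tags"] or "hr" in f["tags"])

matches = [f["name"] for f in files if profile(f)]
print(matches)  # ['cards.xlsx', 'payroll.csv']
```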

I know I've mentioned this already, but the simple, intuitive user interface, which presents the information about all your data in a single-pane-of-glass manner, looks very attractive.

Current Limitations

There are some current limitations to be aware of however and some of the important ones include,

  • Currently it doesn’t look inside structured data files (i.e. Database files for example)
    • Covers about 600 various file types
  • File content analytics is available for Windows VMs only at present
    • Linux may follow soon?
  • VMC (VMware Cloud on AWS) & VCF (VMware Cloud Foundation) support is not there (yet)
    • Is this to be announced during a potential big event?
  • No current availability on other public cloud platforms such as AWS or Azure (yet)

 

My Thoughts

I like the solution and its capabilities for various reasons. Primarily it's because focusing on the data that resides in your data center is more important now than it's ever been. Most organisations simply do not have a clue about the type of data they hold in a datacenter, typically scattered around various servers, systems, applications etc., often duplicated and, most importantly, left untracked as to its current relevance or even its actual usage (who accesses what). Often, most data generated by an organisation serves its initial purpose within a certain initial period, and that data is then simply kept on the systems forever, intentionally or unintentionally. This is a costly exercise, especially on the storage front, and you are typically filling your SAN storage with stale data. With a simple yet intelligent data management solution like DG, you now have the ability to automatically track data and its ageing across the datacenter, and to use that awareness of your data to potentially move stale data to a different tier, especially a cheaper tier such as a public cloud storage platform.

Furthermore, not having an understanding of data governance, specifically not monitoring data access across the datacenter, is another issue: many organisations do not collectively know what type of data exists where within the datacenter, how secure it is, or its access / usage history over its existence. Data security is probably the most important topic in the industry today, as organisations increasingly become digital thanks to the Digital revolution / Digital Enterprise phenomenon (in other words, every organisation is now becoming digital), and a guaranteed by-product of this is more and more DATA being generated, which often includes all, if not most, of an organisation's intellectual property. If there's no credible data management solution focused on securing such data, you are risking the livelihood of your organisation and its potential survival in a fiercely competitive global economy.

It is important to note that some regulatory compliance regimes have always enforced the use of data management & governance solutions such as DG to track such information about data and its security for certain types of data platforms (i.e. PCI for credit card information…etc). But the issue is that no such requirement existed for all the other types of data that live in your datacenter. This is about to change, at least here in Europe, thanks to the European GDPR (General Data Protection Regulation), which now legally obliges every organisation to be able to provide an auditable history of all types of data that they hold, and most organisations I know do not have a credible solution covering the whole datacenter to meet such demands regarding their data today.

A simple, easily integrated solution with little overhead like DataGravity, which for the most part harnesses the capabilities of the underlying infrastructure to track and manage the data that lives on it, could be extremely attractive to many customers. Most customers out there today use VMware vSphere as their preferred virtualisation platform, and the obvious integration with vSphere will likely work in favour of DG. I have already signed up for an NFR licence to download and deploy this software in my own lab to understand in detail how things work, and I will aim to publish a detailed deep-dive post on that soon. But in the meantime, I'd encourage anyone who runs a VMware vSphere based datacenter and is concerned about data management & security to check the DG solution out!!

I'm keen to get your thoughts if you are already using this in your organisation.

 

Cheers

Chan

Slide credit to VMware & DataGravity!

Storage Futures With Intel Software From #SFD12

 

As a part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel Storage software team about the future of storage from an Intel perspective. This was a great session, presented by Jonathan Stern (Intel Solutions Architect) and Tony Luck (Principal Engineer), and this post summarises a few things I learnt during those sessions that I thought would be quite interesting for everyone. (Prior to this session we also had a session from SNIA about the future of storage industry standards, but I think that deserves a dedicated post so I won't mention those here – stay tuned for a SNIA event specific post soon!)

The first session from Intel, on the future of storage, was by Jonathan. It's probably fair to say Jonathan was by far the most engaging presenter of all the SFD12 presenters, and he covered somewhat of a deep dive on Intel's plans for storage, specifically on the software side of things. The main focus was around the Intel Storage Performance Development Kit (SPDK), which Intel seems to think is going to be a key part of future storage efficiency enhancements.

The second session, with Tony, was about Intel Resource Director Technology (which addresses shared resource contention in the processor cache inside an Intel processor); in all honesty this is not something most of us storage or infrastructure guys need to know in detail, so my post below focuses on Jonathan's session only.

Future Of Storage

As far as Intel is concerned, there are 3 key areas when it comes to the future of storage that need to be looked at carefully.

  • Hyper-Scale Cloud
  • Hyper-Convergence
  • Non-Volatile memory

To put this into some context, the Wikibon Server SAN research project 2015 produced revenue projections comparing

  1. Traditional Enterprise storage such as SAN, NAS, DAS (Read “EMC, Dell, NetApp, HPe”)
  2. Enterprise server SAN storage (Read “Software Defined Storage OR Hyper-Converged with commodity hardware “)
  3. Hyperscale server SAN (Read “Public cloud”)

It is a known fact within the storage industry that public cloud storage platforms, underpinned by cheap commodity hardware and intelligent software, provide users with an easy to consume, easily available and, most importantly, non-CAPEX storage platform that most legacy storage vendors find hard to compete with. As such, the net new growth in global storage revenue from around 2012 has been predominantly within the public cloud (Hyperscaler) space, while the rest of the storage market (non-public cloud enterprise storage) as a whole has somewhat stagnated.

This somewhat stagnated market was traditionally dominated by a few storage stalwarts such as EMC, NetApp, Dell, HPe…etc. However, the rise of server-based SAN solutions, where commodity servers with local drives are combined with intelligent software to make a virtual SAN / storage pool (SDS/HCI technologies), has made matters worse for these legacy storage vendors, and such storage solutions are projected to eat further into the traditional enterprise storage landscape within the next 4 years. This is already evident in the recent popularity & growth of SDS/HCI solutions such as VMware vSAN, Nutanix, Scality and Hedvig while, at the same time, traditional storage vendors announce shrinking storage revenue. So much so that even some of the legacy enterprise storage vendors like EMC & HPe have come up with their own SDS / HCI offerings (EMC ViPR, HPe StoreVirtual, the announcement around a SolidFire based HCI solution…etc.) or partnered up with SDS/HCI vendors (EMC VxRail…etc.) to hedge their bets against a losing backdrop of traditional enterprise storage.

 

If you study the forecast into the future, around 2020-2022, it is estimated that the traditional enterprise storage market revenue & market share will be squeezed even further by even more rapid growth of server based SAN solutions such as SDS and HCI. (Good luck to legacy storage folks!)

An estimate from EMC suggests that by 2020, all primary storage for production applications will sit on flash based drives, which precisely coincides with the timelines in the above forecast, where the growth of Enterprise server SAN storage is set to accelerate between 2019-2022. According to Intel, one of the main reasons behind this forecasted growth in enterprise server SAN solutions is the development of Non-Volatile Memory (NVMe) based technologies, which make it possible to achieve very low latency through direct attached (read "locally attached") NVMe drives, along with clever & efficient software designed to harness that low latency. In other words, the drop in drive access latency will make Enterprise server SAN solutions more appealing to customers, who will favour Software Defined, Hyper-Converged storage solutions over external, array based storage solutions into the immediate future, and the legacy storage market will continue to shrink further and further.

I can relate to this prediction somewhat, as I work for a channel partner of most of these legacy storage vendors and I too have seen first-hand the drop in legacy storage revenue from our own customers, which reasonably backs this theory.

 

Challenges?

With the increasing push for Hyper-Convergence with data locality, latency becomes an important consideration. As such, Intel's (& the rest of the storage industry's) main focus going into the future is primarily around reducing the latency penalty incurred during a storage IO cycle as much as possible. The imminent release of Intel's next gen storage media, as a better alternative to NAND (which comes with inherent challenges such as tail latency issues that are difficult to get around), was mentioned without any specific details. I'm sure that was a reference to the Intel 3D XPoint drives (only just this week announced officially by Intel: http://www.intel.com/content/www/us/en/solid-state-drives/optane-solid-state-drives-dc-p4800x-series.html), and based on the published stats, the projected drive latencies are in the region of < 10μs (sequential IO) and < 200μs (random IO), which is super impressive compared to today's ordinary NAND based NVMe SSD drives. This however presents a concern: the current storage software stack, which processes IO through the CPU via costly context switching, also needs to be optimised in order to truly benefit from this massive drop in drive latency. In other words, the dependency on the CPU for IO processing needs to be removed or minimised through clever software optimisation (the CPU has long been the main IO bottleneck, due for example to how MSI-X interrupts are handled by the CPU during IO operations). Without this, the software-induced latency would be much higher than the drive media latency during an IO processing cycle, which would still contribute to a higher overall latency. (My friend & fellow #SFD12 delegate Glenn Dekhayser described this in his blog as "the media we're working with now has become so responsive and performant that the storage doesn't want to wait for the CPU anymore!", which is very true.)


Storage Performance Development Kit (SPDK)

Some companies, such as Excelero, are also addressing this CPU dependency of the IO processing software stack by using NVMe drives and clever software to offload processing from the CPU to the NVMe drives through technologies such as RDDA (refer to my post on how Excelero gets around this CPU dependency by reprogramming the MSI-X interrupts to not go to the CPU). SPDK is Intel's answer to this problem: whereas Excelero's RDDA architecture primarily avoids CPU dependency by bypassing the CPU for IO, Intel SPDK minimises the impact on CPU & memory bus cycles during IO processing by running the storage stack in user mode rather than kernel mode, thereby removing the need for costly context switching and the related interrupt handling overhead. According to http://www.spdk.io/, "The bedrock of the SPDK is a user space, polled mode, asynchronous, lockless NVMe driver that provides highly parallel access to an SSD from a user space application."
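SPDK itself is a set of C libraries, but the core pattern, polling a completion queue from user space instead of sleeping on kernel interrupts, can be sketched conceptually. This toy Python model is mine, purely to illustrate the polling pattern, and is not the SPDK API:

```python
import time

class FakeNvmeQueuePair:
    """Toy stand-in for an NVMe submission/completion queue pair mapped
    into user space (real SPDK does this in C over hugepage memory)."""
    def __init__(self):
        self.inflight = []

    def submit_read(self, lba, on_complete):
        # Pretend the device completes the IO ~10 microseconds from now
        self.inflight.append((time.perf_counter() + 10e-6, lba, on_complete))

    def poll_completions(self):
        # Polled mode: we check for completions ourselves; no interrupt,
        # no context switch into the kernel
        now = time.perf_counter()
        for io in [io for io in self.inflight if io[0] <= now]:
            self.inflight.remove(io)
            io[2](io[1])  # invoke the completion callback with the LBA

qp = FakeNvmeQueuePair()
done = []
for lba in range(4):
    qp.submit_read(lba, lambda lba: done.append(lba))

while len(done) < 4:       # busy-poll loop pinned to one core, SPDK-style
    qp.poll_completions()

print("completed LBAs:", done)
```

The trade-off is that the polling core runs hot by design; SPDK's bet is that dedicating a core to IO is cheaper than paying an interrupt and context switch per IO at NVMe speeds.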

With SPDK, Intel claims that you can reach up to around 3.6 million IOPS per single Xeon CPU core before running out of PCIe lane bandwidth, which is pretty impressive. The benchmark shown compared stock CentOS Linux kernel IO performance (running across 2 x Xeon E5-2965 2.10 GHz CPUs, each with 18 cores, plus 1-8 x Intel P3700 NVMe SSD drives) against SPDK with a single dedicated 2.10 GHz core allocated for IO. The IO performance with SPDK was clearly significantly better: despite having just a single core, thanks to the lack of context switching and the related overhead, it scaled IO throughput linearly with the number of NVMe SSD drives.

(In addition to this testing, Jonathan also mentioned that they'd done another test with off-the-shelf Supermicro HW, and with SPDK & 2 dedicated cores for IO they were able to get 5.6 million IOPS before running out of PCIe bandwidth, which was impressive.)

 

SPDK Applications & My Thoughts

SPDK is an end-to-end reference storage architecture & a set of drivers (C libraries & executables) to be used by OEMs and ISVs when integrating disk hardware. According to Intel's SPDK introduction page, the goal of SPDK is to highlight the outstanding efficiency and performance enabled by using Intel's networking, processing and storage technologies together. SPDK is freely available as an open source product through GitHub. It also provides NVMe-oF (NVMe over Fabrics) and iSCSI servers built using the SPDK architecture on top of the user space drivers, which are even capable of servicing disks over the network. Now this can potentially revolutionise how the storage industry builds its next generation storage platforms. Consider, for example, any SDS or even a legacy SAN manufacturer using this architecture to optimise the CPU on their next generation All Flash storage array. Take the NetApp All Flash FAS platform: it is known to have a ton of software based data management services within ONTAP that currently compete for CPU cycles with IO, and often have to scale down data management tasks during heavy IO processing. With the Intel SPDK architecture, ONTAP could free up more CPU cycles to be used by more data management services, and even double up on various other additional services, without any impact on critical disk IO. It's all hypothetical of course, as I'm just thinking out loud here; it would require NetApp to run ONTAP on Intel CPUs and Intel NVMe drives…etc, but it's doable & makes sense, right? Imagine the day where you can run "reallocate -p" during peak IO times without grinding the whole SAN to a halt :-). I'm probably exaggerating its potential here, but the point is that SPDK-driven IO efficiencies could apply to all storage array manufacturers (especially all flash arrays), who could potentially start creating some super efficient, ultra low latency, NVMe drive based storage arrays that also include a ton of data management services that would previously have been too taxing on the CPU (think inline dedupe, inline compression, inline encryption, everything inline…etc.), on 24×7 by default rather than just during off peak times, due to zero impact on disk IO.

Another great place to apply SPDK is within virtualisation, for VM IO efficiency. Using SPDK with QEMU (as presented in the session) has resulted in some good IO performance for VMs.

 

Imagine, for example, a VMware vSAN driver built using the Intel SPDK architecture, running inside user space and using a dedicated CPU core to perform all IO: what would the possible IO performance be? VMware currently performs IO virtualisation in the kernel, but imagine if SPDK was used and IO virtualisation for vSAN was changed to run software based, inside user space; would it be worth the performance gain and reduced latency? (I did ask the question, and Intel confirmed there is no joint engineering currently taking place on this front between the 2 companies.) What about other VSA based HCI solutions? Take someone like Nutanix Acropolis: Nutanix could happily re-write their IO virtualisation to happen within user space using SPDK for superior IO performance.

An Intel & Alibaba Cloud case study in which the use of SPDK was benchmarked showed notable IOPS and latency improvements.

NVMe over Fabrics is also supported with SPDK, and some use cases were discussed, specifically relating to virtualisation, where VMs tend to move between hosts and a unified NVMe-oF API that talks to both local and remote NVMe drives is now available (some parts of the SPDK stack become available in Q2 FY17).

Using SPDK seems quite beneficial for existing NAND media based NVMe storage, but most importantly for newer generation non-NAND media, to bring the total overall latency down. However, that does mean significantly changing the architecture to process IO in user mode as opposed to kernel mode, which I presume is how almost all storage systems, Software Defined or otherwise, work today, and I am unsure whether changing them to user mode with SPDK is going to be a straightforward process. It would be good to see some joint engineering, or other storage vendors evaluating the use of SPDK, to see whether the said latency & IO improvements are realistic in complex storage solution systems.

I like the fact that Intel has made SPDK open source to encourage others to freely utilise (& contribute back to) the framework, but I guess what I'm not sure about is whether it's tied to Intel NVMe drives & Intel processors.

If anyone wants to watch the recorded videos of our sessions from #SFD12, the links are as follows

  1. Jonathan’s session on SPDK
  2. Tony’s session on RDT

Cheers

Chan

#SFD12 #TechFieldDay @IntelStorage