VMware vSAN 6.6 Release – What's New

VMware has just announced the general availability of the latest version of vSAN, the backbone of their native Hyper-Converged Infrastructure offering with vSphere. vSAN has had a number of significant upgrades since its first launch back in 2014 as version 5.5 (with vSphere 5.5), and each upgrade has added some very cool, innovative features that have driven customer adoption significantly. The latest version, vSAN 6.6, is no different, and it appears to have by far the highest number of new features announced in a single upgrade release.

Given below is a simple list of some of the key features of vSAN 6.6, the 6th generation of the product.

Additional native security features

  • HW-independent data-at-rest encryption (Software Defined Encryption)
    • Software-defined AES-256 encryption
    • Supported on both all-flash and hybrid configurations
    • Data is written already encrypted
    • Works with 3rd-party KMS systems
  • Built-in compliance with two-factor authentication (RSA SecurID and smart-card authentication)
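Data-at-rest encryption schemes that integrate with an external KMS, vSAN's included, generally follow the envelope-encryption model: per-disk data encryption keys (DEKs) are wrapped by a key encryption key (KEK) held by the KMS. Here is a minimal conceptual sketch of that flow; the XOR keystream stands in for AES-256 purely for readability, and the names are illustrative, not the actual vSAN/KMS API.

```python
# Toy sketch of the KEK/DEK (envelope encryption) model behind
# KMS-integrated data-at-rest encryption. The XOR keystream below is a
# stand-in for AES-256; names are illustrative only.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from a key via counter-mode SHA-256 (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(key: bytes, data: bytes) -> bytes:
    """'Encrypt' data under key (XOR with the derived keystream)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The KMS holds the KEK; each disk gets its own DEK.
kek = secrets.token_bytes(32)       # obtained from the 3rd-party KMS
dek = secrets.token_bytes(32)       # per-disk data encryption key
wrapped_dek = wrap(kek, dek)        # only the wrapped DEK is persisted

# Data is written to disk already encrypted under the DEK.
block = b"guest VM data block"
ciphertext = wrap(dek, block)

# On host boot, the DEK is unwrapped using the KEK fetched from the KMS.
recovered_dek = wrap(kek, wrapped_dek)
assert recovered_dek == dek
assert wrap(recovered_dek, ciphertext) == block
```

The useful property this illustrates: rotating the KEK at the KMS only requires re-wrapping the small DEKs, not re-encrypting all the data on disk.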

Stretched clusters with local failure protection

With vSAN 6.6, if a site fails, the surviving site still retains local host and disk-group protection (this was not the case with previous versions).

  • RAID 1 over RAID 1/5/6 is supported on All Flash vSAN only.
  • RAID 1 over RAID 1 is supported on Hybrid vSAN only
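The capacity cost of these nested protection schemes is easy to reason about: data is mirrored across the two sites (2x) and then protected again locally within each site. A rough sketch using the standard vSAN policy multipliers (local RAID 1 = 2x, RAID 5 = 3+1 parity = 1.33x, RAID 6 = 4+2 parity = 1.5x); real sizing would also need slack space, so treat this as a back-of-envelope calculation only.

```python
# Rough capacity math for stretched-cluster local failure protection:
# RAID 1 across sites (2x) combined with local RAID 1/5/6 per site.
# Multipliers are the standard vSAN policy overheads; slack space and
# metadata overhead are ignored in this sketch.

LOCAL_MULTIPLIER = {
    "RAID1": 2.0,    # local mirror
    "RAID5": 4 / 3,  # 3 data + 1 parity
    "RAID6": 1.5,    # 4 data + 2 parity
}

def raw_capacity_needed(vm_gb: float, local_scheme: str) -> float:
    """Raw GB consumed cluster-wide: site mirror (2x) times the
    per-site local protection multiplier."""
    return vm_gb * 2 * LOCAL_MULTIPLIER[local_scheme]

for scheme in ("RAID1", "RAID5", "RAID6"):
    print(f"100 GB VM with RAID 1 over {scheme}: "
          f"{raw_capacity_needed(100, scheme):.1f} GB raw")
```

So a 100 GB VM costs 400 GB raw with RAID 1 over RAID 1, but only about 267 GB with RAID 1 over RAID 5, which is why the all-flash-only erasure-coding option matters for stretched clusters.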

Proactive cloud analytics

This sounds somewhat similar to Nimble's cloud analytics platform, which is popular with customers. Proactive cloud analytics uses support data collected from vSAN deployments globally to provide analytics through the vSAN health UI, along with performance optimisation advice for resolving performance issues.

Intelligent & Simpler operations

Simpler setup and post-setup operations are achieved through a number of new features and capabilities. Some of the key features include:

  • Automated setup with 1 click installer & lifecycle management
  • Automated configuration & compliance checks for vSAN cluster (this was somewhat already available through vSAN health UI). Additions include,
    • Networking & cluster configurations assistance
    • New health checks for encryption, networking, iSCSI, re-sync operations
  • Automated controller firmware & driver upgrades
    • This automates the download and installation of VMware-supported drivers for various drives and RAID controllers (for the entire cluster), which is significant.
    • I think this is pretty key, as vSAN performance issues caused by firmware mismatches (especially on Dell server HW) have been a problem for a while now.
  • Proactive data evacuation from failing drives
  • Rapid recovery with smart, efficient rebuild
  • Expanded Automation through vSAN SDK and PowerCLI
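The expanded automation means checks like the ones above can be scripted rather than clicked through. The real path is PowerCLI or the vSAN Management SDK (pyVmomi plus the vSAN health API); as a hedged illustration of the idea, here is a toy summariser over the kind of grouped pass/fail results those APIs return. The data shape and check names below are hypothetical, not actual SDK objects.

```python
# Toy sketch of scripting vSAN health checks. The vSAN health API
# returns groups of tests with a status per test; the dict shape and
# test names below are illustrative stand-ins for those results.

def failing_checks(health_results):
    """Return (group, test) pairs for every non-green health test."""
    return [(group, test)
            for group, tests in health_results.items()
            for test, status in tests
            if status != "green"]

# Example payload shaped like a vSAN health summary (hypothetical names).
results = {
    "Network": [("All hosts have a vSAN vmknic", "green"),
                ("vSAN cluster partition", "green")],
    "Hardware compatibility": [("Controller driver is VMware certified", "red")],
    "Encryption": [("KMS reachable from all hosts", "yellow")],
}

for group, test in failing_checks(results):
    print(f"ATTENTION [{group}] {test}")
```

A nightly job built along these lines (against the real SDK) is exactly the kind of thing the new automation hooks make practical.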

High availability

vSAN 6.6 now includes a highly available control plane, which means resilient management is now possible independently of vCenter.

Other key features

  • Increased performance
    • Optimized for latest flash technologies involving 1.6TB flash (Intel Optane drives anyone??)
    • Optimize performance with actionable insights
    • 30% faster sequential write performance
    • Optimized checksum and dedupe for flash
  • Certified file service and data protection (through 3rd party partners)
  • Native vRealize Operations integrations
  • Simple networking with Unicast
  • Real time support notification and recommendations
  • Simple vCenter install and upgrade
  • Support for Photon 1.1
  • Expanded caching tier choices

There you go: another key set of features added to vSAN with the 6.6 upgrade, which is great to see. If you are a VMware vSphere customer looking at a storage refresh for your vSphere cluster, or at a new vSphere / Photon / VIC requirement, it would be silly not to look into vSAN rather than legacy hardware SAN technologies from a legacy vendor (unless you have non-VMware requirements in the data center).

If you have any questions or thoughts, please feel free to comment or reach out.

Additional details of what's new with VMware vSAN 6.6 are available at https://blogs.vmware.com/virtualblocks/2017/04/11/whats-new-vmware-vsan-6-6/

Cheers

Chan

 

VMware & DataGravity Solution – Data Management For the Digital Enterprise

 

 

Yesterday, I had the privilege of being invited to an exclusive VMware #vExpert-only webinar organised by the vExpert community manager, Corey Romero, and DataGravity, one of their partner ISVs, to get a closer look at the DataGravity solution and its integration with VMware. My initial impression was that it's a good solution and a good match with VMware technology, and I liked what I saw, so I decided to write a quick post to share what I've learned.

DataGravity Introduction

The DataGravity (DG from now on) solution appears to be all about data management, and in particular data management in a virtualised data center. In a nutshell, DG provides a simple, virtualisation-friendly data management solution that, amongst many other things, focuses on the following key requirements, which are of primary importance to me.

  • Data awareness – Understands the different types of data available within VMs, structured or unstructured, along with various metadata about that data. It automatically keeps track of data locations, status changes and other metadata, including any sensitive content (i.e. credit card information), in the form of an easy-to-read, dashboard-style interface
  • Data protection & security – DG tracks sensitive data and provides a complete audit trail, including access history, helping remediate any potential loss or compromise of data

The DG solution is currently specific to VMware vSphere virtual datacenter platforms only and serves 4 key use cases, as shown below.

Talking about the data visualisation itself, DG claims to provide a 360-degree view of all the data that resides within your virtualised datacenter (on VMware vSphere), and having seen the UI in the live demo, I like the visualisation, which very much resembles the interface of VMware's own vRealize Operations screen.

The unified, tile based view of all the data in your datacenter with vROPS like context aware UI makes navigating through the information about data pretty self explanatory.

Some of the information that DG automatically tracks on all the data residing in the VMware datacenter is shown below.

Some of DG's cool capabilities when it comes to data protection include behaviour-based data protection, where it proactively monitors user and file activities and mitigates potential attacks by sensing anomalous behaviour and taking preventive measures such as orchestrating protection points, alerting administrators, and even blocking user access automatically.

During a recovery scenario, DG claims to assemble the forensic information needed to perform a quick recovery, such as a catalogue of files with incremental version information, user activity information, and other key metadata such as the known good state of various files, which enables recovery in a few clicks.

Some Details

During the presentation, Dave Stevens (Technical Evangelist) took all the vExperts through the DG solution in some detail, including its integration with VMware vSphere, which I intend to share below for the benefit of others (sales people: feel free to skip this section and read the next).

The whole DG solution is deployed as a simple OVA into vCenter and typically requires connecting the appliance to Microsoft Active Directory (for user access tracking) as a one-off initial task. It then performs an automated initial discovery of data, and the important thing to note here is that it DOES NOT use an agent in each VM, but simply uses VMware VADP (now known as the vSphere Storage APIs – Data Protection) to silently interrogate the data that lives inside the VMs in the data center with minimal overhead. Some specifics around this overhead are as follows:

  • File indexing is done at a DiscoveryPoint (snapshot), either on a schedule or user-driven (no performance impact on real-time access).
  • Real-time access tracking overhead is minimal to non-existent:
    • Real-time user activity tracking uses about 200 KB of memory
    • Network bandwidth is about 50 kbps per VM
    • Less than 1% of CPU
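To put those per-VM figures in context, it is worth aggregating them across a cluster. A quick sanity-check of the quoted numbers (memory and network only; the sub-1% CPU figure doesn't aggregate as simply):

```python
# Aggregate DataGravity's quoted per-VM tracking overhead (~200 KB
# memory, ~50 kbps network per VM) across a cluster, to sanity-check
# the "minimal overhead" claim at scale.

PER_VM_MEMORY_KB = 200
PER_VM_NETWORK_KBPS = 50

def cluster_overhead(vm_count: int):
    """Return (memory MB, network Mbps) for vm_count tracked VMs."""
    memory_mb = vm_count * PER_VM_MEMORY_KB / 1024
    network_mbps = vm_count * PER_VM_NETWORK_KBPS / 1000
    return memory_mb, network_mbps

for vms in (100, 500, 1000):
    mem, net = cluster_overhead(vms)
    print(f"{vms:>4} VMs: ~{mem:.0f} MB memory, ~{net:.0f} Mbps network")
```

Even at 1,000 VMs that works out to under 200 MB of memory and about 50 Mbps of traffic cluster-wide, which does back up the low-overhead claim.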

From an integration point of view, while the DG solution integrates with vSphere VMs as above irrespective of the underlying storage platform, it also has the ability to integrate with specific storage vendors (licensing prerequisites apply).

Once the data discovery is complete, further discoveries are done on an incremental basis and the management UI is a simple web interface which looks pretty neat.

Similar to VMware vROPS UI for example, the whole UI is context aware so depending on what object you select, you are presented with stats in the context of the selected object(s).

The usage tracking is quite granular and keeps a track of all types of user access for data in the inventory which is handy.


 

Searching for files is simple, and you can also search using tags, which are simple binary expressions. Tags can be grouped into profiles to search against, which looks pretty simple and efficient.
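Conceptually, a tag is just a binary (match / no-match) predicate over a file's attributes, and a profile is a group of tags evaluated together. A toy sketch of that model; the predicates, file shape and names here are my own illustration, not DataGravity's actual tag engine.

```python
# Toy sketch of tag-based search: each tag is a binary predicate over a
# file's attributes; a profile groups tags that must all match. All
# names and the file shape are illustrative, not DataGravity's API.

TAGS = {
    "pdf": lambda f: f["name"].lower().endswith(".pdf"),
    "large": lambda f: f["size_mb"] > 100,
    "finance-share": lambda f: f["path"].startswith("/finance"),
}

# A profile groups tags together; all must match.
PROFILES = {"sensitive-finance-docs": ["pdf", "finance-share"]}

def search(files, profile):
    tags = PROFILES[profile]
    return [f["name"] for f in files
            if all(TAGS[t](f) for t in tags)]

files = [
    {"name": "q3-report.pdf", "size_mb": 2, "path": "/finance/reports"},
    {"name": "holiday.jpg", "size_mb": 5, "path": "/finance/misc"},
    {"name": "invoice.pdf", "size_mb": 1, "path": "/sales"},
]
print(search(files, "sensitive-finance-docs"))  # ['q3-report.pdf']
```

The appeal of the profile idea is that a compliance definition ("PDFs on the finance share") becomes a reusable, named query rather than an ad-hoc search each time.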

I know I've mentioned this already, but the simple, intuitive user interface, presenting information about all your data in a single-pane-of-glass manner, looks very attractive.

Current Limitations

There are some current limitations to be aware of however and some of the important ones include,

  • Currently it doesn’t look inside structured data files (i.e. database files, for example)
    • Covers about 600 different file types
  • File content analytics is available for Windows VMs only at present
    • Linux may follow soon?
  • VMC (VMware Cloud on AWS) & VCF (VMware Cloud Foundation) support is not there (yet)
    • Is this to be announced during a potential big event?
  • No current availability on other public cloud platforms such as AWS or Azure (yet)

 

My Thoughts

I like the solution and its capabilities for various reasons. Primarily, it's because the focus on the data that resides in your data center is more important now than it has ever been. Most organisations simply do not have a clue about the type of data they hold in a datacenter, typically scattered around various servers, systems, applications etc., often duplicated and, most importantly, left untracked in terms of its current relevance or even actual usage (who accesses what). Often, data generated by an organisation serves its initial purpose within a certain initial period and is then simply kept on the systems forever, intentionally or unintentionally. This is a costly exercise, especially on the storage front, and you typically end up filling your SAN storage with stale data. With a simple yet intelligent data management solution like DG, you now have the ability to automatically track data and its ageing across the datacenter and use that awareness to potentially move stale data to a different, cheaper tier, such as a public cloud storage platform.

Furthermore, a lack of data governance, specifically not monitoring data access across the datacenter, is another issue: many organisations do not collectively know what type of data is available where within the datacenter and how secure that data is, including its access / usage history. Data security is probably the most important topic in the industry today. As organisations increasingly become digital thanks to the digital revolution / Digital Enterprise phenomenon (in other words, every organisation is now becoming digital), a guaranteed by-product is more and more DATA being generated, which often includes most if not all of an organisation's intellectual property. If there's no credible way of providing a data management solution focused on the security of such data, you are risking the livelihood of your organisation and its potential survival in a fiercely competitive global economy.

It is important to note that some regulatory compliance regimes have always enforced the use of data management & governance solutions such as DG to track such information about data and its security for certain types of data platforms (i.e. PCI for credit card information, etc.). But no such requirement existed for all types of data living in your datacenter. This is about to change, at least here in Europe, thanks to the European GDPR (General Data Protection Regulation), which now legally obliges every organisation to be able to provide an auditable history of all types of data they hold, and most organisations I know do not have a credible solution covering the whole datacenter to meet such demands regarding their data today.

A simple, easily integrated solution with little overhead like DataGravity, which for the most part harnesses the capabilities of the underlying infrastructure to track and manage the data living on it, could be extremely attractive to many customers. Most customers out there today use VMware vSphere as their preferred virtualisation platform, and the obvious integration with vSphere will likely work in DG's favour. I have already signed up for an NFR licence so I can download and deploy this software in my own lab to understand in detail how things work, and I will aim to publish a detailed deep-dive post on that soon. In the meantime, I'd encourage anyone running a VMware vSphere based datacenter who is concerned about data management & security to check the DG solution out!!

I'm keen to get your thoughts if you are already using this in your organisation.

 

Cheers

Chan

Slide credit to VMware & DataGravity!

Storage Futures With Intel Software From #SFD12

 

As part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel storage software team on the future of storage from an Intel perspective. This was a great session presented by Jonathan Stern (Intel Solutions Architect) and Tony Luck (Principal Engineer), and this post summarises a few things I learnt during those sessions that I thought were quite interesting. (Prior to this session we also had a session from SNIA on the future of storage industry standards, but I think that deserves a dedicated post, so I won't cover it here – stay tuned for a SNIA event specific post soon!)

The first session from Intel, on the future of storage, was by Jonathan. It's probably fair to say Jonathan was by far the most engaging presenter of all the SFD12 presenters, and he covered somewhat of a deep dive on Intel's plans for storage, specifically on the software side of things. The main focus was the Intel Storage Performance Development Kit (SPDK), which Intel seems to think is going to be a key part of future storage efficiency enhancements.

The second session, with Tony, was about Intel Resource Director Technology (which addresses shared resource contention inside the processor cache), which, in all honesty, is not something most of us storage or infrastructure folks need to know in detail. So my post below focuses on Jonathan's session only.

Future Of Storage

As far as Intel is concerned, there are 3 key areas when it comes to the future of storage that need to be looked at carefully.

  • Hyper-Scale Cloud
  • Hyper-Convergence
  • Non-Volatile memory

To put this into some context, see the revenue projections below from the Wikibon Server SAN research project (2015), comparing the projections for:

  1. Traditional Enterprise storage such as SAN, NAS, DAS (Read “EMC, Dell, NetApp, HPe”)
  2. Enterprise server SAN storage (Read “Software Defined Storage OR Hyper-Converged with commodity hardware “)
  3. Hyperscale server SAN (Read “Public cloud”)

It is a known fact within the storage industry that public cloud storage platforms, underpinned by cheap commodity hardware and intelligent software, provide users with an easy-to-consume, readily available and, most importantly, non-CAPEX storage platform that most legacy storage vendors find hard to compete with. As such, the net new growth in global storage revenue since around 2012 has been predominantly within the public cloud (hyperscaler) space, while the rest of the storage market (non-public-cloud enterprise storage) has somewhat stagnated.

This somewhat stagnated market was traditionally dominated by a few storage stalwarts such as EMC, NetApp, Dell, HPe etc. However, the rise of server-based SAN solutions, where commodity servers with local drives are combined with intelligent software to form a virtual SAN / storage pool (SDS/HCI technologies), has made matters worse for these legacy storage vendors, and such storage solutions are projected to eat further into the traditional enterprise storage landscape within the next 4 years. This is already evident in the recent popularity & growth of SDS/HCI solutions such as VMware VSAN, Nutanix, Scality and HedVig, while at the same time traditional storage vendors announce shrinking storage revenue. So much so that even some of the legacy enterprise storage vendors like EMC & HPe have come up with their own SDS / HCI offerings (EMC ViPR, HPe StoreVirtual, the announcement around a SolidFire-based HCI solution, etc.) or partnered up with SDS/HCI vendors (EMC VxRail etc.) to hedge their bets against a losing backdrop of traditional enterprise storage.

 

If you study the forecast into the future, around 2020-2022, it is estimated that traditional enterprise storage revenue & market share will be squeezed even further by the more rapid growth of server-based SAN solutions such as SDS and HCI. (Good luck to the legacy storage folks.)

An estimate from EMC suggests that by 2020 all primary storage for production applications will sit on flash-based drives, which precisely coincides with the timeline in the above forecast, where the growth of enterprise server SAN storage is set to accelerate between 2019-2022. According to Intel, one of the main reasons behind this forecasted revenue growth for enterprise server SAN solutions is the development of Non-Volatile Memory (NVMe) based technologies, which make it possible to achieve very low latency through direct-attached (read "locally attached") NVMe drives, along with clever & efficient software designed to harness that low latency. In other words, the drop in drive access latency will make enterprise server SAN solutions more appealing to customers, who will favour Software Defined, Hyper-Converged storage solutions over external, array-based storage solutions in the immediate future, and the legacy storage market will continue to shrink further and further.

I can relate to this prediction somewhat, as I work for a channel partner of most of these legacy storage vendors, and I too have seen first-hand the drop in legacy storage revenue from our own customers, which reasonably backs this theory.

 

Challenges?

With the increasing push for Hyper-Convergence with data locality, latency becomes an important consideration. As such, Intel's (& the rest of the storage industry's) main focus going into the future is primarily on reducing the latency penalty of a storage IO cycle as much as possible. The imminent release of next-gen storage media from Intel, as a better alternative to NAND (which comes with inherent challenges such as tail latency issues that are difficult to get around), was mentioned without specific details. I'm sure that was a reference to the Intel 3D XPoint drives (officially announced by Intel only this week: http://www.intel.com/content/www/us/en/solid-state-drives/optane-solid-state-drives-dc-p4800x-series.html), and based on the published stats, the projected drive latencies are in the region of < 10μs (sequential IO) and < 200μs (random IO), which is super impressive compared with today's ordinary NAND-based NVMe SSD drives. This, however, presents a concern: the current storage software stack, which processes IO through the CPU via costly context switching, also needs to be optimised in order to truly benefit from this massive drop in drive latency. In other words, the dependency on the CPU for IO processing needs to be removed or minimised through clever software optimisation (the CPU has long been the main IO bottleneck, for example due to how MSI-X interrupts are handled by the CPU during IO operations). Without this, the software-induced latency would be much higher than the drive media latency during an IO processing cycle, which would still contribute to a higher overall latency. (My friend & fellow #SFD12 delegate Glenn Dekhayser described this in his blog as "the media we're working with now has become so responsive and performant that the storage doesn't want to wait for the CPU anymore!", which is very true.)
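Simple latency-budget arithmetic shows why the software stack suddenly matters so much as media gets faster. The per-stage figures below are ballpark assumptions for illustration only, not measured values.

```python
# Rough latency-budget arithmetic: as media latency falls, a fixed
# software-stack cost (context switches, interrupt handling) becomes
# the dominant share of end-to-end IO latency. Figures are assumed
# ballpark values for illustration, not measurements.

def end_to_end_us(media_us: float, software_us: float) -> float:
    return media_us + software_us

SOFTWARE_STACK_US = 20.0  # assumed kernel-stack + interrupt overhead

for media, label in [(90.0, "NAND NVMe SSD"), (10.0, "3D XPoint-class media")]:
    total = end_to_end_us(media, SOFTWARE_STACK_US)
    share = SOFTWARE_STACK_US / total * 100
    print(f"{label}: media {media:.0f}us + software {SOFTWARE_STACK_US:.0f}us "
          f"= {total:.0f}us ({share:.0f}% spent in software)")
```

With these assumptions, the same 20μs of software overhead goes from a modest fraction of the IO time on NAND to the clear majority on XPoint-class media, which is exactly the problem SPDK sets out to attack.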

Storage Performance Development Kit (SPDK)

Some companies, such as Excelero, are also addressing this CPU dependency of the IO processing software stack by using NVMe drives and clever software to offload processing from the CPU to the NVMe drives through technologies such as RDDA (refer to my post on how Excelero gets around this CPU dependency by reprogramming the MSI-X interrupts to not go to the CPU). SPDK is Intel's answer to this problem, and whereas Excelero's RDDA architecture primarily avoids CPU dependency by bypassing the CPU for IO, Intel SPDK minimises the impact on CPU & memory bus cycles during IO processing by running storage applications in user mode rather than kernel mode, thereby removing the need for costly context switching and the related interrupt handling overhead. According to http://www.spdk.io/, "The bedrock of the SPDK is a user space, polled mode, asynchronous, lockless NVMe driver that provides highly parallel access to an SSD from a user space application."
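The key idea in that quote, polled mode, can be sketched conceptually: instead of sleeping and being woken by an interrupt (a context switch per IO), a dedicated core spins on the completion queue and reaps finished IOs in batches. SPDK itself is C; the toy below is Python purely for readability, and the queue/functions are my own simplification, not SPDK's API.

```python
# Conceptual sketch of polled-mode completion handling: a dedicated
# core busy-polls the completion queue and reaps finished IOs in
# batches, with no interrupts or context switches. A simplification
# for illustration, not SPDK's actual driver.

from collections import deque

completion_queue = deque()  # stand-in for an NVMe completion queue

def submit_io(io_id: int) -> None:
    # A real device completes IOs asynchronously; here completions
    # appear immediately for the sake of the sketch.
    completion_queue.append(io_id)

def poll_completions(max_batch: int = 32) -> list:
    """Reap up to max_batch completions without blocking."""
    done = []
    while completion_queue and len(done) < max_batch:
        done.append(completion_queue.popleft())
    return done

for i in range(100):
    submit_io(i)

reaped = []
while True:  # the busy-poll loop a dedicated core would run
    batch = poll_completions()
    if not batch:
        break
    reaped.extend(batch)

print(f"reaped {len(reaped)} IOs in polled mode")
```

The trade-off is explicit: one core burns cycles spinning, but every IO avoids the interrupt and scheduler path, which is a win once the device is fast enough that IOs complete in microseconds.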

With SPDK, Intel claims you can reach around 3.6 million IOPS per single Xeon CPU core before running out of PCIe lane bandwidth, which is pretty impressive. Below is an IO performance benchmark comparing standard CentOS Linux kernel IO (running across 2 x Xeon E5-2965 2.10 GHz CPUs, each with 18 cores, plus 1-8 x Intel P3700 NVMe SSD drives) vs SPDK with a single dedicated 2.10 GHz core allocated for IO out of the 2 x Xeon E5-2965. You can clearly see the significantly better IO performance with SPDK which, despite having just a single core, scales IO throughput linearly with the number of NVMe SSD drives thanks to the lack of context switching and the related overhead.

(In addition to this testing, Jonathan also mentioned that they had done another test with off-the-shelf Supermicro HW: with SPDK & 2 dedicated cores for IO, they were able to get 5.6 million IOPS before running out of PCIe bandwidth, which was impressive.)
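A quick back-of-envelope calculation shows why PCIe bandwidth, not the CPU, becomes the ceiling at those IOPS figures. This assumes 4 KiB IOs (a typical benchmark size; the actual test parameters weren't given).

```python
# Sanity check of why PCIe bandwidth becomes the ceiling at SPDK's
# quoted IOPS figures. Assumes 4 KiB IOs, which is an assumption on my
# part; the session didn't state the IO size.

def iops_to_gbps(iops: float, io_size_bytes: int = 4096) -> float:
    """Throughput in GB/s implied by a given IOPS rate."""
    return iops * io_size_bytes / 1e9

for cores, iops in [(1, 3.6e6), (2, 5.6e6)]:
    print(f"{cores} core(s): {iops / 1e6:.1f}M IOPS "
          f"~= {iops_to_gbps(iops):.1f} GB/s of PCIe traffic")
```

At 4 KiB per IO, 3.6M IOPS already implies roughly 14.7 GB/s of data movement, so it is entirely plausible that the PCIe lanes saturate before a polled-mode core does.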

 

SPDK Applications & My Thoughts

SPDK is an end-to-end reference storage architecture & a set of drivers (C libraries & executables) to be used by OEMs and ISVs when integrating disk hardware. According to Intel's SPDK introduction page, the goal of SPDK is to highlight the outstanding efficiency and performance enabled by using Intel's networking, processing and storage technologies together. SPDK is freely available as an open source product that can be downloaded through GitHub. It also provides NVMeF (NVMe over Fabrics) and iSCSI servers built on the SPDK architecture, on top of the user space drivers, which are even capable of servicing disks over the network. Now this can potentially revolutionise how the storage industry builds its next generation storage platforms. Consider, for example, any SDS or even a legacy SAN manufacturer using this architecture to optimise the CPU on their next generation all-flash storage array. (Take the NetApp All Flash FAS platform, for example: it is known to have a ton of software-based data management services within ONTAP that currently compete for CPU cycles with IO and often have to scale down data management tasks during heavy IO processing. With the Intel SPDK architecture, ONTAP could free up more CPU cycles for data management services, and even double up on various other additional services, without any impact on critical disk IO. It's all hypothetical of course, as I'm just thinking out loud here, and it would require NetApp to run ONTAP on Intel CPUs and Intel NVMe drives etc., but it's doable & makes sense, right? I mean, imagine the day you can run "reallocate -p" during peak IO times without grinding the whole SAN to a halt :-).)
I'm probably exaggerating its potential here, but the point is that SPDK-driven IO efficiencies can apply equally to all storage array manufacturers (especially all-flash arrays), who could potentially start creating super-efficient, ultra-low-latency, NVMe-drive-based storage arrays that also include a ton of data management services that would previously have been too taxing on the CPU (think inline dedupe, inline compression, inline encryption, everything inline, etc.), on 24×7 by default rather than just during off-peak times, due to zero impact on disk IO.

Another great place to apply SPDK is within virtualisation, for VM IO efficiency. Using SPDK with QEMU as follows has resulted in some good IO performance for VMs:

 

Imagine, for example, a VMware VSAN driver built using the Intel SPDK architecture, running inside user space with a dedicated CPU core performing all IO: what would the IO performance be? VMware currently performs IO virtualisation in the kernel, but if SPDK was used and IO virtualisation for VSAN moved into user space, would it be worth the performance gain and reduced latency? (I did ask the question, and Intel confirmed there is no joint engineering currently taking place on this front between the 2 companies.) What about other VSA-based HCI solutions? Take someone like Nutanix Acropolis, where Nutanix could happily re-write IO virtualisation to happen within user space using SPDK for superior IO performance.

An Intel & Alibaba Cloud case study in which the use of SPDK was benchmarked showed the IOPS and latency improvements below.

NVMe over Fabrics is also supported with SPDK, and some use cases were discussed, specifically relating to virtualisation, where VMs tend to move between hosts; a unified NVMe-oF API that talks to both local and remote NVMe drives is now available (some parts of the SPDK stack become available in Q2 FY17).

Using SPDK seems quite beneficial for existing NAND-media-based NVMe storage, but most importantly for newer generation non-NAND media, to bring total overall latency down. However, that does mean significantly changing the architecture to process IO in user mode as opposed to kernel mode, which I presume is how almost all storage systems, Software Defined or otherwise, work today, and I am unsure whether changing them to user mode with SPDK is going to be a straightforward process. It would be good to see some joint engineering, or other storage vendors evaluating SPDK, to see whether the claimed latency & IO improvements are realistic in complex storage systems.

I like the fact that Intel has made SPDK open source to encourage others to freely utilise (& contribute back to) the framework, but I guess what I'm not sure about is whether it's tied to Intel NVMe drives & Intel processors.

If anyone wants to watch the recorded videos of our sessions from #SFD12, the links are as follows:

  1. Jonathan’s session on SPDK
  2. Tony’s session on RDT

Cheers

Chan

#SFD12 #TechFieldDay @IntelStorage

VMware vExperts 2017 Announced!

The latest batch of VMware vExperts for 2017 was announced last week, on the 8th of February, and I'm glad to say I've made the cut for the 3rd year, which was fantastic news personally. The vExpert programme is VMware's global evangelism and advocacy programme and is held in high regard within the community due to the expertise of the selected vExperts and their contribution towards enabling and empowering customers around the world in their virtualisation and software defined datacentre projects through knowledge sharing. Candidates are judged on their contribution to the community through activities such as community and personal blogs, participation in events, producing tools etc., and, in general, on maintaining their expertise in related subject matters. vExperts typically get access to private betas, free licenses, early access product briefings, exclusive events, free access to VMworld conference materials, and other opportunities to directly interact with VMware product teams, which is totally awesome and, in return, helps us feed the information back to our customers.

It's been a great honour to be recognised by VMware again with this prestigious title, and I'd like to thank VMware as well as congratulate the other fellow vExperts who have also made it this year. Let's keep up the good work!

The full list of VMware vExperts 2017 can be found below

https://communities.vmware.com/vexpert.jspa

My vExpert profile link is below

https://communities.vmware.com/docs/DOC-31313

Cheers

Chan

 

New Dedicated VSAN Management Plugin For vROps Released

Some of you may have seen the tweets and the article from legendary Duncan Epping here about the release of the new VMware VSAN plugin for vROPS (vRealize Operations Management Pack for vSAN version 1.0)

If you've ever deployed the previous VSAN plugin for vROps, you might know that it was not a dedicated plugin for VSAN alone, but a vRealize Operations Management Pack for Storage Devices as a whole, which included not just visibility into VSAN but also legacy storage stats such as FC, iSCSI and NFS for legacy storage units (those connected to Cisco DCNM or Brocade fabric switches).

This vROps plugin for vSAN, however, is the first dedicated plugin for VSAN (hence version 1.0) on vROps. According to the documentation, it has the following features:

  • Discovers vSAN disk groups in a vSAN datastore.
  • Identifies the vSAN-enabled cluster compute resource, host system, and datastore objects in a vCenter Server system.
  • Automatically adds related vCenter Server components that are in the monitoring state.

How to Install / Upgrade from the previous MPSD plugin

  1. Download the management pack (.pak file)
    1. https://solutionexchange.vmware.com/store/products/vmware-vrealize-operations-management-pack-for-vsan
  2. Login to the vROps instance as the administrator / with administrative privileges and go to Administration -> Solutions
  3. Click add (plus sign), select the .pak file, and tick the 2 check boxes to replace the pack if already installed and to reset default content. Accept any warnings and click Upload.
  4. Once the upload is complete and staged, verify the signature validity and click next to proceed               
  5. Click next and accept the EULA and proceed. The management plugin will start to install.
  6. Now select the newly installed management plugin for VSAN and click configure. In this window, connect to the vCenter server (previously configured MPSD credentials cannot be reused). When creating the credentials, you need to specify an admin account for the vCenter instance. The connection can be verified using the Test button.  
  7. Once connected, wait for the data collection from VSAN cluster to complete and verify collection is showing
  8. Go to Home and verify that the VSAN dedicated dashboard items are now available on vROps               
  9. By default, there will now be 3 vSAN-specific dashboards available under default dashboards
    1. vSAN Environment Overview – This section provides some vital high-level information on the vSAN cluster, including its type, total capacity, used capacity, any congestion, and average latency figures, along with any active alerts on the vSAN cluster. As you can see, I have a number of alerts due to using non-compliant hardware in my vSAN cluster.
    2. vSAN Performance
      1. This default dashboard provides various performance stats for the vSAN cluster and datastores as well as the VMs residing on them. You can also check performance figures such as VM latency and IOPS levels based on the VMs you select in the tile view, plus the trend forecast, which I think is going to be really handy.
      2. Similarly, you can see performance at the vSAN disk group level, which shows information such as write buffer and read cache performance levels, current as well as forecasted – these are new and were not previously accessible easily.
      3. You can also view performance at the ESXi host level, which shows basic information such as current CPU and RAM utilisation, including current and future (forecast) trend lines in true vROps style, which is going to be really well received. Expect the content available on this page to be significantly extended in future iterations of this management pack.
    3. Optimize vSAN Deployments – This page provides a high-level comparison of vSAN and non-vSAN environments, which would be especially handy if you have vSAN datastores alongside traditional iSCSI or NFS datastores, to see, for example, how IOPS and latency compare between VMs on vSAN and an NFS datastore presented to the same ESXi server (I have both)
  10. Under Environment -> vSAN and Storage Devices, additional vSAN hierarchy information such as vSAN-enabled clusters, fault domains (if relevant), disk groups and witness hosts (if applicable) is now visible for monitoring, which is really handy.
  11. In the inventory explorer, you can see the list of vSAN inventory items that data is being collected for.
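The trend forecasts overlaid on these dashboards come from vROps's own analytics engine, which is far more sophisticated than a straight line. Purely to illustrate the general idea of projecting a metric forward from recent samples, here is a minimal least-squares sketch; the hourly IOPS values are made up for the example.

```python
# Minimal illustration of a linear trend forecast, similar in spirit to the
# trend lines vROps draws on its vSAN dashboards. The sample IOPS values
# below are invented; vROps uses its own analytics, not this.

def linear_forecast(samples, steps_ahead):
    """Fit y = a + b*x by least squares and project steps_ahead future points."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return [a + b * (n + i) for i in range(steps_ahead)]

# Hypothetical hourly IOPS samples trending upward
iops = [1200, 1250, 1310, 1340, 1420]
print(linear_forecast(iops, 3))  # [1463.0, 1516.0, 1569.0]
```

In practice you would feed this kind of function with metric samples pulled from the vROps API rather than hard-coded numbers.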

All in all, this is a welcome addition, and it will only continue to be improved, with new monitoring features added in future versions. I really like the dedicated plugin factor as well as the nice default dashboards included with this version, which will no doubt help customers truly use vROps as a single pane of glass for all things monitoring on the SDDC, including vSAN.

Cheers

Chan

VMware Storage and Availability Technical Documents Hub


This was something I came across accidentally, so I thought it may be worth a very brief post as I found some useful content there.

The VMware Storage and Availability Technical Documents Hub is an online repository of technical documents and "how to" guides, including videos, for all storage and availability products within VMware. Namely, it has some very useful content for 4 VMware product categories (as of now):

  • VSAN
  • SRM
  • Virtual Volumes
  • vSphere Replication

For example, under the vSAN section, there is a whole heap of vSAN 6.5 content, such as technical information on what's new with vSAN 6.5 and how to design and deploy vSAN 6.5, as well as some handy videos on how to configure some of those features too. There also seems to be some advanced technical documentation around vSAN caching algorithms and some deployment guides, which I thought was quite handy.


Similarly, there is some good technical documentation around vVols, including an overview and how to set up and implement vVols. In comparison, the content for the other products is a little light next to vSAN, but I'm sure more will be added as the portal gets developed further.

All the information is presented in an HTML5 interface that is easy to navigate, with a handy print-to-PDF option on every page if you want to download the content for offline reading, which is cool.

I'd recommend checking out this documentation hub, especially if you use any storage solution from VMware such as vSAN and would like to see most of the relevant technical documentation all in a single place.

Cheers

Chan

VSAN, NSX on Cisco Nexus, vSphere Containers, NSX Future & a chat with VMware CEO – Highlights Of My Day 2 at VMworld 2016 US

In this post, I will aim to highlight the various breakout sessions I attended during day 2 at VMworld 2016 US, the key items / notes / points learnt, and a few other interesting things I was privy to during the day that are worth mentioning, along with my thoughts on them…!!

Day 2 – Breakout Session 1 – Understanding the availability features of VSAN


  • Session ID: STO8179R
  • Presenters:
    • GS Khalsa – Sr. Technical Marketing manager – VMware (@gurusimran)
    • Jeff Hunter – Staff Technical Marketing Architect – VMware (@Jhuntervmware)

In all honesty, I wasn't quite sure why I signed up to this breakout session, as I know vSAN fairly well, including its various availability features – I've been testing and analysing its architecture and performance since vSAN was first launched, and then designing and deploying vSAN solutions on behalf of my customers for a while. However, having attended the session, it reminded me of a key fact that I try never to forget: "you always learn something new", even when you think you know most of it.

Anyway, about the session itself: it was good and was mainly aimed at beginners to vSAN, but I did manage to learn a few new things as well as refresh my memory on a few other facts regarding vSAN architecture. The key new ones I learnt are as follows

  • vSAN component statuses (as shown within the vSphere Web Client) and their meanings
    • Absent
      • This means vSAN thinks the said component will probably return. Examples are,
        • Host rebooted
        • Disk pulled
        • Network partition
        • Rebuild starts after 60 mins
      • When an item is detected / marked as absent, vSAN typically waits for 60 minutes before a rebuild is started, in order to allow a temporary failure to rectify itself
        • This means, for example, that pulling disks out of vSAN will NOT trigger an instant rebuild / secondary copy…etc., so it won't be an accurate test of vSAN
    • Degraded
      • This typically means the device / component is unlikely to return. Examples include,
        • A Permanent Device Loss (PDL) or a failed disk
      • When a degraded item is noted, a rebuild is started immediately
    • Active-Stale
      • This means the device is back online from a failure (i.e. was absent) but the data residing on it is NOT up to date.
  • vSAN drive degradation is proactively logged in the following log file
    • vmkernel.log, indicating LSOM errors
  • Dedupe and compression during drive failures
    • During a drive failure, de-duplication and compression (all-flash only) are automatically disabled – I didn't know this before
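The absent-vs-degraded behaviour above can be sketched as a tiny decision function. This is a toy model of the logic as described in the session (the 60-minute figure is the default repair delay mentioned above), not actual vSAN code.

```python
# Toy model of the vSAN component-state logic described above: an "absent"
# component only triggers a rebuild after a 60-minute grace period, while a
# "degraded" component (e.g. a failed disk / PDL) triggers one immediately.
# Purely illustrative -- not vSAN's actual implementation.

REBUILD_DELAY_MINUTES = 60  # default delay before rebuilding absent components

def should_rebuild(state, minutes_since_failure):
    if state == "degraded":        # device unlikely to return
        return True
    if state == "absent":          # device will probably return
        return minutes_since_failure >= REBUILD_DELAY_MINUTES
    if state == "active-stale":    # back online but stale; needs a resync, not a rebuild
        return False
    raise ValueError(f"unknown state: {state}")

# Pulling a disk marks its components absent: no rebuild for the first hour
print(should_rebuild("absent", 10))    # False
print(should_rebuild("absent", 60))    # True
print(should_rebuild("degraded", 0))   # True
```

This is also exactly why yanking a disk is not a fair availability test: for the first hour nothing gets rebuilt.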

 

Day 2 – Breakout Session 2 – How to deploy VMware NSX with Cisco Nexus / UCS Infrastructure

  • Session ID: NET8364R
  • Presenters:
    • Paul Mancuso – Technical Product Manager (VMware)
    • Ron Fuller – Staff System Engineer (VMware)

This session was about a deployment architecture for NSX which is becoming increasingly popular: how to design and deploy NSX on top of Cisco Nexus switches with ACI as the underlay network and Cisco UCS hardware. A pretty awesome session and a really popular combination too. (FYI – I've been touting that these two solutions are better together for about 2 years now, and it's really good to see both companies recognising this and working together on providing guidance like this.) Outside of this session I also found out that the Nexus 9K switches will soon have OVSDB support so that they can be used as ToR switches with NSX (a hardware VTEP to bridge VXLANs to VLANs for communication with the physical world), much like the Arista switches with NSX – great news for customers indeed.
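Conceptually, the hardware VTEP function mentioned above boils down to a bridging table between VXLAN Network Identifiers (VNIs) on the NSX side and VLAN IDs on the physical side. A minimal sketch of that mapping, with made-up IDs:

```python
# Minimal illustration of what a hardware VTEP (e.g. a ToR switch bridging
# NSX logical switches into the physical world) conceptually does: map a
# VXLAN Network Identifier (VNI) to a physical VLAN ID, and back.
# The IDs below are invented for the example.

vni_to_vlan = {
    5001: 100,   # NSX logical switch VNI 5001 bridged to physical VLAN 100
    5002: 200,
}
vlan_to_vni = {vlan: vni for vni, vlan in vni_to_vlan.items()}

def bridge_to_physical(vni):
    """Return the VLAN a frame arriving on this VNI should be tagged with."""
    return vni_to_vlan[vni]

print(bridge_to_physical(5001))  # 100
```

The real switch does this in hardware per frame, of course; the point is simply that each logical switch gets pinned to one VLAN on the physical fabric.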


I'm not going to summarise the content of this session but would instead like to point people at the following 2 documentation sets from VMware, which cover everything this session was based on – quite simply, everything you need to know when designing NSX solutions together with Cisco ACI using Nexus 9K switches and Cisco UCS server hardware (blades and rack mounts).

One important thing to keep in mind for all Cisco folks though: the Cisco Nexus 1000V is NOT supported for NSX. All NSX-prepared clusters must use the vDS. I'm guessing this was very much expected and is probably a commercial decision rather than a technical one.

Personally, I am super excited to see VMware and Cisco working together again (at least on the surface) when it comes to networking, and that both companies have finally realised the use cases for ACI and NSX are somewhat complementary (i.e. ACI cannot do most of the clever things NSX delivers in the virtual world, including public clouds, and NSX cannot do any of the clever things ACI can offer a physical fabric). So watch this space for more key joint announcements from both companies…!!

Day 2 – Breakout Session 3 – Containers for the vSphere admin


  • Session ID: CNA7522
  • Presenters:
    • Ryan Kelly – Staff System Engineer (VMware)

A session about how VMware approaches the massive buzz around containerisation, through their own vSphere-integrated solution (VIC) as well as a brand new hypervisor platform designed from the ground up with containerisation in mind (the Photon platform). This was more of a refresher session for me than anything else, and I'm not going to summarise all of it; instead, I will point you to the dedicated post I've written about VMware's container approach here.

Day 2 – Breakout Session 4 – The architectural future of Network Virtualisation


  • Session ID: NET8193R
  • Presenters: Bruce Davie – CTO, Networking (VMware)

Probably the most inspiring session of day 2, as Bruce went through the architectural future of NSX, describing what the NSX team within VMware are focusing on as the key improvements and advancements of the NSX platform. A summary of the session follows.

  • NSX is the bridge from solving today's IT requirements to solving tomorrow's
    • Brings remote networking closer easily (i.e. stretched L2)
    • Programmatically (read: automatically) provisioned on application demand
    • Security ingrained at the kernel level and at every hop outwards from the applications
  • Challenges NSX is trying to address (future)
    • Developers – the need to rapidly provision and destroy complex networks as a prerequisite for applications demanded by developers
    • Micro-services – container networking and security
    • Containers
    • Unseen future requirements
  • Current NSX Architecture
    • Cloud consumption plane
    • Management plane
    • Control plane
    • Data plane
  • Future Architecture – This is what the NSX team is currently looking at for NSX's future.
    • Management plane scale-out
      • The management plane now needs to be highly available in order to constantly take a large number of API calls from cloud consumption systems such as OpenStack, vRA, etc. – developer- and agile-development-driven workflows
      • Using and scaling persistent memory for the NSX management layer is also being considered – this is to keep API requests in persistent memory in a scalable way, providing read and write scalability and durability
      • Being able to take consistent NSX snapshots – point-in-time backups
      • A distributed log capability is going to be key in providing this management plane scale-out, whereby distributed logs that store all the API requests coming from cloud consumption systems are synchronously stored across multiple nodes, providing up-to-date visibility of the complete state to all nodes, while also increasing performance through management node scale-out
    • Control plane evolution
      • Heterogeneity
        • Currently vSphere & KVM
        • Hyper-V support coming
        • Control plane will be split in to 2 layers
          • Central control plane
          • Local control plane
            • Data plane (Hyper-V, vSphere, KVM) specific intelligence
    • High performance data plane
      • Uses the Intel DPDK – a technology that optimizes packet processing on Intel CPUs
        • Packet switching using x86 chips will be the main focus going forward, and new technologies such as DPDK will only make this better and better
        • DPDK capabilities are best placed to optimise iterative processing rather than frequent context switching
        • NSX has this optimisation code built in to its components
          • Using DPDK-capable CPUs in the NSX Edge rack ESXi servers is potentially a good design decision
  • Possible additional NSX use cases being considered
    • NSX for public clouds
      • NSX OVS and an agent are deployed in-guest – a technical preview of this solution was demoed by Pat Gelsinger during the opening keynote on day 1 of VMworld.
    • NSX for containers
      • 2 vSwitches
        • 1 in guest
        • 1 in Hypervisor
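The distributed-log idea mentioned under management plane scale-out can be sketched as synchronously appending each incoming API request to every management node's log before acknowledging it, so all nodes hold the same up-to-date state. A toy model of that idea (not NSX internals):

```python
# Toy sketch of the distributed-log concept described above: each API call
# from the cloud consumption layer (OpenStack, vRA, etc.) is synchronously
# appended to every management node's log before it is acknowledged, so all
# nodes share a consistent view. Illustrative only -- not NSX's implementation.

class ManagementNode:
    def __init__(self, name):
        self.name = name
        self.log = []          # ordered record of accepted API requests

    def append(self, entry):
        self.log.append(entry)

class ManagementCluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_api_call(self, request):
        # Synchronous replication: only acknowledge once every node has the entry
        for node in self.nodes:
            node.append(request)
        return "ACK"

cluster = ManagementCluster([ManagementNode(f"mgmt-{i}") for i in range(3)])
cluster.handle_api_call("POST /logical-switches")
print([len(n.log) for n in cluster.nodes])  # [1, 1, 1]
```

A real implementation would use a consensus protocol to keep the replicas consistent through node failures; the point here is just that every node sees every request.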

 

My thoughts

I like what I heard from Bruce about the key development focus areas for NSX, and it looks like all of us, partners and customers of VMware NSX alike, are in for some really cool, business-enabling treats from NSX going forward – which kind of reminds me of when vSphere first came out :-). I am extremely excited about the opportunities NSX presents to remove what is often the biggest bottleneck enterprise or corporate IT teams have to overcome to simply get things done quickly: the legacy network. Networks in most organisations are still very much managed by an old-school-minded networking team that does not necessarily understand the convergence of networking with the other silos in the data center, such as storage and compute, and most importantly the convergence with modern-day applications. It is a fact that software defined networking will bring to networking the efficiency that vSphere brought to compute (want examples of how this SDN efficiency is playing out today? Look at AWS and Azure as the 2 biggest use cases). The ability to spin up infrastructure along with a "virtual" networking layer significantly increases the convenience for businesses consuming IT (no waiting around for weeks for your networking team to set up new switches with some new VLANs…etc.) and significantly decreases the go-to-market time when launching new products / money-making opportunities. All in all, NSX will act as a key enabler for any business, regardless of size, to take an agile approach to IT and even embrace cloud platforms.

From my perspective, NSX will bring the same public-cloud-inspired advantages to customers' own data centers. Not only that, it will go a step further by effectively converting your WAN into an extended LAN, bridging your LAN with a remote network / data center / public cloud platform to create something like a LAN over WAN (trademark belongs to me :-)) which can be automatically deployed and secured (encryption) while also being very application-centric. In other words, app developers can request networking configuration through an API as part of the app provisioning stage, automatically applying all the networking settings, including creating the various network segments, the routing in between, and the firewall requirements…etc. Such networking can be provisioned all the way from a container instance hosting part of the app (i.e. a DB server instance running as a container service) to a public cloud platform hosting the other parts (i.e. web servers).

I’ve always believed that the NSX solution offering is going to be hugely powerful given its various applications and use cases and natural evolution of the NSX platform through the focus areas like those mentioned above will only make it an absolute must have for all customers, in my humble view.

 

Day 2 – Meeting with Pat Gelsinger and Q&A’s during the exclusive vExpert gathering


As interesting as the breakout sessions during the day were, this was by far the most significant couple of hours of the day for me. As a vExpert, I was invited to an off-site, vExpert-only gathering held at the Vegas Mob Museum, which happened to include VMware CEO Pat Gelsinger as the guest of honour. Big thanks to the VMware community team, led by Corey Romero (@vCommunityGuy), for organising this event.

This was an intimate gathering of about 80-100 VMware vExperts who were present at VMworld, meeting up at an off-site venue to discuss things and giving everyone a chance to meet the VMware CEO and ask him direct questions – something you wouldn't normally get as an ordinary person, so it was pretty good. Pat was pretty awesome: he gave a quick speech about the importance of the vExpert community to VMware, followed by a Q&A session where we all had a chance to ask him questions on various fronts. I myself started the Q&A by asking him the obvious question, "What would be the real impact on VMware once the Dell-EMC merger completes?", and Pat's answer was pretty straightforward. As Michael Dell (who happened to come on stage during the opening day keynote speech) said himself, Dell is pretty impressed with the large ecosystem of VMware partners (most of whom are Dell competitors) and will keep that ecosystem intact going forward. Pat echoed the same message, while also hinting that Dell hardware will play a key role in all VMware product integrations, including using Dell hardware by default in most pre-validated and hyper-converged solution offerings going forward, such as using Dell rack mount servers in VCE solutions…etc. (In Pat's view, Cisco will still play a big role in blade-based VCE solution offerings and is unlikely to walk away from it all just because of the Dell integration, given the substantial revenue that business brings to Cisco.)

If I read between the lines correctly (these may be incorrect interpretations on my end), he also alluded that the real catch of the EMC acquisition, as far as Dell was concerned, was VMware. Pat explained that most of the financing charges behind the capital raised by Dell will need to be paid through the EMC business's annual run-rate revenue (which, by the way, is roughly the same as the financing interest), so in a way Dell received VMware for free, and given the large ecosystem of partners all contributing towards VMware's revenue, it is very likely Dell will continue to let VMware run as an independent entity.

There were other interesting questions from the audience and some of the key points made by Pat in answering those questions were,

  • VMware is fully committed to increasing NSX adoption by customers and sees NSX as a key revenue generator due to what it brings to the table – I agree 100%
  • VMware is working on providing NSX customers a capability for networking similar to what vMotion is for compute, as one of the NSX business unit's key goals. Pat mentioned that engineering have in fact figured this out already and are testing it internally, but it is not quite production ready.
  • In relation to VMware's Cross-Cloud Services offering (announced by Pat during the event's opening keynote speech), VMware is also working on offering NSX as a service – though the details were not discussed, I'm guessing this would be through the IBM and vCAN partners
  • Hinted that a major announcement on the VMware Photon platform (one of the VMware vSphere container solutions) will take place during VMworld Barcelona – I've heard the same from the BU's engineers too and look forward to the Barcelona announcements
  • VMware's own cloud platform, vCloud Air, WILL continue to stay focused on targeted use cases, while the future scale of VMware's cloud business is expected to come from the vCAN partners (hosting providers that use VMware technologies and as a result are part of the VMware vCloud Air Network, i.e. IBM)
  • Pat also mentioned the focus VMware will have on IoT; to this effect, he mentioned a custom IoT solution VMware have already built or are working on (I cannot quite remember which it was) for monitoring health devices through the Android platform – I'm guessing this is through their Project Ice and LIOTA (Little IoT Agent) platforms, which already had similar device monitoring solutions being demoed in the Solutions Exchange during VMworld 2016. I mentioned that in my previous post here

It was really good to have had the chance to listen to Pat up close and to be able to ask direct questions and get frank answers – a fine way to end a productive and educational day for me at VMworld 2016 US.

Image credit goes to VMware..!!

Cheers

Chan

 

 

VMworld 2016 US – Key Announcements From Day 2

A quick summary of this morning's keynote speech at VMworld 2016 US and a few announcements.

Opening Keynote Speech

The morning keynote speech was hosted by Sanjay Poonen (@spoonen), who heads up the EUC BU within VMware. Sanjay's speech was pretty much in line with the general VMware focus areas mentioned in yesterday's keynote by Pat Gelsinger: a complete solution that enables the customers of today's enterprises and corporates to use any device, any app and any cloud platform as they see fit, without having to worry about workload mobility or cross-platform management and monitoring.

While yesterday's session was more focused on the server side of things, Sanjay's message today was, predictably, focused to a greater degree on End User Computing. The initial messaging was around the VMware Workspace ONE suite.

The Workspace ONE suite with VMware Identity Manager appears to be focusing more and more on the following 3 key areas, which are central to today's enterprise IT.

  • Apps and identity
  • Desktop & Mobile
  • Management & Security

Workspace ONE integration with mobile devices to push out corporate apps through an Apple App Store-like interface was demoed, emphasizing the slick capabilities of the solution, which really appears to be ready for primetime now. He also demoed the conditional access capabilities within the Workspace suite that prevent data sharing between managed and unmanaged apps. Conditional access can also be extended out to NSX to utilise micro-segmentation hand in hand, providing even tighter security, which is quite handy.

Stephanie Buscemi, EVP at Salesforce, came on stage to talk about how they use the VMware Workspace suite to empower their sales people to work on the go, which was pretty cool I thought, though there was a little marketing undertone to the whole pitch.

  • My take: Personally, I don't cover EUC offerings that much myself, though I have a good awareness of VMware's digital workspace strategy and have also had hands-on design experience with Horizon View back in the day. However, I have seen the EUC offering from VMware getting better and better every day over the last 5-odd years, and dare I say, right now it's one of the best solutions out there for most customers, if not the best, given its feature set and its integration with other VMware and non-VMware components in the back-end data center and cloud. If you are looking at any EUC solutions, this should be at the top of your list to investigate / evaluate.

Endpoint Security

VMware TrustPoint, powered by Tanium, was showcased, along with its integration with AirWatch to provide a mobile device management solution together with a comprehensive security solution that can track devices and their activities in real time (no database full of old device activity info) and apply security controls in real time too. This looked a very attractive proposition given the security concerns of today's enterprises, and I can see where this would add value, provided the costs stack up.

Free VMware Fusion and Workstation License Announcement

VMware also announced today the availability of free VMware Fusion and Workstation licenses to all VMworld attendees through the VMworld 2016 app (already claimed mine) – pretty cool, huh?

Cloud Native Applications

Kit Colbert, Cloud Native CTO at VMware, spoke about the challenges of using containerised apps in enterprise environments, which currently lack a comprehensive management solution. Having been looking at containerisation myself and its practical use for the majority of ordinary customers, I can relate to that, especially when you compare managing applications based on containers like Docker to legacy applications that run on a dedicated OSE (Windows, Linux…etc), which can be managed, tracked and monitored with session and data persistence – something a container instance lacks to the same level without 3rd party components.

Today, a couple of new additional features were announced for VIC, as follows (if you are new to VIC, refer to my intro blog post here)

  • New: Container registry
  • New: Container management portal


vSphere Integrated Containers beta programme is also now available if you want to have a look at http://learn.vmware.com/vicbeta

 

VMware Integrated OpenStack (VIO)

Also, VIO 3.0 was officially announced today by Kit. I was privy to this information beforehand through a vExpert-only briefing on the same, but was not able to disclose anything until now due to an embargo.

VIO is a VMware-customised distro of OpenStack, and the slide below should give you an intro for those of you who aren't all that familiar with VIO.


Running native OpenStack is a bit of a nightmare, as it requires lots of skills and resources, which restricts its proper production use to large-scale organisations with plenty of technical expertise and resources. Based on my experience, lots of customers that I know who initially started out with ambitious (vanilla) OpenStack projects have decided to abandon them halfway through, due to complexity…etc., and switch back to vSphere. VIO attempts to solve this somewhat by helping customers run OpenStack with a VMware flavour, making things easier for mass customer adoption.

The VIO announcement was the release of VIO 3.0, which has the following key features / improvements

  • Mitaka based
    • The VIO 3.0 distribution is now based on the latest OpenStack release (Mitaka)
    • Leverages the latest features and enhancements of the Mitaka release
      • Improved day-to-day experience for cloud admins and operators.
      • Simplified configuration for the Nova compute service.
      • The streamlined Keystone identity service is now a one-step process for setting up the identity management features of a cloud network.
      • Keystone now supports multiple backends, allowing local authentication and AD accounts simultaneously.
      • Heat's convergence engine is optimized to handle larger loads and more complex actions for horizontal scaling, with improved performance in stateless mode.
      • The enhanced OpenStack Client provides a consistent set of calls for creating resources, no longer requiring you to learn the intricacies of each service API.
      • Support for software development kits (SDKs) in various languages.
      • New "give me a network" feature capable of creating a network, attaching a server to it, assigning an IP to that server, and making the network accessible, in a single action
  • Compact VIO control plane
    • The VIO management control plane has been optimized and re-architected to run in a compact architecture
      • Reduces the infrastructure and costs required to run an OpenStack cloud
      • Ideal for multiple small deployments
      • Attractive in relaxed-SLA scenarios
      • Database backed up in real time: no data loss
    • Slimmer HA architecture
      • Reduced footprint on the management cluster
      • Full HA: no service downtime
      • Database replication: no data loss
      • 6000+ VMs
      • 200+ hypervisors
  • Import existing vSphere workloads
    • Existing vSphere VMs can be imported and managed via the VIO OpenStack APIs
      • Quickly import vSphere VMs into VIO
      • Start managing vSphere VMs through standard OpenStack APIs
    • Quickly start consuming existing VMs through OpenStack
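The "give me a network" feature above collapses what used to be several separate Neutron calls (create network, create subnet, attach the server, wire up external access) into one action. As a rough illustration of that idea only – the class below is a toy stand-in, not the OpenStack SDK – it might look like this from a consumer's point of view:

```python
# Rough illustration of Mitaka's "give me a network" feature described above:
# a single action that creates a network, attaches a server, assigns it an IP,
# and makes the network reachable -- steps that previously required separate
# API calls. ToyCloud and its methods are invented for this sketch.

import ipaddress

class ToyCloud:
    def __init__(self, cidr="10.0.0.0/24"):
        self.hosts = ipaddress.ip_network(cidr).hosts()  # usable addresses
        self.networks = {}

    def give_me_a_network(self, server_name):
        """Single action: network + attachment + IP + external access."""
        net = {
            "server": server_name,
            "ip": str(next(self.hosts)),   # assign the next free address
            "external_access": True,       # router/gateway wired up
        }
        self.networks[server_name] = net
        return net

cloud = ToyCloud()
print(cloud.give_me_a_network("web-01"))
```

The real feature is exposed through Neutron's auto-allocated topology support; the value is the same either way: one call instead of a sequence the caller has to orchestrate and roll back on failure.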

 

The Nike CTO, who is a VIO customer, came on stage to discuss how Nike deployed a large greenfield OpenStack environment using VMware Integrated OpenStack (VIO), plus an EUC solution at all Nike outlets / shops using AirWatch, which was a good testament to customer confidence, though it did have a little marketing undertone to it all.

 

NSX

The head of the NSX business unit within VMware highlighted the key advancements NSX has made and the 400% YoY growth in adoption from fee-paying customers deploying NSX to benefit from micro-segmentation (through the distributed firewall capability) and automation and orchestration. The NSX roadmap extends far beyond its current use cases, and suffice it to say that NSX will play a big part as an enabler for customers to freely move their workloads from one place (i.e. on-premise) to a public cloud (i.e. AWS) through the dynamic extension of L2 adjacency and other LAN services, transforming the WAN into an extended LAN.

To this effect, VMware also announced the availability of a free NSX pre-assessment, intended to enable customers to employ the Assess -> Plan -> Enforce -> Monitor approach to NSX adoption.

 

VSAN

Yanbing Li, the vSAN business unit head, came on stage and discussed the huge customer demand for vSAN, which currently stands at over 5000 fee-paying customers using vSAN in production as the preferred storage for vSphere. The following roadmap items were also mentioned for vSAN

  • VSAN is the default supported storage platform for VIO and Photon.
  • Intelligent performance analytics & policies in VSAN for proactive management
  • Fully integrated software-defined encryption for vSAN

There are a couple of other new features coming out soon which I am fully aware of but which were not announced during VMworld 2016 US, so I'm guessing they'll be announced during the Barcelona event? (I cannot disclose them until then, of course :-))

All in all, not a large number of new product or feature announcements on day 2. But the key message is that NSX and vSAN are key focus areas (we already knew this) and that VIC and VIO will continue to be improved, which is good to see.

 

Slide credit goes to VMware

 

Cheers

Chan

 

 

vSphere Integrated Containers – My thoughts


During VMworld 2016, one thing that struck me was the continued focus VMware appears to have on containerisation. I have been looking at containerisation over the last year and a half with interest, to understand the concept, the current capabilities of the available platforms and their practical use for the typical customer. I was also naturally keen on what companies such as VMware and Microsoft have to offer on the same front. VMware announced a number of initiatives, such as vSphere Integrated Containers and the Photon platform, during VMworld 2015 as their answer to containerisation, and having been looking at their solutions, and also having seen and listened to various speakers / engineers / evangelists during the VMworld 2016 US event, it emphasised the need for me to venture further into containerisation, and especially VMware's solutions for it. So I'm going to begin with a quick intro blog post about one of VMware's approaches to containers and my thoughts on the solution. I will aim to write future posts digging deeper into the architecture and the deployment aspects of it…etc.

On the container front, VMware's strategy is focused on 2 key solution offerings: vSphere Integrated Containers and the Photon platform. While the Photon solution is not yet quite ready for production deployment in my view, it is aimed at greenfield customers who do not have legacy vSphere deployments and are starting out afresh. VIC, on the other hand, is available today and is specifically aimed at existing vSphere customers, hence it is the main focus of this post.

vSphere Integrated Containers (VIC)

This is the containerisation solution for existing VMware vSphere customers and has been designed to extend vSphere capabilities into the containerised world (or vice versa, depending on how you look at it). It is predominantly aimed at existing vSphere customers who want to jump onto or explore containerised app development for production use.

For those of you who are new to VIC, here’s a quick intro.

In addition to the typical vSphere components, the VIC solution itself consists of 3 main components:

  1. VIC Engine – A container runtime for vSphere which is deployed onto ESXi. This is an open-source development and is available on GitHub. It allows developers familiar with Docker container development to deploy containers alongside existing VMs on an ESXi / vSphere platform, directly manageable from the vSphere UI (Web Client). The VIC Engine endpoint is referred to as the VCH (Virtual Container Host) and is backed by a vSphere resource pool, typically within a cluster. It also contains a copy of the container images, which are mapped as VMDKs on traditional vSphere storage such as a VSAN datastore. vch-endpoint
  2. Harbor – An enterprise-class registry service that stores & distributes Docker images, adding security, identity and management features for the enterprise. It can be used as a local, on-premises Docker repository, so that enterprises using Docker containers won't have to worry about the security concerns of using the public Docker repository over the internet
  3. Admiral – A scalable, lightweight container management platform used to deploy and manage container-based applications
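Because the VCH exposes a standard Docker API endpoint, developers can drive it with the stock Docker client. A minimal hypothetical session might look like the sketch below (the endpoint hostname and port are made-up placeholders; a real VCH deployment would typically secure the endpoint with TLS):

```shell
# Point the standard Docker client at a hypothetical VCH endpoint.
# (Hostname/port are illustrative only, not values from a real deployment.)
export DOCKER_HOST=tcp://vch01.lab.local:2375

docker info          # reports details of the VCH rather than a single Docker host
docker run -d nginx  # on a VCH, this spins up a new "just enough" VM (JeVM)
docker ps            # each entry maps to a container VM visible in vSphere
```

The point is that the developer workflow is unchanged: the VCH translates these Docker API calls into the corresponding vSphere actions behind the scenes.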

Together with vSphere, VIC provides customers with the ability to deliver a container-based solution in a production environment without having to build a dedicated environment exclusively for containers.

The main difference between a native container approach, such as native Docker on Linux, and VIC is that:

  • Docker on Linux:  Docker utilises a native Linux kernel feature called namespaces. While more information can be found here, Docker on Linux relies on spawning multiple namespaces / containers within the same Linux server instance, so spinning up an application service (that runs inside a container) is super fast (say, compared to powering on a VM with a full-blown OS, which takes time to boot before launching the application). The same applies when you stop an application service (it simply stops the underlying container on the Linux kernel). Both of these operations are executed in memory. Containers
  • VMware Integrated Containers:  The container instance runs in a dedicated, micro OSE (Operating System Environment) called a JeVM (Just Enough VM), which consists of a minimalistic Linux kernel that is just sufficient to run a container instance. This kernel is derived from VMware's Project Photon. The Photon platform itself is separate from the VIC solution and is the second approach VMware is taking to containers and Cloud Native Applications, specifically aimed at greenfield deployments where you do not have an existing vSphere stack. In the case of VIC, it is important to remember that the Photon code used within this micro VM consists of the minimal requirements to run a Docker container instance (a Linux kernel and a few additional supporting resources, giving it a minimal footprint). The JeVM also uses the Instant Clone feature available in vSphere 6.0 to quickly spin up JeVMs for container instantiation (upon "docker run", for example), so containers start up and shut down at near-native speeds compared to a native container on Linux. In return for this fat-client approach, the customer gets a management experience for these container environments similar to that of their legacy infrastructure, as existing VMware tools such as vROps, NSX, etc. are all compatible with them (no such compatibility exists when running native Linux containers with Docker)
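To make the namespace point above concrete, the kernel primitive Docker builds on can be demonstrated with util-linux's `unshare`. This is a rough sketch only: it typically requires root, and exact flag behaviour varies by distribution.

```shell
# Create a new PID namespace (with a private /proc) and run ps inside it.
# The forked child becomes PID 1 of the new namespace, much like a
# container's init process - no guest OS boot is involved, which is why
# container creation is near-instant compared to powering on a VM.
sudo unshare --fork --pid --mount-proc ps aux
# ps now reports only the processes inside the new namespace (essentially
# just itself), rather than everything running on the host.
```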

VIC3

The typical VIC architecture looks like below

VIC2

At the foundation of VIC is vSphere, the same infrastructure that customers have standardised on for all applications, from test/dev to business-critical apps. VIC adds a graphical plug-in to the Web Client for management and monitoring. The Virtual Container Host provides a Docker API endpoint backed by a vSphere resource pool, rather than a single VM or dedicated physical host. The Instant Clone template runs the Photon OS Linux kernel. Developers interact through standard Docker command-line interfaces or API clients, and Docker commands are mapped to the corresponding vSphere actions by the VCH. A request to run a new image invokes Instant Clone to rapidly fork new "just enough" VMs (JeVMs) for execution of the container. Traditional apps can also run alongside containers on the VCH.

As for my thoughts: if you are an existing VMware customer, VIC gives you the best of both worlds, where you can benefit from your existing infrastructure while also gaining the agility available through the use of Docker container instances. For example, during the VMworld 2016 US event, VMware's head of the Cloud Native Applications BU, Kit Colbert, demoed the integration of vSphere Integrated Containers with vROps, where even containerised apps can have the typical health and performance details shown via vROps dashboards, much like legacy apps, capabilities that are not natively available with vanilla Docker instances. He also demoed the vRA integration, which enables developers to self-service containerised application storage placement through a policy change that automatically moves the container VM / image content from one VSAN storage tier to another. I believe such interoperability and integration with the legacy toolkit is very important for mass adoption of containerised apps going forward, especially for existing customers with legacy tools and apps. Furthermore, the VIC solution also integrates with NSX, extending networking security components into the container VMs / instances too, which is totally cool.

Most importantly, VIC is available free as an open-source download for all VMware customers, which makes the case for it even more appealing.

Cheers

Chan

P.S. Slide credit goes to VMware

#Cloud Native #VIC #Photon #VMware #VMworld

VMworld 2016 US – Key Announcements From Day 1

Pat gelsinger

So the much-awaited VMworld 2016 US event kicked off today amongst much fanfare, and I was lucky to be there at the event. Given below are the key highlights from the day 1 general session & the key announcements made by VMware CEO Pat Gelsinger. I've highlighted the key items.

The theme of this year's VMworld is Be Tomorrow. This is quite fitting, as technology today defines the world's tomorrow, and we as the IT community play a key part in this, along with vendors like VMware who define / invent most of those technologies.

Pat mentioned that for VMware and their future direction, the cloud is key. Both public and private cloud are going to define many of the IT requirements of tomorrow, which I fully agree with, and VMware's aim appears to be to move away from traditional vSphere-based compute virtualisation to become a facilitator of cross-cloud workload mobility and management.

He also discussed the status of current public and private cloud adoption, which is presently heavily biased towards public cloud rather than private cloud, the latter being quite difficult to retrofit to a legacy environment based on my experience too. Based on VMware research and market analytics, the current IT platform adoption is split as below:

  • Public Cloud = 15%
  • Private Cloud = 12%
  • Traditional IT = 73%

Current Cloud Split

According to Pat, it will not be until around 2021 that public vs private cloud adoption reaches similar levels, and by 2030 they expect the adoption rates to be (approximately) as follows:

  • Public Cloud = 52%
  • Private Cloud = 29%
  • Traditional IT = 19%

From there, the tone shifted to VMware's role in this evolving market. It is pretty obvious that VMware as a vendor has been diversifying its product positioning over the last few years to rely less on the core vSphere stack and to focus more on cloud management and other software-defined offerings. This was made possible through the use of vSphere + NSX + VSAN for the SDDC, for those who wanted a traditional IT environment or a private cloud platform, with the vRealize Suite sitting on top to provide a common management and monitoring platform (Cloud Management Portal). These have been quite popular, and some key highlights mentioned were:

  • vSphere is the market leader in virtualisation – Software Defined Compute
  • VSAN now has over 5,000 fee-paying customers & growing – Software Defined Storage
  • NSX has 400% YoY growth in adoption – Software Defined Networking
  • vRealize Suite is the most popular Cloud management portal in the industry

Today's main announcement brings these solutions together into VMware Cloud Foundation with Cross-Cloud Services support. The Cross-Cloud Architecture, announced as a technical preview today, effectively focuses on centralising the following across various different private and public cloud platforms:

  • Management
  • Operations
  • Security
  • Networking (the most important one for me)

This tech preview platform will initially support public clouds (Azure, AWS, Google Cloud, vCloud Air) as well as vCloud Air Network partners and private cloud instances.

Chris-Wolf-Day-1-Recap-image

The graphic below presents the Cross-Cloud Services model and the solution proposition quite well. One of the key interesting parts of this announcement is that, through the IBM partnership, these cross-cloud services will be made available as a SaaS offering (Software as a Service), which requires no local installation or PS-heavy deployment of management and monitoring components on premises. It will be interesting to see the details of what this means, and I cannot wait to get my hands on the tools once available to look deeper into the details and what they mean for the average customer.

2016-08-29_13-15-50

Based on Pat's description, the Cross-Cloud Services solution is designed to facilitate moving applications between private and various public clouds with minimal disruption / effort for customers.

They also showed a demo of this in action, which was really impressive. It is pretty obvious that for true cross-cloud connectivity and flexibility when it comes to moving applications, one of the key blockers has been networking restrictions such as the lack of easily available L2 adjacency. VMware are in a prime position to address this through the SDN platform they have in NSX, and the demo clearly showed NSX integration with AWS that automatically deployed an L2 Edge gateway (software) device in front of an AWS virtual datacenter to offer L2 connectivity back to the customer's on-premises network, extending the LAN as a key facilitator for moving a workload from AWS to on-premises and back (think of the WAN being transformed into an extended LAN with NSX). I've always seen this coming, and have also discussed with my customers various other possibilities like this that NSX brings to the table, and it's nice to see these capabilities now being integrated into other management and monitoring platforms to provide a true single-pane-of-glass solution for multi-cloud management.

The solution demo also included the Arkin integration on the same platform (VMware acquired Arkin recently), which brings security monitoring and analytics capability to the platform, which is totally awesome..!! I've already seen the extensive capability of visualising networking flows and security contexts in vRealize Network Insight (the rebranded Arkin solution) previously, but it's really good to see it being integrated into this Software as a Service offering. This solution also includes a traffic encryption capability, even within a public cloud platform like Amazon, that you do not get by default, which would go a long way towards deploying workloads subject to regulatory compliance on public cloud platforms.

These new announcements form the basis of VMware's vision of any device (through the use of AirWatch), any application (through the use of Workspace ONE) and any cloud (now available through the Cross-Cloud Architecture), a message that enables their customers to simplify their modern-day IT operations and increase agility, efficiency and productivity.

Cross Cloud

Slide credit goes to VMware

You can find more details in the following links

Cheers

Chan

#NSX #vSphere #VSAN #CrossCloudServices #VmwareCloudFoundation