Time of the Hybrid Cloud?


A short post on something slightly less technical but equally important. This is not a marketing piece, just my thoughts on something I came across that I thought was worth writing about.

Background

I came across an interesting article this morning based on Gartner research into last year's global IT spend, which revealed that global IT spend was down by about $216 billion during 2015. During the same year, however, data center IT spend was up by 1.8% and is forecast to grow by 3% in 2016. Everyone from IT vendors to resellers to every IT sales person you come across these days, on Internet blogs / news / LinkedIn or out in the field, seems to believe (and wants you to believe) that the customer-owned data center is dead for good and that everything is, or should be, moving to the cloud (the Public cloud, that is). If all of that were true, it made me wonder how data center spend went up when it should have gone down. One might think this data center spend was itself fuelled by Public cloud infrastructure expansion, driven by increased demand on platforms like Microsoft Azure and Amazon AWS. Makes total sense, right? Perhaps at the outset. But upon closer inspection there's a slightly more complicated story, the way I see it.

 

Part 1 – Contribution from the Public cloud

Public cloud platforms like AWS are growing fast and aggressively, and there's no denying that. They address a real need in the industry: a global, shared platform that can scale on demand. Thanks to the sheer economies of scale these shared platform providers enjoy, customers benefit from lower IT costs, especially compared to having to spec up a data center for an occasional peak requirement (which may only be hit once a month) and paying for it all upfront regardless of actual utilisation, which can be an expensive exercise for many. With a Public cloud platform the upfront cost is lower and you pay per usage, which makes it an attractive platform for many. Sure, there are more benefits to a Public cloud platform than just the cost factor, but cost has always been the key underpinning driver for enterprises adopting Public cloud since its inception. Most new start-ups (the Netflixes of the world), and even some established enterprise customers who don't carry the baggage of legacy apps (by legacy apps, I mean client-server applications typically run on the Microsoft Windows platform), are by default electing to run their business application stack predominantly on a cheaper Public cloud platform like AWS, without owning their own data center kit. That will continue to be the case for those customers and will therefore continue to drive the expansion of Public cloud platforms like AWS. I'm sure a significant portion of the growth in data center spend in 2015 came from this pure Public cloud usage, with the cloud providers buying yet more data center hardware.

 

Part 2 – Contribution from the “Other” cloud

The point, however, is that not all of the 2015 increase in data center spend would have come from Public cloud platforms like AWS or Azure buying extra kit for their data centres. When you look at the numbers from traditional hardware vendors, HP's appear to be up by around 25% for the year, and others such as Dell, Cisco and EMC also appear to have grown their sales in 2015, contributing to this increased data center spend. It is no secret that none of these Public cloud platforms use traditional data center hardware vendors' kit in their data centres; they typically use commodity hardware, or even build servers and networking equipment themselves (a lot cheaper). So where would the increased sales for these vendors have come from? My guess is that they largely came from enterprise customers deploying Hybrid cloud solutions: customer-owned hardware deployed in their own / co-location / off-prem / hosted data centres (the customer still owns the kit), alongside an enterprise-friendly Public cloud platform (mostly Microsoft Azure or VMware vCloud Air) acting as just another segment of the overall data center strategy.

Consider most established enterprise customers: the chances are they have lots of legacy applications that are not cloud friendly. By legacy applications, I mean typical WINTEL applications conforming to the client-server architecture. These apps often started life in the enterprise in the Windows NT / 2000 days and have grown with the business over time. They are typically not "Cloud Native" (to use the industry buzzword), and moving them as-is onto a Public cloud platform like AWS or Azure is commercially or technically not feasible for most enterprises. (I've been working in the industry since the Windows 2000 days, and I can assure you these types of apps still make up a significant number out there.) This "baggage" often prevents enterprises from using Public cloud exclusively. (Sure, other things like compliance get in the way of Public cloud too, but over time Public cloud platforms will naturally cater properly for compliance requirements, so those obstacles should be short-lived.) While a small number of enterprises have the engineering budget and resources needed to re-design and re-develop these legacy app stacks into modern, cloud-native stacks, most will not have that luxury. Such redevelopment work is expensive and, most importantly, time consuming and disruptive.

So, for most of these customers, the immediate tactical solution is a Hybrid cloud: the legacy "baggage" app stack lives in a legacy data center, while newly developed apps are built cloud native (designed and developed from the ground up) on an enterprise-friendly Public cloud platform such as Microsoft Azure or VMware vCloud Air. An overarching IT operations management platform (industry buzzword: "Cloud Management Platform") then manages both the customer-owned (private) portion and the Public portion of the Hybrid cloud seamlessly (with caveats, of course). I think this is what happened in 2015, and it would also explain the growth in legacy hardware vendor sales over the same period. Since I work for a fairly large global reseller, I've witnessed this increased hardware sales first hand from the traditional data center hardware vendor partners (HP, Cisco, etc.) through our business too, which adds up. I believe this adoption of Hybrid cloud solutions will continue throughout 2016 and possibly for a good while beyond, at least until legacy apps are eventually phased out, and that could be a long way off.

 

Summary

So there you have it. In my view, Public cloud will continue to grow, but if you think it will replace customer-owned data center kit any time soon, that's probably unlikely. 2015, at least, has shown that Public cloud and Private cloud platforms (under the guise of Hybrid cloud) have grown together, and my view is that this will continue to be the case for a good while. Who knows, I may well be proven wrong, and within 6 months the AWS, Azure and Google Public clouds will devour all private cloud platforms and everybody will be happy on Public cloud alone :-). But common sense suggests otherwise. I can see a lot more Hybrid cloud deployments in the immediate future (at least a few years), using mainly the Microsoft Azure and VMware vCloud Air platforms. Based on the technologies available today, these two stand out in my view as probably the best-suited Public cloud platforms with strong Hybrid cloud compatibility, given their already popular presence in the enterprise data center (for hosting legacy apps efficiently), as well as each having a good overarching cloud management platform that customers can use to manage their Hybrid cloud environments.

 

Thoughts and comments are welcome!

 

Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms


It looks like the folks at the Redmond campus have released a brand new licensing model for Windows Server 2016 (currently in Technical Preview 4, due for release in 2016). I've had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately, I cannot say I'm impressed by what I've seen (quite the opposite, actually): the new licensing is going to sting most customers, especially those hosting private cloud or large VMware / Hyper-V clusters with high-density servers.

What’s new (Licensing wise)?

Here are the 2 key licensing changes.

  1. From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will be based on physical cores, per CPU.
  2. A minimum of 16 cores must be licensed per physical server. Licenses are sold in packs of 2 cores, so that's a minimum of 8 packs to cover 16 cores, which can be either 2 processors with 8 cores each or a single 16-core processor. Note that this is the minimum you can buy: if your server has additional cores, you buy additional 2-core packs. So a dual-socket server with 12 cores per socket (24 cores) needs 12 x 2-core Windows Server Datacenter license packs + CALs. (A quick calculator sketch follows this list.)
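To make the rule concrete, here's a minimal back-of-the-envelope sketch in Python. It's my own illustration, not anything Microsoft publishes; the only inputs taken from the announcement are the pack-of-2 and 16-core-minimum rules described above.

```python
import math

def license_packs_needed(sockets: int, cores_per_socket: int) -> int:
    """Two-core Windows Server 2016 license packs required for one physical
    server: one pack per 2 cores, subject to a minimum of 8 packs (16 cores)."""
    total_cores = sockets * cores_per_socket
    packs = math.ceil(total_cores / 2)  # licenses are sold in packs of 2 cores
    return max(packs, 8)                # 16-core minimum per server

print(license_packs_needed(2, 12))  # dual-socket, 12 cores each -> 12 packs (as above)
print(license_packs_needed(1, 8))   # a single 8-core CPU still pays the 16-core minimum -> 8
```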

The most obvious change is the move to core-based Windows Server licensing. Yes, you read that correctly! Microsoft is jumping on the ever-increasing core counts available in modern processors and trying to cash in on them, removing the socket-based licensing approach that has been in place for over a decade and introducing a core-based license instead. And they don't stop there. You might expect that, under a core-based model, those with fewer cores per socket (4 or 6) would benefit, right? Wrong! By introducing a mandatory minimum number of cores to license per server (regardless of the actual physical core count in each CPU), they are also guaranteeing themselves a minimum licensing fee for every server, which at worst matches the Windows Server 2012 socket-based revenue.

Microsoft has said that each license pack (covering 2 cores) will be priced at 1/8th the cost of the corresponding 2-processor Windows Server 2012 R2 license. In my view, that's a deliberate smoke screen aimed at making it look like effective Windows Server 2016 licensing costs stay the same as under Windows Server 2012, when in reality that only holds for a small number of server configurations: those with up to 8 cores per CPU, which hardly anyone uses anymore, as most new datacentre servers, especially those running some form of hypervisor, typically use 10/12/16-core CPUs these days. See the screenshot below (taken from the Windows Server 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won't.

Windows 2016 Server licensing cost comparison

 

The difference in cost to customers

Take the following scenario as an example.

You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core CPUs (Intel E5-4667 or E7-8860 range) per server. Let's ignore the cost of CALs for simplicity (you need to buy CALs under the existing 2012 licensing anyway) and use the Windows Server Datacenter list price to compare the effect of the new 2016 licensing model on your cluster.

  • List price of the Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
  • Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12, as above) = $6,155.00 / 8 ≈ $769.37

The total cost to license all 5 nodes in the hypervisor cluster for full VM migration (vMotion / Live Migration) across all hosts would be as follows (a short sketch after the bullets reproduces the arithmetic):

  • Before (with Windows 2012 licensing) = $6,155.00 x 5 hosts = $30,775.00
  • After (with Windows 2016 licensing) = $769.37 x 16 packs x 5 hosts = $61,549.60
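For transparency, here is the same arithmetic as a short Python sketch (my own illustration, list prices only). Note that the bullets above truncate the pack price to $769.37, which is why they show $61,549.60 rather than the exact $61,550.00:

```python
# Cluster licensing cost comparison using the article's list prices.
W2K12_DC_LIST = 6155.00         # 2012 R2 Datacenter list price, per 2 CPU sockets
PACK_PRICE = W2K12_DC_LIST / 8  # 2016 two-core pack = 1/8th of the 2-proc price

hosts = 5
cores_per_host = 2 * 16                        # dual-socket, 16 cores per socket
packs_per_host = max(cores_per_host // 2, 8)   # packs of 2 cores, 8-pack minimum

before = W2K12_DC_LIST * hosts               # 2012 model: one DC license per host
after = PACK_PRICE * packs_per_host * hosts  # 2016 model: per-core packs per host

print(f"2012 licensing: ${before:,.2f}")  # $30,775.00
print(f"2016 licensing: ${after:,.2f} ({after / before:.0%} of the old cost)")  # $61,550.00 (200%)
```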

Now, obviously these absolute numbers are not the point (they are just list prices; customers actually pay heavily discounted prices). What matters is the relative change: the new cost is 199.99% of the current Microsoft licensing cost, in other words it has effectively doubled. That is absurd in my view! The most absurd part is that having to license every underlying CPU in every hypervisor host within the cluster (often with a Datacenter license) was already absurd enough under the current model. Even though a VM only ever runs on a single host's CPUs at any given time, Microsoft's strict stance on the immobility of Windows licenses means any virtualisation / private cloud customer has to license all the CPUs in the underlying hypervisor cluster to run even a single VM, so allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was indirectly enforced by Microsoft, however absurd that is in this cloud day and age. And now they are effectively taxing you on the core count too? That is not far short of daylight robbery for those Microsoft customers.

FYI, given below is the approximate new Windows Server licensing cost, as a percentage of the current cost, for any virtualisation / private cloud customer with more than 8 cores per CPU in a typical 5-server cluster where VM mobility via VMware vMotion or Hyper-V Live Migration across all hosts is enabled as standard (a short loop that reproduces these figures follows the list):

  • Dual-CPU server with 10 cores per CPU = 125% of the current cost (a 25% increase)
  • Dual-CPU server with 12 cores per CPU = 150% of the current cost (a 50% increase)
  • Dual-CPU server with 14 cores per CPU = 175% of the current cost (a 75% increase)
  • Dual-CPU server with 18 cores per CPU = 225% of the current cost (a 125% increase)
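These figures fall straight out of the pack arithmetic; a quick Python loop (same list-price assumptions as the sketches above, and again my own illustration) reproduces them:

```python
# New per-server cost relative to a 2012 R2 two-processor Datacenter license,
# for a dual-CPU server. 16 cores per CPU (the ~200% case above) is included.
for cores_per_cpu in (10, 12, 14, 16, 18):
    packs = max(cores_per_cpu * 2 // 2, 8)  # two CPUs, one pack per 2 cores
    ratio = packs / 8                       # each pack costs 1/8th of the old license
    print(f"{cores_per_cpu} cores per CPU: {ratio:.0%} of the current cost")
```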

And this is based on today's technology. CPU core counts will no doubt keep growing, and with them this price increase will only get more and more ridiculous.

My Take

It is pretty obvious what Microsoft is attempting here. With ever-increasing core counts, 2-CPU server configurations have become (if they weren't already) the norm for many datacentre deployments, and rather than being content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, Microsoft now wants to profit from every additional core that Moore's law inevitably brings to each new generation of CPUs. 12-core processors are already the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming standard (14, 16 and 18 cores per socket are no longer rare, with the Intel Xeon E5 & E7 ranges for example).

I think this is a shocking move from Microsoft, and I cannot see any justifiable reason for it other than pure greed and complete and utter disregard for their customers. As much as I've loved Microsoft Windows as an easy-to-use platform of choice for application servers over the last 15-odd years, I, for one, will now be advising my customers to put plans in place to strategically move away from Windows, as it is going to be price prohibitive for most, especially if you are going to run an on-premise datacentre with some form of virtualisation (which most do) going forward.

Many customers have already successfully standardised their enterprise datacentre on the much cheaper LAMP stack (the Linux platform) as the preferred guest OS for their server and application stacks. Typically it has been new start-ups (who don't carry the burden of legacy Windows apps) or large enterprises (with sufficient manpower and Linux skills) who have managed this so far, but if this expensive Windows Server licensing stays, I think lots of other folks who have traditionally been happy and comfortable with their legacy Windows knowledge (and have therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to Public cloud). If you retain your workload on-prem, Linux is naturally the best choice available. For most enterprise customers, continuing to run their private cloud / own data centres with Windows servers / VMs on high-capacity hypervisor nodes is going to be price prohibitive.

In my view, most current Microsoft Windows Server customers remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they've accumulated over the years; any attempt to move away would have been too complex / risky / time consuming. Now, however, it has come to the point where many customers are having to re-write their app stacks from the ground up anyway because of the way Public cloud systems work, and while they're at it, it makes sense to choose a less expensive OS stack for those apps, saving a bucket load of unnecessary Windows Server licensing costs. So perhaps the time is right to bite the bullet and get on with embracing Linux?

So, my advice for customers is as follows.

Tactical:

  1. Voice your displeasure at this new licensing model: use every means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more of a collective noise we all make, the louder it will (hopefully) be heard by the powers that be at Microsoft.
  2. Get yourself into a Microsoft ELA of a reasonable length OR add Software Assurance (pronto): if you have an ELA, Microsoft has said it will let customers carry on buying per-processor licenses until the end of the ELA term, essentially locking you into the current Server 2012 licensing terms for a reasonable length of time while you figure out what to do. Alternatively, if you have SA, at the end of the SA term Microsoft will let you declare the total number of cores covered under your current per-processor licensing and grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to enquire about over-provisioning / over-buying per-processor licenses with SA now for any known future requirements, in order to save costs.

Strategic:

  1. Put a plan in place to move your entire workload to Public cloud: this is probably the easiest approach, but not necessarily the smartest, especially if your requirements make hosting your own datacenter the better option. Also, even if you do plan to move to Public cloud, there's no guarantee that any provider other than Microsoft Azure will remain commercially viable for Windows workloads, in case Microsoft changes the SPLA terms for 2016 too.
  2. Put a plan in place to move away from Windows to a different, cheaper platform for your workload: this is probably the best and safest approach. Many customers will have evaluated this at some point in the past and shied away from it, as it is a big change and requires people with the right skills. But platforms like Linux have been enterprise-ready for a long time now, and there is a reasonable pool of skills in the market. And if your on-premise environment is standardised on Linux, you can easily port your applications over to many Public cloud platforms too, which are typically much cheaper to run on than Windows VMs. You are then also able to deploy true cloud-native applications and benefit from the many open source tools and technologies that are making a real difference to the efficiency of IT for your business.

This article and the views expressed in it are mine alone.

Comments / Thoughts are welcome

Chan

P.S. This kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired and had to be completely scrapped. I hope enough customer pressure will cause Microsoft to back off too.