1. VMware vSphere 6.x – Deployment Architecture Key Notes


The first thing to do in a vSphere 6.x deployment is to understand the new deployment architecture options available on the vSphere 6.0 platform, which differs somewhat from previous versions of vSphere. The notes below highlight the key information but are not a complete guide to all the changes; for that, I’d advise you to refer to the official vSphere documentation.

Deployment Architecture

The deployment architecture for vSphere 6 is somewhat different from that of the legacy versions. I’m not going to document all of the architectural differences (please refer to the VMware product documentation for vSphere 6), but I will mention a few of the key ones that I think are important, in the bullet points below.

  • vCenter Server – Consist of 2 key components
    • Platform Service Controller (PSC)
      • PSC include the following components
        • SSO
        • vSphere Licensing Server
        • VMCA – VMware Certificate Authority (a built-in SSL certificate authority to simplify certificate provisioning to all VMware products, including vCenter, ESXi, vRealize Automation, etc. The idea is that you associate this with your existing enterprise root CA or a subordinate CA, such as a Microsoft CA, and point all VMware components at it.)
      • PSC can be deployed as an appliance or on a windows machine
    • vCenter Server
      • Appliance (vCSA) – Include the following services
        • vCenter Inventory server
        • PostgreSQL
        • vSphere Web Client
        • vSphere ESXi Dump collector
        • vSphere Syslog Service
        • Auto Deploy
      • Windows version is also available.

Note: ESXi remains the same as before without any significant changes to its core architecture or the installation process.

Deployment Options

The options below include those that I will be using in the subsequent sections to deploy vSphere 6 U1, as they represent the choices most likely to be adopted in enterprise deployments.

  • Platform Services Controller Deployment
    • Option 1 – Embedded with vCenter
      • Only suitable for small deployments
    • Option 2 – External – a dedicated, separate deployment of the PSC, to which external vCenter server(s) will connect
      • Single PSC instance or a clustered PSC deployment consisting of multiple instances is supported
      • Two options are supported here:
        • Deploy an external PSC on Windows
        • Deploy an external PSC using the Linux-based appliance (note that this involves deploying the same vCSA appliance, but during deployment you select the PSC mode rather than vCenter)
    • The PSC needs to be deployed first, followed by the vCenter deployment, as concurrent deployment of both is NOT supported!
  • vCenter Server Deployment – the vCenter deployment architecture consists of 2 choices
    • Windows deployment
      • Option 1: with a built-in PostgreSQL database
        • Only supported for small to medium-sized environments (up to 20 hosts or 200 VMs)
      • Option 2: with an external database system
        • The only supported external database system is Oracle (no more SQL databases for vCenter)
      • This effectively means that you are now advised (indirectly, in my view) to always deploy the vCSA as opposed to the Windows version of vCenter, especially since the feature gap between the vCSA and the Windows vCenter has now been bridged
    • vCSA (appliance) deployment
      • Option 1: with a built-in PostgreSQL DB
        • Supported for up to 1,000 hosts and 10,000 VMs (this, I reckon, will be the most common deployment model for the vCSA, due to the supported scalability and the simplicity)
      • Option 2: with an external database system
        • As with the Windows version, only Oracle is supported as an external DB system

PSC and vCenter deployment topologies

Certificate Concerns

  • The VMCA is a complete certificate authority for all vSphere and related components, where the issuing of vSphere-related certificates is automated (it happens automatically when adding vCenter servers to the PSC and adding ESXi hosts to vCenter).
  • For those who already have a Microsoft CA or a similar enterprise CA, the recommendation is to make the VMCA a subordinate CA, so that all certificates allocated by the VMCA to vSphere components carry the full certificate chain, all the way from your Microsoft root CA (i.e. Microsoft root CA cert -> subordinate CA cert -> VMCA root CA cert -> allocated cert, for the vSphere components).
  • In order to achieve this, the following steps need to be followed in the listed order.
    • Install the PSC / Deploy the PSC appliance first
    • Use an existing root / enterprise CA (i.e. Microsoft CA) to generate a subordinate CA certificate for the VMCA and replace the default VMCA root certificate on the PSC.
      • To achieve this, follow the VMware KB articles listed here.
      • Once the certificate replacement is complete on the PSC, do follow “Task 0” outlined here to ensure that the vSphere service registrations with the VMware Lookup Service are also updated. If not, you’ll have to follow “Tasks 1–4” to manually update the sslTrust parameter value for each service registration using the ls_update_certs.py script (available on the PSC appliance). Validating this here can save you lots of headaches down the line.
    • Now install vCenter and point it at the PSC for SSO (the VMCA will automatically allocate the appropriate certificates)
    • Add ESXi hosts (VMCA will automatically allocate appropriate certificates)

Key System Requirements

  • ESXi system requirements
    • Physical components
      • Need a minimum of 2 CPU cores per host
      • HCL compatibility (CPUs released after September 2006 only)
      • NX/XD bit enabled in the BIOS
      • Intel VT-x enabled
      • SATA disks will be considered remote (meaning, no scratch partition on SATA)
    • Booting
      • Booting from UEFI is supported
      • But no auto deploy or network booting with UEFI
    • Local Storage
      • Disks
        • The recommended size for booting from a local disk is 5.2GB (for VMFS and the 4GB scratch partition)
        • The supported minimum is 1GB
          • In this case, the scratch partition is created on another local disk or on a RAMDISK (/tmp/ramdisk) – not recommended to be left on the ramdisk, for performance and memory optimisation
      • USB / SD
        • The installer DOES NOT create a scratch partition on these drives
        • It either creates the scratch partition on another local disk or on a ramdisk
        • 4GB or larger recommended (though min supported is 1GB)
          • Additional space used for the core dump
        • 16GB or larger is highly recommended
          • Prolongs the flash cell life
  • vCenter Server System Requirements
    • Windows version
      • The machine must be joined to an Active Directory domain
      • Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
    • Appliance version
      • Virtual Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
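
For anyone scripting their deployment sizing, the tiers above can be captured in a quick lookup helper. This is just my own convenience sketch (the function and dictionary names are mine); the thresholds follow VMware’s published sizing guidance for vSphere 6, under which the medium tier covers up to 4,000 VMs.

```python
# Hypothetical sizing helper: (max hosts, max VMs, vCPUs, RAM GB) per tier,
# based on the vCenter sizing table above.
SIZING = {
    "tiny":   {"max_hosts": 10,   "max_vms": 100,   "cpus": 2,  "ram_gb": 8},
    "small":  {"max_hosts": 100,  "max_vms": 1000,  "cpus": 4,  "ram_gb": 16},
    "medium": {"max_hosts": 400,  "max_vms": 4000,  "cpus": 8,  "ram_gb": 24},
    "large":  {"max_hosts": 1000, "max_vms": 10000, "cpus": 16, "ram_gb": 32},
}

def vcenter_size(hosts: int, vms: int) -> str:
    """Return the smallest tier that accommodates the given inventory."""
    for tier, spec in SIZING.items():  # dicts preserve insertion order
        if hosts <= spec["max_hosts"] and vms <= spec["max_vms"]:
            return tier
    raise ValueError("inventory exceeds the large tier; consider multiple vCenter instances")

print(vcenter_size(50, 600))    # small
print(vcenter_size(350, 3500))  # medium
```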

In the next post, we’ll look at the key deployment steps involved.

Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms


It looks like the folks at the Redmond campus have released a brand-new licensing model for Windows Server 2016 (currently on Technical Preview 4, due for release in 2016). I’ve had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately, I cannot say I’m impressed by what I’ve seen (quite the opposite, actually), and the new licensing is going to sting most customers, especially those who host private clouds or large VMware / Hyper-V clusters with high-density servers.

What’s new (Licensing wise)?

Here are the 2 key licensing changes.

  1. From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will be based on physical cores, per CPU.
  2. A minimum of 16 cores must be licensed per physical server. Core licenses are sold in packs of 2, so that means a minimum of 8 two-core packs per server, covering either 2 processors with 8 cores each or a single 16-core processor. Note that this is the minimum you can buy: if your server has additional cores, you need to buy additional licenses in packs of 2. So for a dual-socket server with 12 cores per socket, you need 12 x 2-core Windows Server DC license packs + CALs.
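
As I read the minimum-core rule, the number of 2-core license packs needed per server works out as follows. This is my own illustration of the arithmetic, not an official Microsoft calculator:

```python
import math

def packs_required(total_cores: int) -> int:
    """Number of 2-core license packs needed for one physical server.

    Rules as described above: a minimum of 16 cores must be licensed
    per server, and licenses are sold in packs of 2 cores.
    """
    licensable = max(16, total_cores)  # the 16-core-per-server floor
    return math.ceil(licensable / 2)   # round odd core counts up to a full pack

# Dual 8-core server: the 16-core minimum exactly covers it
print(packs_required(2 * 8))   # 8
# Dual 12-core server: 24 cores, matching the 12-pack example above
print(packs_required(2 * 12))  # 12
```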

The most obvious change is the move to core-based Windows Server licensing. Yes, you read that correctly! Microsoft is jumping on the increasing core counts available in modern processors and trying to cash in on them, removing the socket-based licensing approach that has been in place for over a decade and introducing a core-based license instead. And they don’t stop there. You might expect that, with a core-based licensing model, those with fewer cores per CPU socket (4 or 6) would benefit, right? Wrong! By introducing a mandatory minimum number of cores that must be licensed per server (regardless of the actual physical core count in each CPU), they are also making you pay a guaranteed minimum licensing fee for every server (almost a guaranteed minimum income per server which, at worst, matches the Windows Server 2012 licensing revenue based on CPU sockets).

Microsoft has said that each license (covering 2 cores) will be priced at 1/8th the cost of the corresponding 2-processor Windows Server 2012 R2 license. In my view, that is a deliberate smoke screen, aimed at making it look as though the effective Windows Server 2016 licensing costs remain the same as they were for Windows Server 2012. In reality, that only holds for a small number of server configurations (servers with up to 8 cores per CPU, which hardly anyone uses any more; most new datacentre servers, especially those running some form of hypervisor, typically use 10-, 12- or 16-core CPUs these days). See the screenshot below (taken from the Windows Server 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won’t.

Windows 2016 Server licensing cost comparison


The difference in cost to customers

Take the following scenario for example..

You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core CPUs (from the Intel E5-4667 or E7-8860 ranges, for example) per server. Let’s ignore the cost of CALs for the sake of simplicity (you need to buy CALs under the existing 2012 licensing too anyway) and use the Windows Server list price to compare the effect of the new 2016 licensing model on your cluster.

  • List price of Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
  • Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12, as above) = $6,155.00 / 8 = $769.37

The total cost to license all 5 nodes in the hypervisor cluster for full VM migration (vMotion / Live Migration) across all hosts would be as follows:

  • Before (with Windows 2012 licensing) = $6,155.00 x 5 = $30,775.00
  • After (with Windows 2016 licensing) = $769.37 x 16 x 5 = $61,549.60
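
For anyone who wants to check the arithmetic, here is a quick sketch of the same calculation (the tiny difference from the $61,549.60 figure is purely down to rounding the per-pack price to two decimal places):

```python
W2K12_DC_PER_2_SOCKETS = 6155.00                      # list price, per 2 CPU sockets
W2K16_PER_2_CORE_PACK = W2K12_DC_PER_2_SOCKETS / 8    # 1/8th, per the datasheet

hosts = 5
cores_per_host = 2 * 16               # 2 sockets x 16 cores each
packs_per_host = cores_per_host // 2  # 16 two-core packs

before = W2K12_DC_PER_2_SOCKETS * hosts
after = W2K16_PER_2_CORE_PACK * packs_per_host * hosts

print(f"2012 licensing: ${before:,.2f}")       # $30,775.00
print(f"2016 licensing: ${after:,.2f}")        # $61,550.00
print(f"increase: {after / before - 1:.0%}")   # 100%
```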

Now, obviously, these numbers themselves are not the point (they are just list prices; customers actually pay heavily discounted prices). What is important is the scale of the increase: the new total is almost exactly 200% of (i.e. double) the current Microsoft licensing cost. This is absurd in my view! The most absurd part is that having to license every CPU in every hypervisor host within the cluster (often with a Datacenter license) was already absurd enough under the current licensing model. Even though a VM will only ever run on a single host’s CPUs at any given time, Microsoft’s strict stance on the immobility of Windows licenses meant that any virtualisation / private cloud customer had to license all the CPUs in the underlying hypervisor cluster to run a single VM; allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was thus indirectly enforced by Microsoft, despite how absurd that is in this cloud day and age. And now they are effectively taxing you on the core count too? That is not far short of daylight robbery for those Microsoft customers.

FYI – given below is the new Windows Server licensing cost, as an approximate percentage of the current (2012) cost, for any virtualisation / private cloud customer with more than 8 cores per CPU in a typical 5-server cluster, where VM mobility through VMware vMotion or Hyper-V Live Migration across all the hosts is enabled as standard.

  • Dual-CPU server with 10 cores per CPU = 125% of the current cost (a 25% increase)
  • Dual-CPU server with 12 cores per CPU = 150% of the current cost (a 50% increase)
  • Dual-CPU server with 14 cores per CPU = 175% of the current cost (a 75% increase)
  • Dual-CPU server with 18 cores per CPU = 225% of the current cost (a 125% increase)
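
Those ratios fall straight out of the 16-core minimum; here is my own back-of-envelope check, assuming the per-core price is 1/16th of the old 2-socket price:

```python
def cost_ratio(cores_per_server: int) -> float:
    """New per-server licensing cost as a multiple of the old 2-socket price."""
    return max(16, cores_per_server) / 16

for cores_per_cpu in (10, 12, 14, 18):
    total = 2 * cores_per_cpu
    print(f"2 x {cores_per_cpu} cores -> {cost_ratio(total):.0%} of the 2012 cost")
```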

Now, this is based on today’s technology. No doubt CPU core counts will keep growing, and with them the price increase will only become more and more ridiculous.

My Take

It is pretty obvious what Microsoft is attempting to achieve here. With the ever-increasing core counts in CPUs, 2-CPU server configurations are becoming (if they have not already become) the norm for many datacentre deployments, and rather than being content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, Microsoft is now trying to benefit from every additional core that Moore’s law inevitably brings to newer generations of CPUs. Twelve-core processors are already the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming standard (14, 16 and 18 cores per socket are no longer rare, with the Intel Xeon E5 & E7 ranges for example).

I think this is a shocking move from Microsoft, and I cannot see any justifiable reason for it other than pure greed and a complete disregard for their customers. As much as I’ve loved Microsoft Windows as an easy-to-use platform of choice for application servers over the last 15-odd years, I will now, for once, be advising my customers to put strategic plans in place to move away from Windows, as it is going to be price-prohibitive for most, especially if you are going to run an on-premises datacentre with some form of virtualisation (which most do) going forward.

Many customers have already successfully standardised their enterprise datacentres on the much cheaper LAMP stack (Linux platform) as the preferred guest OS for their server and application stacks. Typically, new start-ups (who don’t have the burden of legacy Windows apps) or large enterprises (with sufficient in-house Linux skills) have managed this successfully so far, but I think that if this expensive Windows Server licensing stays, lots of other folks who have traditionally been happy and comfortable with their legacy Windows knowledge (and have therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to public cloud). If you retain your workload on-premises, Linux will naturally be the best choice available. For most enterprise customers, continuing to run their private clouds / own datacentres using Windows servers / VMs on high-capacity hypervisor nodes is going to be price-prohibitive.

In my view, most current Microsoft Windows Server customers have remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they’ve accumulated over the years; any attempt to move away would have been too complex, risky or time-consuming. However, it has now come to a point where most customers are having to re-write their app stacks from the ground up anyway, due to the way public cloud systems work, and while they are at it, it makes sense to choose a less expensive OS stack for those apps, saving a bucketload of unnecessary Windows Server licensing costs. So possibly the time is right to bite the bullet and get on with embracing Linux?

So, my advice for customers is as follows.

Tactical:

  1. Voice your displeasure at this new licensing model: use every means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more of a collective noise we all make, the more likely it is to be heard by the powers that be at Microsoft.
  2. Get yourself into a Microsoft ELA for a reasonable term, OR add Software Assurance (pronto): if you have an ELA, Microsoft has said it will let you carry on buying per-processor licenses until the end of the ELA term, essentially letting you lock yourself in under the current Server 2012 licensing terms for a reasonable length of time while you figure out what to do. Alternatively, if you have SA, at the end of the SA term Microsoft will let you declare the total number of cores covered under the current per-CPU licensing and will grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to enquire about over-provisioning / over-buying your per-processor licenses (along with SA) now, for any known future requirements, in order to save costs.

Strategic:

  1. Put in a plan to move your entire workload to public cloud: this is probably the easiest approach, but not necessarily the smartest, especially if, given your requirements, you are better off hosting your own datacentre. Also, even if you do plan to move to public cloud, there is no guarantee that any public cloud provider other than Microsoft Azure will remain commercially viable for running Windows workloads, should Microsoft change the SPLA terms for 2016 too.
  2. Put in a plan to move away from Windows to a different, cheaper platform: this is probably the best and safest approach. Many customers will have evaluated this at some point in the past but shied away from it, as it is a big change and requires people with the right skills. Platforms like Linux have been enterprise-ready for a long time now, and there is a reasonable pool of skills in the market. If your on-premises environment is standardised on Linux, you can easily port your applications to many public cloud platforms too, which are typically much cheaper than running Windows VMs. You are then also able to deploy true cloud-native applications and to benefit from the many open-source tools and technologies that are making a real difference to the efficiency of IT.

This article and the views expressed in it are mine alone.

Comments / Thoughts are welcome

Chan

P.S. This kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired and which VMware had to completely scrap. I hope enough customer pressure will cause Microsoft to back off too.