Time of the Hybrid Cloud?


A short post today on something slightly less technical but equally important. This isn't a marketing piece, just my thoughts on something I came across that I thought was worth writing about.

Background

I came across an interesting article this morning based on Gartner research into last year's global IT spend, which revealed that global IT spend was down by about $216 billion during 2015. During the same year, however, data center IT spend was up by 1.8% and is forecast to grow by 3% in 2016. Everyone from IT vendors to resellers to every IT sales person you come across these days, whether on Internet blogs / news / LinkedIn or out in the field, seems to believe (and want you to believe) that the customer-owned data center is dead for good and that everything is, or should be, moving to the cloud (Public cloud, that is). If all of that were true, it made me wonder how data center spend went up when it should have gone down. One might think this data center spend was fuelled by the growth in Public cloud infrastructure, driven by increased demand on platforms like Microsoft Azure and Amazon AWS. Makes total sense, right? Perhaps at the outset. But upon closer inspection, there's a slightly more complicated story, the way I see it.

 

Part 1 – Contribution from the Public cloud

Public cloud platforms like AWS are growing fast and aggressively, and there's no denying that. They address a real need in the industry: a global, shared platform that can scale almost infinitely on demand. Due to the sheer economies of scale these shared platform providers have, customers benefit from cheaper IT costs, especially compared to having to spec up a data center for occasional peak requirements (which may only be hit once a month) and paying for it all upfront regardless of actual utilisation, which can be an expensive exercise for many. With a Public cloud platform the upfront cost is lower and you pay per usage, which makes it an attractive platform for many. Sure, there are more benefits to using a public cloud platform than just the cost factor, but cost has always been the key underpinning driver for enterprises to adopt public cloud since its inception. Most new start-ups (the Netflixes of the world) and even some established enterprise customers who don't have the baggage of legacy apps (by legacy apps, I'm referring to client-server type applications typically run on the Microsoft Windows platform) are by default electing to predominantly use a cheaper Public cloud platform like AWS to host their business application stack without owning their own data center kit. This will continue to be the case for those customers and will therefore continue to drive the expansion of Public cloud platforms like AWS. I'm sure a significant portion of the growth in data center spend in 2015 came from this pure Public cloud usage, causing the cloud providers to buy yet more data center hardware.

 

Part 2 – Contribution from the “Other” cloud

The point, however, is that not all of the data center spend increase in 2015 would have come from Public cloud platforms like AWS or Azure buying extra kit for their data centres. When you look at the numbers from traditional hardware vendors, HP's appear to be up by around 25% for the year, and others such as Dell, Cisco and EMC also appear to have grown their sales in 2015, which would have contributed towards this increased data center spend. It is no secret that none of these public cloud platforms use traditional data center hardware vendors' kit in their Public cloud data centres; they often use commodity hardware or even build servers and networking equipment themselves (a lot cheaper). So where would the increased sales for these vendors have come from? My guess is that they have largely come from enterprise customers deploying Hybrid Cloud solutions that involve customers' own hardware deployed in their own / co-location / off-prem / hosted data centres (the customer still owns the kit) alongside an enterprise-friendly Public cloud platform (mostly Microsoft Azure or VMware vCloud Air) acting as just another segment of their overall data center strategy. If you consider most established enterprise customers, the chances are they have lots of legacy applications that are not always cloud friendly. By legacy applications, I mean typical WINTEL applications that conform to the client-server architecture. These apps would have started life in the enterprise back in the Windows NT / 2000 days and have grown with the business over time. They are typically not cloud friendly (the industry buzzword is "Cloud Native"), and moving them as-is onto a Public cloud platform like AWS or Azure is often commercially or technically not feasible. (I've been working in the industry since the Windows 2000 days and I can assure you that these types of apps still make up a significant number out there.) This "baggage" often prevents many enterprises from using Public cloud exclusively (sure, there are other obstacles such as compliance, but over time Public cloud platforms will naturally begin to cater properly for compliance requirements, so those obstacles should be short lived). While a small number of those enterprises will have the engineering budget and resources necessary to re-design and re-develop these legacy app stacks into more modern, cloud-native stacks, most will not have that luxury. Such redevelopment work is often expensive and, most importantly, time consuming and disruptive.

So, for most of these customers, the immediate tactical solution is a Hybrid cloud solution where the legacy "baggage" app stack lives in a legacy data center and all newly developed apps are built as cloud native (designed and developed from the ground up) on an enterprise-friendly Public cloud platform such as Microsoft Azure or VMware vCloud Air. An overarching IT operations management platform (industry buzzword: "Cloud Management Platform") then manages both the customer-owned (private) portion and the Public portion of the Hybrid cloud solution seamlessly (with caveats, of course). I think this is what happened in 2015, and it may also explain the growth in legacy hardware vendor sales at the same time. Since I work for a fairly large global reseller, I've witnessed this increased hardware sales first hand from the traditional data center hardware vendor partners (HP, Cisco, etc.) through our business too, which adds up. I believe this adoption of Hybrid cloud solutions will continue throughout 2016 and possibly beyond for a good while, at least until all legacy apps are eventually phased out, and that could be a long way away.

 

Summary

So there you have it. In my view, Public cloud will continue to grow, but if you think it will replace customer-owned data center kit any time soon, that's probably unlikely. 2015 at least has proved that both Public cloud and Private cloud platforms (in the guise of Hybrid cloud) have grown together, and my thinking is that this will continue to be the case for a good while. Who knows, I may well be proven wrong and within 6 months AWS, Azure and Google Public clouds will devour all private cloud platforms and everybody will be happy on just Public cloud :-). But common sense suggests otherwise. I can see a lot more Hybrid cloud deployments in the immediate future (at least for a few years), using mainly the Microsoft Azure and VMware vCloud Air platforms. Based on the technologies available today, these 2 stand out in my view as probably the best suited Public cloud platforms with strong Hybrid cloud compatibility, given their already popular presence in the enterprise data center (for hosting legacy apps efficiently) as well as each having a good overarching cloud management platform that customers can use to manage their Hybrid Cloud environments.

 

Thoughts and comments are welcome….!!

 

3. VMware vSphere 6.x – vCenter Server Appliance Deployment

<- Index page – VMware vSphere 6.x Deployment Process

In the previous article, we deployed an external PSC appliance and replaced its default root CA cert with a cert from an existing enterprise CA, so that every time the VMCA assigns a cert to vCenter or, in turn, to ESXi servers, it will carry the full enterprise CA certificate chain rather than just vSphere's own cert chain.

Note the following design notes related to the vCenter server deployment illustrated here:

  • Similar to the PSC, the vCenter server will also be deployed using the VMware appliance (VCSA)
  • A single vCenter instance is often sufficient for most requirements, given that VMware HA will protect it from hardware failures.

Let's now quickly look at a typical deployment of the vCenter server (appliance).

Note: Deployment of the vCenter server using the VCSA is almost identical to the PSC deployment illustrated earlier, in that it's the same appliance being deployed; the difference is that instead of selecting the PSC mode, we select the vCenter Server mode this time.

  1. Download the VMware vCSA appliance ISO from VMware, mount the ISO image on your workstation / jump host and launch the vcsa-setup.html file found in the root of the ISO drive.
  2. Now click install.
  3. Accept the EULA and click next.
  4. You can deploy the appliance directly to an ESXi host or deploy through a vCenter. Provide your target server details here with credentials.
  5. Type the appliance's VM name & root password for the appliance's Linux OS. Make a note as you'll need this later.
  6. Select the appropriate deployment type. We are deploying an external vCenter server here for an external PSC.
  7. We now connect the vCenter VCSA to the previously deployed PSC instance and the SSO details we configured.
  8. Select the appropriate vCenter server VCSA appliance size, based on the intended workload of the vCenter.
  9. Select the destination datastore to deploy the vCSA appliance on to.
  10. Now select the vCenter database type. I'm using PostgreSQL here (built-in) as this will now likely be the preferred choice for many enterprise customers; it is decent enough to scale up to 10,000 VMs and you don't have to pay for an SQL server license. The handful of customers who have an existing Oracle DB server can use Oracle here too.
  11. Now provide the IP & DNS details. Ensure you provide a valid NTP server and check that the time syncs properly from this source.
    1. Note that you need to manually create the DNS server entry (if you haven't done so already) for the VCSA appliance and ensure it resolves the name correctly to the IP used here, before proceeding any further (a quick scripted check is sketched just after this list)..!
  12. Verify the settings and proceed to start deploying the appliance.
  13. Deployment progress and completion.
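
Before kicking off the deployment (and particularly before step 11), it's worth scripting a quick sanity check that the appliance's FQDN resolves correctly in both directions. Here is a minimal sketch in Python; the FQDN and IP are hypothetical placeholders, not values from this deployment.

```python
import socket

# Hypothetical values - replace with the FQDN / IP you intend to use for the VCSA
fqdn = "vcsa01.lab.local"
expected_ip = "192.168.10.20"

# Forward lookup: the FQDN must resolve to the IP you will assign to the appliance
resolved_ip = socket.gethostbyname(fqdn)
print(f"{fqdn} resolves to {resolved_ip}")
assert resolved_ip == expected_ip, "Forward DNS record is missing or wrong"

# Reverse lookup: the IP should resolve back to the same FQDN (PTR record)
reverse_name, _, _ = socket.gethostbyaddr(expected_ip)
print(f"{expected_ip} resolves back to {reverse_name}")
assert reverse_name.lower() == fqdn.lower(), "Reverse (PTR) record is missing or wrong"
```

If either assertion fires, fix the DNS records before starting the installer rather than troubleshooting a half-deployed appliance afterwards.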

SSL Certificate verifications & Updates (Important)!!

We've already updated the PSC's default root certificate with an Enterprise CA signed root certificate in a previous step (section "Optional – Replace the VMCA root certificate" as explained here). So when you add the vCenter appliance to the PSC (which we've already performed earlier in this article), all the relevant certificates are supposed to be automatically created and allocated by the VMCA onto the vCenter. However, I've seen issues with this, so just to be on the safe side, I recommend following the rest of the steps involved in KB article 2111219, under the section "Replacing VMCA of the Platform Services Controller with a Subordinate Certificate Authority Certificate", as follows (a quick scripted check of the certificate the vCenter ends up presenting is sketched just after this list):

  1. Replacing the vSphere 6.0 Machine SSL certificate with a VMware Certificate Authority issued certificate (2112279) – on the vCenter Server Appliance
  2. Replacing the vSphere 6.0 Solution User certificates with VMware Certificate Authority issued certificates (2112281) – on the vCenter Server Appliance
  3. If you use Auto Deploy, you may want to consider applying the fix mentioned in KB article 2123631. Otherwise, go to the next task.
  4. Follow VMware KB 2109074 and
    1. Follow the listed "Task 0 – Validating the sslTrust Anchors for the PSC and vCenter" – this needs to be tested on both the PSC appliance and the vCenter appliance as instructed.
    2. If the certificates don't match, also follow the rest of the tasks as indicated.
    3. Validating this here can save you lots of headaches down the line...!!
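
As a quick way of eyeballing what certificate the vCenter appliance is actually presenting after the above, you can pull the Machine SSL certificate off port 443 and inspect its issuer and validity. A minimal sketch in Python, assuming the third-party cryptography library is installed and using a hypothetical vCenter FQDN:

```python
import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Hypothetical FQDN - replace with your vCenter appliance's name
vcenter_fqdn = "vcsa01.lab.local"

# Grab the Machine SSL certificate presented on port 443 (PEM encoded)
pem = ssl.get_server_certificate((vcenter_fqdn, 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())

print("Subject:", cert.subject.rfc4514_string())
# The issuer should now be the VMCA, itself a subordinate of your enterprise CA
print("Issuer :", cert.issuer.rfc4514_string())
print("Valid  :", cert.not_valid_before, "to", cert.not_valid_after)
```

If the issuer still shows the default self-signed VMCA (or the certificate predates the root replacement), revisit the KB steps above before going any further.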

 

That’s pretty much it for the deployment of the VCSA appliance in vCenter mode rather than the PSC mode.

Adding ESXi Servers to the vCenter server

Important note: If you decide to add the ESXi nodes to the vCenter straight away, please be aware that if the Enterprise subordinate certificate that replaced the VMCA root certificate has been valid for less than 24 hours, you CANNOT add any ESXi hosts; this is by design. See KB2123386 for more information. In most enterprise deployments the Enterprise subordinate certificate would likely have been issued a few days in advance of the actual PSC & VCSA deployment, so this would be a non-issue, but if you obtained the cert from your Enterprise CA less than 24 hours ago, you need to wait before you can add ESXi servers to the vCenter server.
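
If you're not sure whether you've hit this window, you can check how old the subordinate CA certificate actually is before attempting to add hosts. A small sketch, again assuming the cryptography library is installed; the file name is hypothetical and the cert is assumed to be PEM encoded:

```python
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Hypothetical file name - the subordinate CA cert issued by your Enterprise CA (PEM)
cert_path = "root_signing_cert.cer"

with open(cert_path, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

age = datetime.utcnow() - cert.not_valid_before
print(f"Certificate has been valid for {age}")
if age < timedelta(hours=24):
    print("Less than 24 hours old - hold off adding ESXi hosts (see KB2123386)")
else:
    print("Older than 24 hours - OK to add ESXi hosts")
```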

 

That's it. Now it's time to configure your vCenter server for AD authentication via the PSC and carry out all other post-install configuration tasks as required.

Cheers

Chan

2. VMware vSphere 6.x – Platform Service Controller Deployment

<- Index page – VMware vSphere 6.x Deployment Process

Following on from the previous article, let's now look at how to carry out a typical enterprise deployment of vSphere 6, starting with the deployment of the PSC. (Note that normally the first thing to do is to deploy ESXi, but since the ESXi deployment with 6.x is pretty much the same as in its 2 previous iterations, I'm going to skip it, assuming it's fairly mainstream knowledge by now.)

Given below are the main steps involved in deploying the Platform Services Controller. Note the following points regarding the PSC design being deployed here.

  • A single, external PSC appliance will be deployed with 2 vCenter server appliances associated with it (topology 2 of the recommended deployment topologies listed here by VMware), as this is likely going to be the most popular deployment model for most people.
  • A lot of people may wonder why there is no resiliency for the PSC here. While the PSC can be deployed behind a load balancer for HA, it's a bit of overkill, especially with vSphere 6.0 Update 1, which now supports repointing an existing vCenter Server to another PSC node if it's in the same SSO domain. For more information, see this priceless article by William Lam @ VMware, which also shows how you can automate this manual repointing if need be.

Let's take a look at the PSC appliance deployment steps.

  1. Download the VMware vCSA appliance ISO from VMware, mount the ISO image on your workstation / jump host and launch the vcsa-setup.html file found in the root of the ISO drive. (It should be noted that the PSC appliance deployment uses the same vCenter Server Appliance (vCSA) installer; during the deployment you simply specify that you only want the PSC services deployed.)
  2. Now click install.
  3. Accept the EULA and click next.
  4. You can deploy the appliance directly to an ESXi host or deploy through a vCenter. Provide your target server details here with credentials.
  5. Type the appliance's VM name & root password for the appliance's Linux OS. Make a note as you'll need this later.
  6. Select the appropriate deployment type. We are using the external PSC here.
  7. We are creating a new SSO domain here, so provide the required details.
  8. The appliance size is not modifiable here as we've selected the PSC mode earlier (the size is the same for all PSC deployments).
  9. Select the destination datastore to deploy the PSC appliance on to.
  10. Now provide the IP & DNS details. Ensure you provide a valid NTP server and check that the time syncs properly from this source.
    1. Ensure the DNS entries are manually added to AD for the PSC before proceeding with this step, as the PSC deployment may return errors if the FQDN cannot be resolved correctly.
  11. Review the deployment settings and click finish to proceed with the appliance deployment.
  12. Deployment progress and completion.
  13. Once complete, ensure you can connect to the PSC web page using the URL https://<PSC FQDN>/websso (a quick scripted check of both URLs is sketched just after this list).
  14. You can also connect to the appliance configuration page on port 5480, as is the case with most VMware products that ship as appliances. The URL is https://<FQDN of the PSC appliance>:5480 and the credentials are root and the password specified during deployment earlier.
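
If you'd rather script the two verification steps above than click through a browser, a quick reachability check of both endpoints is straightforward. A minimal sketch using only the Python standard library, with a hypothetical PSC FQDN; certificate verification is deliberately skipped because at this point the appliance may still be presenting its default self-signed certificate:

```python
import ssl
import urllib.request

psc_fqdn = "psc01.lab.local"  # hypothetical - replace with your PSC FQDN

# The appliance may still carry its default self-signed cert at this stage,
# so skip certificate verification for this simple reachability check.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for url in (f"https://{psc_fqdn}/websso",
            f"https://{psc_fqdn}:5480"):
    with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
        print(url, "->", resp.status)
```

Anything other than an HTTP 200 (or a clean redirect) at this stage is worth investigating before moving on to the certificate work below.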

Optional – Replace the VMCA root certificate

This is only required if you already have an enterprise CA hierarchy in place within your organisation, such as a Microsoft CA. However, if you are a WINTEL house without one, I would highly recommend that you deploy a Microsoft Enterprise CA using Windows Server, as it is quite useful for many use cases, including automation tasks involved with XaaS platforms (i.e. running vRO workflows to create an Active Directory user cannot happen without an LDAPS connection, for which the Domain Controllers need a valid certificate, etc.). So, if you have an Enterprise CA, you should make the PSC a subordinate certificate authority by replacing its default root cert with a valid cert from the Enterprise CA.
Note that this should ideally happen before deploying the vCenter server appliance, in order to keep the process simple.
  1. To do this, follow the steps listed in VMware KB 2111219, under the section "Replacing VMCA of the Platform Services Controller with a Subordinate Certificate Authority Certificate". (To be specific, if your deployment is greenfield and you are following my order of component deployment, meaning the vCenter server has not yet been deployed, ONLY follow the first 3 steps listed under that section. I've listed them below FYI.)
    1. Creating a Microsoft Certificate Authority Template for SSL certificate creation in vSphere 6.0 (2112009)
    2. Configuring vSphere 6.0 VMware Certificate Authority as a subordinate Certificate Authority (2112016)
    3. Obtaining vSphere certificates from a Microsoft Certificate Authority (2112014)
  2. DO NOT follow the rest of the steps yet (unless you already have a vCenter server attached to the PSC) as they are NOT YET required.

 

PSC configuration

There is not much to configure on PSC at this stage as the SSO configuration and integration with AD will be done at a later stage, once the vCenter Server Appliances have also been deployed with the vCenter Server service.

 

There you have it. Your PSC appliance is now deployed and the default VMCA root certificate has been replaced with a subordinate certificate from your existing enterprise CA, so that VMware vSphere components that receive a cert from the VMCA will have the full organisational cert chain, all the way from the enterprise root CA cert to the VMCA-issued cert.

Next, we’ll look at the VCSA appliance deployment and configuration.

 

1. VMware vSphere 6.x – Deployment Architecture Key Notes

<-Home Page for VMware vSphere 6.x articles

The first thing to do in a vSphere 6.x deployment is to understand the new deployment architecture options available on the vSphere 6.0 platform, which are somewhat different from previous versions of vSphere. The notes below highlight the key information but are not a complete guide to all the changes; for that I'd advise you to refer to the official vSphere documentation (found here).

Deployment Architecture

The deployment architecture for vSphere 6 is somewhat different from the legacy versions. I'm not going to document all of the architectural differences (please refer to the VMware product documentation for vSphere 6), but I will mention a few of the key ones which I think are important, in bullet points below.

  • vCenter Server – consists of 2 key components
    • Platform Services Controller (PSC)
      • The PSC includes the following components
        • SSO
        • vSphere Licensing Server
        • VMCA – VMware Certificate Authority (a built-in SSL certificate authority to simplify certificate provisioning to all VMware products including vCenter, ESXi, vRealize Automation, etc. The idea is that you associate this with your existing enterprise root CA or a subordinate CA, such as a Microsoft CA, and point all VMware components at this.)
      • The PSC can be deployed as an appliance or on a Windows machine
    • vCenter Server
      • Appliance (vCSA) – includes the following services
        • vCenter Inventory Server
        • PostgreSQL
        • vSphere Web Client
        • vSphere ESXi Dump Collector
        • Syslog Collector
        • Syslog Service
        • Auto Deploy
      • A Windows version is also available.

Note: ESXi remains the same as before without any significant changes to its core architecture or the installation process.

Deployment Options

What's in red below are the deployment options that I will be using in the subsequent sections to deploy vSphere 6 U1, as they represent the likely choices adopted in most enterprise deployments.

  • Platform Services Controller Deployment
    • Option 1 – Embedded with vCenter
      • Only suitable for small deployments
    • Option 2 – External – a dedicated, separate deployment of the PSC to which external vCenter(s) will connect
      • A single PSC instance or a clustered PSC deployment consisting of multiple instances is supported
      • 2 options are supported here:
        • Deploy an external PSC on Windows
        • Deploy an external PSC using the Linux-based appliance (note that this option involves deploying the same vCSA appliance, but during deployment you select the PSC mode rather than vCenter)
    • The PSC needs to be deployed first, followed by the vCenter deployment, as concurrent deployment of both is NOT supported!
  • vCenter Server Deployment – the vCenter deployment architecture consists of 2 choices
    • Windows deployment
      • Option 1: with the built-in PostgreSQL
        • Only supported for small to medium sized environments (20 hosts or 200 VMs)
      • Option 2: with an external database system
        • The only external database system supported is Oracle (no more SQL Server databases for vCenter)
      • This effectively means that you are now advised (indirectly, in my view) to always deploy the vCSA version as opposed to the Windows version of vCenter, especially since the feature gap between the vCSA and Windows vCenter versions has now been bridged
    • vCSA (appliance) deployment
      • Option 1: with the built-in PostgreSQL DB
        • Supported for up to 1,000 hosts and 10,000 VMs (this, I reckon, will be the most common deployment model for the vCSA now, due to the supported scalability and the simplicity)
      • Option 2: with an external database system
        • As with the Windows version, only Oracle is supported as an external DB system

PSC and vCenter deployment topologies

Certificate Concerns

  • The VMCA is a complete Certificate Authority for vSphere and related components, where the vSphere-related certificate issuing process is automated (it happens automatically when adding vCenter servers to the PSC and when adding ESXi servers to vCenter).
  • For those who already have a Microsoft CA or a similar enterprise CA, the recommendation is to make the VMCA a subordinate CA, so that all certificates allocated by the VMCA to vSphere components carry the full certificate chain, all the way from your Microsoft root CA (i.e. Microsoft Root CA cert -> Subordinate CA cert -> VMCA Root CA cert -> allocated cert, for the vSphere components).
  • In order to achieve this, the following steps need to be followed in the listed order.
    • Install the PSC / deploy the PSC appliance first
    • Use an existing root / enterprise CA (i.e. Microsoft CA) to generate a subordinate CA certificate for the VMCA and replace the default VMCA root certificate on the PSC.
      • To achieve this, follow the VMware KB articles listed here.
      • Once the certificate replacement is complete on the PSC, do follow the "Task 0" outlined here to ensure that the vSphere service registrations with the VMware Lookup Service are also updated. If not, you'll have to follow "Tasks 1 – 4" to manually update the sslTrust parameter value for the service registration using the ls_update_certs.py script (available on the PSC appliance). Validating this here can save you lots of headaches down the line.
    • Now install vCenter & point it at the PSC for SSO (the VMCA will automatically allocate the appropriate certificates)
    • Add ESXi hosts (the VMCA will automatically allocate the appropriate certificates)

Key System Requirements

  • ESXi system requirements
    • Physical components
      • Needs a minimum of 2 CPU cores per host
      • HCL compatibility (CPUs released after September 2006 only)
      • NX/XD bit enabled in the BIOS
      • Intel VT-x enabled
      • SATA disks will be considered remote (meaning no scratch partition on SATA)
    • Booting
      • Booting from UEFI is supported
      • But no Auto Deploy or network booting with UEFI
    • Local Storage
      • Disks
        • The recommended size for booting from local disk is 5.2GB (for VMFS and the 4GB scratch partition)
        • The supported minimum is 1GB
          • The scratch partition is then created on another local disk or a RAMDISK (/tmp/ramdisk) – not recommended to be left on the ramdisk, for performance & memory optimisation
      • USB / SD
        • The installer DOES NOT create a scratch partition on these drives
        • It either creates the scratch partition on another local disk or a ramdisk
        • 4GB or larger is recommended (though the minimum supported is 1GB)
          • The additional space is used for the core dump
        • 16GB or larger is highly recommended
          • Prolongs the flash cell life
  • vCenter Server system requirements (a quick sizing helper is sketched just after this list)
    • Windows version
      • Must be connected to a domain
      • Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
    • Appliance version
      • Virtual hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
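
The sizing figures above are easy to encode as a quick lookup when planning a deployment. A rough sketch in Python based on the numbers listed here; treat the thresholds as indicative and always confirm against the official documentation:

```python
# Indicative vCenter 6.0 appliance sizes taken from the figures above:
# (name, max hosts, max VMs, vCPUs, RAM in GB)
SIZES = [
    ("tiny",   10,    100,    2,  8),
    ("small",  100,   1000,   4,  16),
    ("medium", 400,   4000,   8,  24),
    ("large",  1000,  10000,  16, 32),
]

def pick_vcsa_size(hosts: int, vms: int) -> str:
    """Return the smallest appliance size that covers the given inventory."""
    for name, max_hosts, max_vms, cpus, ram in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return f"{name}: {cpus} vCPUs / {ram}GB RAM"
    raise ValueError("Inventory exceeds a single vCenter - consider multiple vCenter instances")

print(pick_vcsa_size(hosts=120, vms=1500))   # -> medium: 8 vCPUs / 24GB RAM
```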

In the next post, we’ll look at the key deployment steps involved.

Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms


It looks like the folks at the Redmond campus have released a brand new licensing model for Windows Server 2016 (currently in Technical Preview 4, due for release in 2016). I've had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately, I cannot say I'm impressed with what I've seen (quite the opposite, actually), and the new licensing is going to sting most customers, especially those that host private clouds or large VMware / Hyper-V clusters with high density servers.

What’s new (Licensing wise)?

Here are the 2 key licensing changes.

  1. From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will be based on physical cores, per CPU.
  2. A minimum of 16 core licenses (sold in packs of 2 cores, so a minimum of 8 packs to cover 16 cores) is required for each physical server. This can cover either 2 processors with 8 cores each or a single processor with 16 cores. Note that this is the minimum you can buy; if your server has additional cores, you need to buy additional licenses in packs of 2. So for a dual socket server with 12 cores in each socket, you need 12 x 2-core Windows Server DC licenses + CALs.

The most obvious change is the introduction of core-based Windows Server licensing. Yes, you read that correctly...!! Microsoft is jumping on the increasing core counts available in modern processors and trying to cash in on them, removing the socket-based licensing approach that has been in place for over a decade and introducing a core-based license instead. And they don't stop there. One might expect that if they switch to a CPU core based licensing model, those with fewer cores per CPU socket (4 or 6) would benefit from it, right? Wrong...!!! By introducing a mandatory minimum number of cores you must license per server (regardless of the actual physical core count available in each CPU of the server), they are also making you pay a guaranteed minimum licensing fee for every server (almost a guaranteed minimum income per server which, at worst, is the same as the Windows Server 2012 licensing revenue based on CPU sockets).
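
To make the rule concrete, here is a rough sketch of the pack arithmetic in Python, based on the minimums described above (license every physical core, with at least 8 cores per processor and 16 cores per server, sold in 2-core packs):

```python
import math

def win2016_core_packs(sockets: int, cores_per_socket: int) -> int:
    """Number of 2-core license packs needed for one physical server."""
    # Every physical core must be licensed, subject to two minimums:
    # at least 8 cores per processor and at least 16 cores per server.
    licensed_cores = max(cores_per_socket, 8) * sockets
    licensed_cores = max(licensed_cores, 16)
    return math.ceil(licensed_cores / 2)   # licenses are sold in 2-core packs

# The example from the text: a dual-socket server with 12 cores per socket
print(win2016_core_packs(sockets=2, cores_per_socket=12))   # -> 12 packs (24 cores)
```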

Now, Microsoft has said that each license (covering 2 cores) will be priced at 1/8th the cost of a 2-processor license for the corresponding 2012 R2 edition. In my view that's a deliberate smoke screen, aimed at making it look like they are keeping the effective Windows Server 2016 licensing costs the same as they were for Windows Server 2012, when in reality that only holds for a small number of server configurations (servers with up to 8 cores per processor, which hardly anyone uses any more, as most new servers in the datacentre, especially those running some form of hypervisor, typically use 10/12/16 core CPUs these days). See the screenshot below (taken from the Windows Server 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won't.

Windows 2016 Server licensing cost comparison

 

The difference in cost to customers

Take the following scenario for example..

You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core Intel E5-4667 or Intel E7-8860 range CPUs per server. Let's ignore the cost of CALs for the sake of simplicity (you need to buy CALs under the existing 2012 licensing too anyway) and just use the Windows Server list price to compare the effect of the new 2016 licensing model on your cluster.

  • List price of the Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
  • Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12 as above) = $6,155.00 / 8 = $769.37

The total cost to license the 5 nodes in the hypervisor cluster for full VM migration (vMotion / Live Migration) across all hosts would be as follows:

  • Before (with Windows 2012 licensing) = $6,155.00 x 5 = $30,775.00
  • After (with Windows 2016 licensing) = $769.37 x 16 x 5 = $61,549.60

Now, obviously these numbers themselves are not the point (they are just list prices; customers actually pay heavily discounted prices). What is important is the relative change: the new cost is roughly 200% of the current Microsoft licensing cost, i.e. it has effectively doubled. This is absurd in my view...!! The most absurd part is that having to license every underlying CPU in every hypervisor host within the cluster with a Windows Server license (often the Datacenter edition) under the current licensing model was already absurd enough. Even though a VM will only ever run on a single host's CPUs at any given time, Microsoft's strict stance on the immobility of Windows licenses meant that any virtualisation / private cloud customer had to license all the CPUs in the underlying hypervisor cluster to run a single VM, which meant that allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was indirectly enforced by Microsoft, despite how absurd that is in this cloud day and age. And now they are effectively taxing you on the core count too?? That's not far short of daylight robbery for those Microsoft customers.

FYI – given below is the approximate new Windows Server licensing cost, expressed as a percentage of the current cost, for any virtualisation / private cloud customer with more than 8 cores per CPU in a typical 5-server cluster where VM mobility through VMware vMotion or Hyper-V Live Migration across all the hosts is enabled as standard (the arithmetic is reproduced in the sketch after this list).

  • Dual CPU server with 10 cores per CPU = 125% of the current cost
  • Dual CPU server with 12 cores per CPU = 150% of the current cost
  • Dual CPU server with 14 cores per CPU = 175% of the current cost
  • Dual CPU server with 18 cores per CPU = 225% of the current cost
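
These figures are straightforward to reproduce. Here is a short Python sketch using the list prices quoted earlier (2012 R2 Datacenter at $6,155 per 2 sockets, a 2016 2-core pack at 1/8th of that) for the 5-node cluster example; the pack calculation follows the minimums described above:

```python
W2K12_DC_PER_2_SOCKETS = 6155.00                      # 2012 R2 Datacenter list price (2 CPUs)
W2K16_PER_2_CORE_PACK = W2K12_DC_PER_2_SOCKETS / 8    # ~ $769.37 per 2-core pack

def packs_per_server(sockets, cores_per_socket):
    # License all cores, with minimums of 8 cores per processor and 16 per server
    cores = max(max(cores_per_socket, 8) * sockets, 16)
    return cores // 2

nodes = 5   # the 5-node cluster from the example above
for cores in (10, 12, 14, 16, 18):
    cost_2012 = W2K12_DC_PER_2_SOCKETS * nodes
    cost_2016 = W2K16_PER_2_CORE_PACK * packs_per_server(2, cores) * nodes
    print(f"2 x {cores}-core CPUs: ${cost_2012:,.2f} -> ${cost_2016:,.2f} "
          f"({cost_2016 / cost_2012 * 100:.0f}% of the 2012 R2 cost)")
```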

And this is based on today's technology. There is no doubt that CPU core counts will keep growing and, with them, the price increase will only get more and more ridiculous.

My Take

It is pretty obvious what MS is attempting to achieve here. With the ever increasing core count in CPUs, 2-CPU server configurations have become (if they weren't already) the norm for many datacentre deployments, and rather than being content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, Microsoft is now trying to benefit from every additional core that Moore's law inevitably introduces in newer generations of CPUs. 12-core processors are already becoming the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming the standard (14, 16 and 18 cores per socket are not rare anymore with the Intel Xeon E5 & E7 range, for example).

I think this is a shocking move from Microsoft and I cannot quite see any justifiable reason for it, other than pure greed and a complete and utter disregard for their customers. As much as I've loved Microsoft Windows as an easy to use platform of choice for application servers over the last 15-odd years, I will now, for once, be advising my customers to put plans in place to strategically move away from Windows, as it is going to be price prohibitive for most, especially if you are going to run an on-premise datacentre with some sort of virtualisation (which most do) going forward.

Many customers have already successfully standardised their enterprise datacentre on the much cheaper LAMP stack (Linux platform) as the preferred guest OS of choice for their server & application stack. Typically, new start-ups (who don't have the burden of legacy Windows apps) or large enterprises (with sufficient manpower with Linux skills) have managed to do this successfully so far, but I think if this expensive Windows Server licensing stays, lots of other folks who have traditionally been happy and comfortable with their legacy Windows knowledge (and have therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to public cloud). If you retain your workload on-prem, Linux will naturally be the best choice available. For most enterprise customers, continuing to run their private cloud / own data centres using Windows servers / VMs on high capacity hypervisor nodes is going to be price prohibitive.

In my view, most current Microsoft Windows Server customers have remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they've accumulated over the years; any attempt to move away from that would have been too complex / risky / time consuming. However, it has now come to a point where many customers are being forced to re-write their app stacks from the ground up anyway, because of the way public cloud systems work, and while they are at it, it makes sense to choose a less expensive OS stack for those apps, saving a bucket load of unnecessary costs in Windows Server licensing. So possibly the time is right to bite the bullet and get on with embracing Linux?

So, my advice for customers is as follows.

Tactical:

  1. Voice your displeasure at this new licensing model: use all means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more of a collective noise we all make, the more likely it is to be heard by the powers at Microsoft.
  2. Get yourself into a Microsoft ELA for a reasonable length of time OR add Software Assurance (pronto): if you have an ELA, MS have said they will let people carry on buying per-processor licenses until the end of the ELA term, so that effectively lets you lock yourself in under the current Server 2012 licensing terms for a reasonable length of time until you figure out what to do. Alternatively, if you have SA, at the end of the SA term MS will let you declare the total number of cores covered under the current per-CPU licensing and will grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to consider over-provisioning / over-buying your per-processor licenses along with SA now for any known future requirements, in order to save costs.

Strategic:

  1. Put in a plan to move your entire workload onto public cloud: this is probably the easiest approach but not necessarily the smartest, especially if it's better for you to host your own datacenter given your requirements. Also, even if you plan to move to public cloud, there's no guarantee that any public cloud provider other than Microsoft Azure will remain commercially viable for running Windows workloads, in case MS changes the SPLA terms for 2016 too.
  2. Put in a plan to move away from Windows to a different, cheaper platform for your workload: this is probably the best and safest approach. Many customers will have evaluated this at some point in the past but shied away from it, as it's a big change and requires people with the right skills. Platforms like Linux have been enterprise ready for a long time now and there is a reasonable pool of skills in the market. And if your on-premise environment is standardised on Linux, you can easily port your applications over to many cheap public cloud platforms too, which are typically much cheaper than running Windows VMs. You are then also able to deploy true cloud native applications and benefit from many open source tools and technologies that seem to be making a real difference to the efficiency of IT for businesses.

This article and the views expressed in it are mine alone.

Comments / Thoughts are welcome

Chan

P.S. This kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired on them and which VMware had to completely scrap. I hope enough customer pressure will cause Microsoft to back off too.

VMware VSAN 2016 Future Announcements

I've just attended the VMware Online Technology Forum (#VMwareOTF) and thought I'd share a few really interesting announcements I noticed there around the future of VSAN in 2016.

Good news: some really great enterprise scale features are being added to VSAN, aimed for release in Q1 FY16 (along with the next vSphere upgrade release). Really good news. The beta is now live (apply at http://vmware.com/go/vsan6beta), but unless you have All Flash VSAN hardware, you are unlikely to qualify.

Given below are the key highlight features likely to be available with the next release.

  • RAID-5 and RAID-6 over the network – Cool….!!

Future-RAID

 

  • Inline De-duplication / Compression along with Checksum capabilities coming

Future-Dedupe

 

 

  • VSAN for Object Storage (read more on Duncan Epping’s page here)

Future-Object Storage

 

  • VSAN for External Storage – Virtual Disks natively presented on external Storage

Future-External

 

Great news. Looks like an already great product is going to get even greater…!!

Slide credits go to VMware & the legendary Duncan Epping (@DuncanYB)…..

Cheers

Chan

 

 

FlexPod: The Joint Wonder From NetApp & Cisco (often with VMware vSphere on Top)


While attending NetApp Insight 2015 in Berlin this week, I was reminded of the monumental growth in the number of customers who have been deploying FlexPods as their preferred converged solutions platform, which now celebrates its 5th year in operation. So I thought I'd do a very short post on it, give you my personal take and highlight some key materials.

FlexPod has been gaining a lot of market traction as the preferred converged platform of choice for many customers over the last 4 years. This is due to the solid hardware technologies that underpin the solution (Cisco UCS compute + Cisco Nexus unified networking + the NetApp FAS range of clustered Data ONTAP SANs). Customers often deploy FlexPod solutions with VMware vSphere or MS Hyper-V on top (other hypervisors are also supported), which together provide a complete, ready to go live, private and hybrid cloud platform that has been pre-validated to run most, if not all, typical enterprise data center workloads. I have been a strong advocate of FlexPod (simply due to its technical superiority as a converged platform) for many of my customers since its inception.

Given below are some of the interesting FlexPod validated designs from Cisco & NetApp for Application performance, Cloud and automation, all in one place.

There are over 100 FlexPod validated designs available in addition to the above, and they can all be found below.

There is a certified, pre-validated, detailed FlexPod design and deployment guide for almost every datacentre workload, and based on my first-hand experience, FlexPod with VMware vSphere has always been a very popular choice amongst customers as things just work together beautifully. Given the joint vendor support available, sourcing support for all the technology in the solution through a single point of contact is easy too. I also think customers prefer FlexPod over other similar converged solutions, say VBLOCK for example, due to the non-prescriptive nature of FlexPod, whereby you can tailor-make a FlexPod solution that meets your needs (a FlexPod partner can do this for a customer), which keeps the costs down too.

There are many FlexPod certified partners who can size, design, sell and implement a FlexPod solution for a customer, and my employer Insight is one of them (in fact we were amongst the first few partners to gain FlexPod partnership in the UK). So if you have any questions around the potential use of a FlexPod system, feel free to get in touch with me directly (contact details are in the About Me section of this site) or through the FlexPod section of the Insight Direct UK web site.

Cheers

Chan

VMware VSAN – Why VSAN (for vSphere)?

I don't really use my blog for product marketing or as a portal for adverts for random products; it's purely for me to write about technologies I think are cool and why I think they are worth looking into. On that note, I've always wanted to write a quick post about VMware VSAN since the first version of it was released with vSphere 5.5 a while back, because I was really excited about the technology and what it could become as it went through the typical evolution cycle. But at the same time, I didn't want to come across as aiding the marketing of a brand new technology that I hadn't seen performing in real life. So I kind of reined myself in a little, sat back and waited to see how well it performed out in the real world and whether the architecturally sound technology would actually live up to its reputation and potential in the field.

And guess what? It sure has lived up to it, to be honest even better than I thought, and with the most recent release (VSAN 6.1, released alongside vSphere 6.0 U1) its enterprise capabilities have grown significantly as well. The latest features, such as Stretched VSAN clusters (adios Metro Clusters for vSphere), a branch office solution (VSAN ROBO), VSAN replication, SMP-FT support, Windows failover clustering support and Oracle RAC support, etc. (more details here), have truly made it an enterprise storage solution for vSphere. With the massive uptake of HCI solutions (Hyper-Converged Infrastructure), of which VSAN is also a key part (think VMware EVO:RAIL), as well as a global customer base of over 2,500 already using it in production as their preferred storage solution for vSphere (some of the key ones include Walmart, Air France, BAE, Adobe and a well known global social media site), it's about time I started writing something about it, just to give you my perspective...!!

I will aim to put together a series of articles about VSAN, addressing a number of different aspects of it over the course of the next few weeks, beginning with the obvious one below.

Why VSAN?

I've been a traditional SAN storage guy out in the field, having worked hands on with key enterprise SAN storage tech from NetApp, EMC, HP, etc. for a long time. I've worked with these in all aspects: presales, design, deployment and ongoing support. They are all very good, I still like (some of) their tech, and they certainly still have a place in the datacenter. But they are a nightmare to size accurately, a nightmare to design and implement, and an even bigger nightmare to support in production, and that's from a techie's perspective. From a business / commercial perspective, not only are they expensive to buy upfront and maintain, but they typically come with an inevitable vendor lock-in that keeps you on the hook for 2-5 years, during which you have to buy substantially overpriced components for simple capacity upgrades. They are also very expensive to support (support costs are typically 17%-30% of the cost of the SAN), and it can get even more expensive when the originally purchased support period runs out, because the SAN vendor will typically make the support renewal cost more than buying a new SAN, forcing you to buy another. I suppose this is how the storage industry has always managed to pay for itself to keep innovating and survive, but many customers and even start-up SAN vendors are waking up to this trick and have now started to look at alternative offerings with a different commercial setup.

As an experienced storage guy, I can tell you first hand that the value of enterprise SAN storage is NOT really in the tin (the disk drives or the blue / orange lights) but in the software that manages those tin elements. Legacy storage vendors make you pay for that intelligence once, when you buy the SAN with its controllers (the brains) where this software lives, and then again every time you add disk shelves through guaranteed overpriced shelf upgrades (ever heard your sales person tell you to estimate all your storage needs for the next 5 years and buy it all up front with your SAN because it's cheaper that way?). SAN vendors have been able to overcharge for subsequent shelf upgrades simply because they have managed to get the disk drive manufacturers to inject special code (proprietary firmware) onto the disks, without which the SAN will not recognise the disks in its system, so the customer cannot just go and buy a similar disk elsewhere, even if it is the same disk made by the same manufacturer (vendor lock-in). This overpricing is how the SAN vendor gets the customer to pay for the software intelligence again every time capacity is added. I mean, think about it: you've already paid for the SAN and its software IP when buying it in the first place, so why pay for it again, through paying over the odds, when adding some more shelves (which, after all, only contain disk drives with no intelligence) to expand its capacity?

To make it even worse, the SAN vendor then comes up with a brand new version of the SAN in a few years' time (typically in the form of new software that cannot run on the hardware you have, or a brand new SAN hardware platform altogether). Your current SAN software is then made end of life and is no longer supported (even though it still works fine). Now you are stuck with an artificially created scenario (created by the SAN vendor, of course, and forced upon you) where you cannot carry on running your existing version without paying a hefty support renewal fee (often artificially bloated by the vendor to be more expensive than a new hardware SAN), nor can you simply upgrade the software on the current hardware platform, as the new software is no longer supported by the vendor on your existing hardware. And transferring the software license you've already bought over to a new set of hardware (new SAN controllers) is strictly NOT allowed either (a carefully orchestrated and very convenient scenario for the SAN vendor, isn't it?). Enter the phrase "SAN upgrade": a disruptive, laborious and, worst of all, unnecessary expense where you are indirectly forced by the vendor to pay again for the same software intelligence that you've already supposedly paid for, on a different set of hardware (a new SAN). This is a really good business model for the SAN vendor, and there's a whole ecosystem of organisations that benefit massively from this recurring (arguably never ending) procurement cycle, at the expense of the customer.

I see VMware VSAN as one of the biggest answers to this for vSphere shared storage use cases. With VMware VSAN, you have the freedom to choose your hardware, including cheaper commodity hardware, where you only pay the true cost of a disk drive based on its capacity, without also paying a surcharge for the software intelligence every time you add a drive. VSAN is licensed per CPU socket instead of per capacity unit (MB/GB/TB), so you pay for the software intelligence once, irrespective of the actual capacity, during the initial procurement, and that's it. For every scale-up requirement (adding capacity), you simply buy the disk drives at their true cost and add them to existing nodes. If you need to scale out (add more nodes), you then pay for the CPU sockets on the additional node(s). That, to me, sounds a whole lot fairer than the traditional SAN vendors' model of charging for the software upfront and then charging for it again indirectly during every capacity upgrade and SAN upgrade. Unlike with traditional SAN vendors, every time a new version of the (VSAN) software comes out, you simply upgrade your ESXi version, which is free of charge (if you have ongoing support), so you never have to pay for the software intelligence again (and even when the ESXi host hardware needs replacing in future, you can reuse the VSAN licensing on the new hardware nodes, which is something traditional SAN vendors don't let you do).

Typically, due to all these reasons, a legacy hardware SAN costs around $7 – $10 per GB, whereas VSAN tends to be around the $1 – $2 per GB mark, based on the data I've seen.

A simple example of an upfront cost comparison is below. Note that this only shows the difference in upfront cost (CAPEX) and doesn't take into account the ongoing cost differences, which make it even more appealing, for the reasons explained above.

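As a rough, back-of-the-envelope sketch using the per-GB figures quoted above; the 50TB usable capacity is a hypothetical example rather than a figure from any particular customer, so plug in your own numbers:

```python
# Rough, illustrative CAPEX comparison using the per-GB figures quoted above.
# The 50TB usable capacity is a hypothetical example - plug in your own numbers.
usable_tb = 50
usable_gb = usable_tb * 1024

legacy_san_per_gb = (7, 10)    # typical $/GB range quoted above for a legacy HW SAN
vsan_per_gb = (1, 2)           # typical $/GB range quoted above for VSAN

for label, (low, high) in (("Legacy HW SAN", legacy_san_per_gb),
                           ("VMware VSAN", vsan_per_gb)):
    print(f"{label:14}: ${usable_gb * low:,.0f} - ${usable_gb * high:,.0f} "
          f"for {usable_tb}TB usable")
```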

Enough of the commercial and business justification as to why VSAN is better. Let's look at a few of the technology and operational benefits.

  • It's flexible
    • VSAN, being a software defined storage solution, gives the customer much needed flexibility: you are no longer tied to a particular SAN vendor.
    • You no longer have to buy expensive EMC or NetApp disk shelves either, as you can procure commodity hardware and design your DC environment as you see fit
  • It's a technically better storage solution for vSphere
    • Since the VSAN drivers are built into the ESXi kernel itself (the hypervisor), VSAN sits directly in the IO path of the VMs, which gives it superior performance with sub-millisecond latency
    • It is also tightly integrated with other beloved vSphere features such as vMotion, HA, DRS and Storage vMotion, as well as other VMware Software Defined Datacenter products such as vRealize Automation and vSphere Replication.
  • Simple and efficient to manage
    • Simple setup (a few clicks) and policy based management, all defined within the same single pane of glass used for vSphere management
    • No need for expensive storage admins to manage and maintain a complex 3rd party array
    • If you know vSphere, you pretty much know VSAN already
    • No need to manage "LUNs" anymore – if you are a storage admin, you know what a nightmare this is, including the overhead of managing the HW fabric too.
  • Large scale-out capability
    • Supports up to 64 nodes currently (the 64 node limit is NOT from VSAN but from the underlying vSphere; this will go up with future versions of vSphere)
    • 6,400 VMs / 7M IOPS / 8.8 petabytes
  • High availability
    • Provides 99.999% availability by default
    • No single point of failure due to its distributed architecture
    • Scaling out (adding nodes) or scaling up (adding disks) never requires downtime again.

This list could go on, but before this whole post ends up looking like a product advert on behalf of VMware, I'm going to stop, as I'm sure you get my point.

VMware VSAN, to me, now looks like a far more attractive proposition for vSphere private cloud solutions than having to buy a 3rd party SAN. Some of the new features coming in the future (NSX integration, etc.) will no doubt make it an even stronger candidate for most vSphere storage requirements going forward. As a technology it is sound, backed by one of the most innovative companies on the planet, designed from the ground up to work without the overhead of a file system (WAFL people might not like this too much, sorry guys!), and I would keep a keen eye on how VMware VSAN eats into a lot of the typical vSphere storage revenue of the legacy hardware SAN vendors over the next few years. Who knows, EMC may well have seen this coming some time ago, which may have contributed to the decision to merge with Dell too.

If you have a new vSphere storage requirement, my advice would be to strongly consider the use of VSAN as your first choice.

In the next post of this series, I will attempt to explain & summarise the VSAN sizing and design guidelines.

Cheers

Chan

VMworld Europe 2015 – Day 1 & 2 summary

Day 1 of VMworld Europe began with the usual general session in the morning, down at hall 7.0, continuing the VMworld US theme of "Ready for Any" at the European event. It has become standard for VMware to announce new products (or re-announce products from VMworld US) during this session which, by now, are somewhat public knowledge, and this year was no different. Also of special note was a recorded video message from their new boss, Michael Dell (I'm sure everyone's aware of Dell's acquisition of EMC on Monday), in which he assured the audience that VMware would remain a publicly listed company and is a key part of the Dell-EMC enterprise.

To summarise the key message from the general session, VMware are planning to deliver 3 main things:

  • One Cloud – Seamless integration facilitated by VMware products between your private cloud / on-premise and various public clouds such as  AWS, Azure, Google…etc. Things like long distance VMotion, provided by vSphere 6,  Stretched L2 connectivity provided by NSX will make this a possibility
  • Any Application – VMware will build their SDDC product set to support containerisation of traditional (legacy client-server type) apps as well as new Cloud Native Apps going forward. Some work is already underway with the introduction of vSphere Integrated containers which I’d encourage you to have a look as well as VMware Photon platform
  • Any Device – Facilitate connectivity to any cloud / any application from any end user device

Additional things announced also included vRealize Automation version 7.0 (currently in BETA, looks totally cool), VMware vCloud NFV platform availability for the Telco companies…etc.

Also worth mentioning that 2 large customers, Nova Media and Telefónica, had their CEOs on stage to explain how they managed to gain agility and a market edge through the use of VMware’s SDDC technologies such as vSphere, NSX, vRealize Automation…etc., which was really good to see.

There were a few other speakers at the general session, such as Bill Fathers (on cloud services – mainly vCloud Air), which I’m not going to cover in detail, but suffice to say that VMware’s overall product positioning and corporate message to customers sound very catchy I think… and are very relevant to what’s going on out there too…

During the rest of day 1, I attended a number of breakout sessions, the 1st of which was the Converged Blueprints session presented by Kal De, VP of VMware R&D. This was based on the new vRealize Automation (version 7.0) and, needless to say, was of total interest to me. So much so that straight after the session I managed to get on to the BETA programme for vRA 7.0 (it may be closed to the public by now). Given below are some highlights from the session, FYI:

  • An improved, more integrated blueprint canvas where blueprints can be built through a drag-and-drop approach. Makes it a whole lot easier to build blueprints now.
  • Additional NSX integration to provide out of the box workflows….etc
  • Announcement of converged blueprints including IaaS, XaaS and Application services all in one blueprints…. Awesome…!!
  • Various other improvements & features….
  • Some potential (non-committal, of course) roadmap information was also shared, such as the future ability to provision a single blueprint across multiple platforms and clouds, blueprints supporting container-based Cloud Native Apps, and aligning vRA blueprints-as-code with industry standards such as OASIS TOSCA and open source HEAT…etc.

Afterwards, I had some time to spare, so I went to the Solutions Exchange and had a browse around as many vendor stands as possible. Most of the key vendors were there with their usual tech, with the EMC (or Dell now??) and VCE stands being the loudest (no surprise there then??). However, I want to mention the following 2 new VMware partner start-ups I came across that really caught my attention. These were both new to me and I really liked what both of them had to offer.

  • RuneCast:
    • This is a newly formed Czech start-up and basically what they do is hoover in all the VMware KB articles containing configuration best practises and bug fix instructions, and assess your vSphere environment components against that information to warn you of configuration drift from the recommended state – almost like a best practise analyser. The best part is the cost, which is fairly cheap at $25 per CPU per month (list price, which usually gets heavily discounted)… A really simple but good idea, made more appealing by the low cost. Check it out…!! (A rough sketch of the general idea follows after this list.)
  • Velvica:
    • These guys provide a billing and management platform to cloud service providers (especially small to medium scale cloud service providers) so they don’t have to build such capabilities from the ground up on their own. If you are a CSP, all that is required is a VMware vCloud Director instance – you simply point the Velvica portal at the vCD instance to present a self-serviceable public cloud portal to customers. It can also be used internally within an organisation if you have a private cloud. Again, I hadn’t come across this before, and I thought their offering helps many small CSPs get to market quicker while also providing a good platform for corporate & enterprise customers to introduce utility computing internally without much initial delay or cost.
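
To illustrate the general idea behind that kind of best-practise checker (purely my own sketch, not how RuneCast is actually implemented), the few lines of pyVmomi below compare a host advanced setting against a KB-recommended value and flag any drift. The setting name and recommended value are made-up examples.

```python
# Illustrative sketch only (not RuneCast's implementation): flag drift between a
# host advanced setting and a KB-recommended value. The setting name and value
# are made-up examples; 'host' is a vim.HostSystem from a connected pyVmomi session.
def check_advanced_setting(host, name, recommended):
    """Return a drift warning, or None if the host matches the recommendation."""
    current = host.configManager.advancedOption.QueryOptions(name)[0].value
    if current != recommended:
        return f"{host.name}: {name} = {current} (KB recommends {recommended})"
    return None

# Hypothetical usage, assuming 'hosts' was gathered via a vim.HostSystem
# container view on a connected ServiceInstance:
# for host in hosts:
#     warning = check_advanced_setting(host, "NFS.MaxVolumes", 256)
#     if warning:
#         print(warning)
```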

During the rest of day 1, I attended a few more breakout sessions, such as the vCenter 6.0 HA deepdive. While this was not as good a session as I had expected, I did learn a few little things, such as the vCenter database NOT being officially supported on SQL AAG (AlwaysOn Availability Groups) prior to vSphere 6 U1, the Platform Services Controller being clusterable without a load balancer (requiring manual failover tasks, of course), as well as a tech preview of the native HA capability coming to vCenter (no need for vCenter Server Heartbeat or any 3rd party products anymore), which looked pretty cool.

On day 2, there was another general session in the morning where VMware discussed their strategy and new announcements around EUC, security & SDN…etc. with various BU leaders on stage. VMware CEO Pat Gelsinger also came on stage to discuss the future direction of the organisation (though I suspect much of this may be influenced by Dell if VMware remains a part of Dell??).

Following on from the general session on day 2, I attended an NSX micro-segmentation automation deep dive breakout session, presented by 2 VMware Professional Services team members from the US. This was really cool, as they showed a live demo of creating custom vRO workflows to perform NSX operations and using them to automate routine NSX tasks. While they didn’t mention this, it should be noted that these workflows can naturally be accessed from vRealize Automation, where routine IT tasks can be exposed through a pre-configured service blueprint that users (IT staff themselves) can consume via the vRA self-service portal.
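
To give a flavour of the kind of NSX operation such a workflow would wrap (this is my own rough sketch, not something shown in the session – the NSX Manager address, DFW section ID and rule payload are all made-up, and the exact API schema should be checked against the NSX API guide for your version), here is a Python example of adding a distributed firewall rule via the NSX Manager REST API:

```python
# Rough sketch of the sort of NSX call a custom vRO workflow would wrap:
# add a distributed firewall rule via the NSX Manager REST API. Hostname,
# credentials, section ID and payload are illustrative assumptions only.
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical NSX Manager
AUTH = ("admin", "********")
SECTION_ID = "1007"                    # hypothetical DFW layer-3 section

RULE_XML = """<rule disabled="false" logged="true">
  <name>web-to-db-3306</name>
  <action>allow</action>
  <services>
    <service><protocol>6</protocol><destinationPort>3306</destinationPort></service>
  </services>
</rule>"""

section_url = (f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/"
               f"layer3sections/{SECTION_ID}")

# NSX-v uses optimistic locking on DFW sections: read the section first to get
# its ETag, then pass it back via If-Match when posting the new rule.
section = requests.get(section_url, auth=AUTH, verify=False)
section.raise_for_status()

resp = requests.post(f"{section_url}/rules", data=RULE_XML, auth=AUTH,
                     verify=False,
                     headers={"Content-Type": "application/xml",
                              "If-Match": section.headers["ETag"]})
resp.raise_for_status()
print("DFW rule created, HTTP", resp.status_code)
```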

While I had a few other breakout sessions booked for afterwards, unfortunately I was not able to attend them due to a last minute onsite meeting at VMworld with a customer, to discuss a specific requirement of theirs.

I will be spending the rest of the afternoon looking at more vendor stands at the Solutions Exchange until the official VMware party begins, where I’m planning to catch up with a few clients as well as some good friends…

I will provide an update in tomorrow’s summary if I come across any other interesting vendors at the Solutions Exchange.

Cheers

Chan


VMworld Europe 2015 – Partner Day (PEX)

A quick post about VMworld Europe day 1 (the PEX day)….!! I was meaning to get this post out yesterday, but there are too many distractions when you attend VMworld, let me tell ya….! 🙂

I arrived in Barcelona on Sunday and collected my access pass that same evening. As such, I got to the venue around 9am on the partner day (Monday), and it was already fairly busy with various VMware employees and partners.

As for my schedule for the day, I attended a VSAN deepdive session in the morning, presented by none other than Mr VSAN himself (Simon Todd @ VMware), which was fairly good. To be honest, most of the content was the same as the session he presented a few weeks ago at the VMware SDDC boot camp in London, which I also attended. Some of the interesting points covered include:

  • Oracle RAC / Exchange DAG / SQL Always on Availability Groups are not supported on VSAN with the latest version (6.1)
  • Always use pass-through rather than RAID 0 on VSAN ready nodes, as this gives VSAN full visibility of disk characteristics such as SMART data, and removing disks from disk groups causes less downtime with pass-through than with RAID – which makes sense.
  • Pay attention to SAS expander cards and lane allocation if you do custom builds for VSAN nodes (rather than using pre-configured VSAN ready nodes). For example, a 12Gb SAS expander card can only access 8 lanes, which in an extreme case can be saturated, so it is better to have 2 x SAS expander cards sharing the workload at 8 channels each (see the rough arithmetic after this list).
  • Keep the SATA (capacity) to SSD ratio small in disk groups where possible, to distribute the workload and benefit from maximum aggregate IOPS performance (from the SSD layer).
  • Stretched VSAN (possible with VSAN 6.1) features and some pre-reqs, such as the sub-5ms latency requirement over 10/20/40Gbps links between data sites, multicast requirements, and the 500ms latency requirement between the main site and the offsite witness.
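
To put some rough numbers behind the SAS expander point above (my own back-of-the-envelope assumptions, not figures from the session): with roughly 1.2 GB/s of usable bandwidth per 12Gb SAS lane and around 0.5 GB/s sustained per flash device, 8 lanes behind a single expander can be out-run by a couple of dozen devices, whereas splitting them across two expanders restores the headroom.

```python
# Back-of-the-envelope check of the SAS expander point. All figures are rough
# assumptions (not vendor specs): ~1.2 GB/s usable per 12Gb SAS lane and
# ~0.5 GB/s sustained per flash device.
LANES_PER_EXPANDER = 8
GB_PER_SEC_PER_LANE = 1.2   # approx usable GB/s per 12Gb SAS lane (assumption)
GB_PER_SEC_PER_SSD = 0.5    # approx sustained GB/s per flash device (assumption)

def headroom_gb_per_sec(num_devices, expanders=1):
    """Uplink bandwidth minus aggregate device demand in GB/s (negative = saturated)."""
    uplink = expanders * LANES_PER_EXPANDER * GB_PER_SEC_PER_LANE
    demand = num_devices * GB_PER_SEC_PER_SSD
    return uplink - demand

print(headroom_gb_per_sec(24, expanders=1))   # -2.4 -> one expander can saturate
print(headroom_gb_per_sec(24, expanders=2))   #  7.2 -> two expanders give headroom
```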

Following on from this session, I attended the SDDC Assess, Design & Deploy session presented by Gary Blake (Senior Solutions Architect). That was all about what his team is doing to help standardise the design & deployment process for the Software Defined Data Center components. I found out about something really interesting during this session: VMware Validated Designs (VVD). VVD is something VMware are planning to come out with, which would be kind of similar to a CVD (Cisco Validated Design document, if you are familiar with FlexPod). A VVD will literally provide all the information required for a customer / partner / anyone to design & implement a VMware validated Software Defined Data Center using the SDDC product portfolio. This has been long overdue in my view, and as a VMware partner and a long time customer, I would really welcome this. No full VVDs have been released to the public yet, but you can join the community page to be kept up to date. Refer to the following 3 links

I then attended a separate, offsite roundtable discussion at a nearby hotel with a number of key NSX Business Unit (NSBU) leaders to have an open chat about everything NSX. That was really good, as they shared some key NSX related information and discussed some interesting points. A few of the key ones are listed below.

  • 700+ production customers on board with NSX so far
  • Some really large customers running their production workload on NSX (a major sportswear manufacturer running their entire public facing web systems on NSX)
  • East-West traffic security requirements driving lots of NSX sales opportunities, specifically with VDI.
  • Additional, more focused NSX training will soon be available, such as design & deployment, troubleshooting…etc.
  • It was also mentioned that customers can acquire NSX with limited features for a cheaper price (restricted EULA) if you only need reduced capabilities (for example, if you only need edge gateway services). I’m not sure how to order these though, and would suggest speaking to your VMware account manager in the first instance.
  • Also discussed were potential new pricing options (nothing set in place yet..!!) to make NSX more affordable for small to medium size customers. Price is a clear issue for many smaller customers when it comes to NSX, and if VMware do something to make it more affordable, that would no doubt be really well received. (This was an idea the attendees put forward, and the NSBU was happy to acknowledge it and is looking into doing something about it.)
  • Also discussed was some roadmap information, such as the potential evolution of NSX to provide firewall & security features out on public clouds as well as private clouds.

Overall, the NSX roundtable discussions were really positive, and it finally seems like the NSBU is slowly releasing the tight grip they had around the NSX release and is willing to engage more with the channel to help promote the product, rather than working with only a handful of specialist partners. Also, it was really encouraging to hear about its adoption so far, as I’ve been an advocate of NSX ever since I saw its potential during the early releases. So go NSX….!!!

Overall, I thought the PEX day was ok. Nothing to get too excited about in terms of the breakout sessions…etc, with the highlight being the roundtable with the NSBU staff.

Following on from the discussion with the NSBU, I left the venue to go back to the hotel to meet up with a few colleagues of mine, and we then headed off to a nice restaurant on the Barcelona beachfront called Shoko (http://shoko.biz/) to get some dinner & plan the rest of the week… This is the 2nd time we’ve hit this restaurant and I’d highly recommend checking it out if you are in town.

Unfortunately, I cannot quite recollect much about what happened after that point… 🙂

Post about the official (customer facing) opening day of the VMworld event is to follow….!!

Cheers

Chan