1. VMware vSphere 6.x – Deployment Architecture Key Notes


The first thing to do in a vSphere 6.x deployment is to understand the new deployment architecture options available on the vSphere 6.0 platform, which differ somewhat from previous versions of vSphere. The notes below highlight key information but are not a complete guide to all the changes; for that, I'd advise you to refer to the official vSphere documentation (found here).

Deployment Architecture

The deployment architecture for vSphere 6 is somewhat different from the legacy versions. I'm not going to document all of the architectural differences (please refer to the VMware product documentation for vSphere 6), but I will mention a few of the key ones that I think are important, in the bullet points below.

  • vCenter Server – Consist of 2 key components
    • Platform Service Controller (PSC)
      • The PSC includes the following components
        • SSO
        • vSphere Licensing Server
        • VMCA – VMware Certificate Authority (a built-in SSL certificate authority to simplify certificate provisioning for all VMware products, including vCenter, ESXi, vRealize Automation, etc. The idea is that you associate this with your existing enterprise root CA, or a subordinate CA such as a Microsoft CA, and point all VMware components at it.)
      • The PSC can be deployed as an appliance or on a Windows machine
    • vCenter Server
      • Appliance (vCSA) – includes the following services
        • vCenter Inventory Server
        • PostgreSQL
        • vSphere Web Client
        • vSphere ESXi Dump Collector
        • Syslog Collector
        • Auto Deploy
      • A Windows version is also available.

Note: ESXi remains the same as before without any significant changes to its core architecture or the installation process.

Deployment Options

The following are the deployment options that I will be using in the subsequent sections to deploy vSphere 6 U1, as they represent the likely choices adopted in most enterprise deployments.

  • Platform Services Controller Deployment
    • Option 1 – Embedded with vCenter
      • Only suitable for small deployments
    • Option 2 – External – a dedicated, separate deployment of the PSC to which external vCenter(s) will connect
      • A single PSC instance, or a clustered PSC deployment consisting of multiple instances, is supported
      • Two options are supported here:
        • Deploy an external PSC on Windows
        • Deploy an external PSC using the Linux-based appliance (note that this option involves deploying the same vCSA appliance, but selecting the PSC mode rather than vCenter during deployment)
    • The PSC needs to be deployed first, followed by the vCenter deployment, as concurrent deployment of both is NOT supported!
  • vCenter Server Deployment – the vCenter deployment architecture consists of 2 choices
    • Windows deployment
      • Option 1: with the built-in PostgreSQL database
        • Only supported for small to medium-sized environments (20 hosts or 200 VMs)
      • Option 2: with an external database system
        • Microsoft SQL Server and Oracle are supported as external database systems for the Windows version
      • Given the limited scalability of the embedded database on Windows, you are now advised (indirectly, in my view) to always deploy the vCSA as opposed to the Windows version of vCenter, especially since the feature gap between the vCSA and the Windows vCenter has now been bridged
    • vCSA (appliance) deployment
      • Option 1: with the built-in PostgreSQL DB
        • Supported for up to 1,000 hosts and 10,000 VMs (this, I reckon, will be the most common deployment model for the vCSA now, due to the supported scalability and simplicity)
      • Option 2: with an external database system
        • As with the Windows version, only Oracle is supported as an external DB system

PSC and vCenter deployment topologies

Certificate Concerns

  • VMCA is a complete certificate authority for all vSphere and related components, where the vSphere certificate issuing process is automated (it happens automatically when vCenter servers are added to the PSC and when ESXi hosts are added to vCenter).
  • For those who already have a Microsoft CA or a similar enterprise CA, the recommendation is to make the VMCA a subordinate CA, so that all certificates allocated by the VMCA to vSphere components carry the full certificate chain, all the way from your Microsoft root CA (i.e. Microsoft root CA cert -> subordinate CA cert -> VMCA root CA cert -> allocated cert, for the vSphere components).
  • In order to achieve this, the following steps need to be followed in the listed order.
    • Install the PSC / Deploy the PSC appliance first
    • Use an existing root / enterprise CA (i.e. Microsoft CA) to generate a subordinate CA certificate for the VMCA and replace the default VMCA root certificate on the PSC.
      • To achieve this, follow the VMware KB articles listed here.
      • Once the certificate replacement is complete on the PSC, follow "Task 0" outlined here to ensure that the vSphere service registrations with the VMware Lookup Service are also updated. If not, you'll have to follow "Tasks 1 – 4" to manually update the sslTrust parameter value for each service registration using the ls_update_certs.py script (available on the PSC appliance). Validating this here can save you a lot of headaches down the line.
    • Now Install vCenter & point at the PSC for SSO (VMCA will automatically allocate appropriate certificates)
    • Add ESXi hosts (VMCA will automatically allocate appropriate certificates)

Key System Requirements

  • ESXi system requirements
    • Physical components
      • A minimum of 2 CPU cores per host is required
      • HCL compatibility (only CPUs released after September 2006 are supported)
      • NX/XD bit enabled in the BIOS
      • Intel VT-x enabled
      • SATA disks are considered remote (meaning no scratch partition on SATA)
    • Booting
      • Booting from UEFI is supported
      • However, Auto Deploy and network booting are not supported with UEFI
    • Local Storage
      • Disks
        • The recommended size for booting from a local disk is 5.2GB (to allow for VMFS and the 4GB scratch partition)
        • The supported minimum is 1GB
          • In that case the scratch partition is created on another local disk or a RAMDISK (/tmp/ramdisk) – leaving it on the ramdisk is not recommended, for performance & memory optimisation reasons
      • USB / SD
        • The installer DOES NOT create a scratch partition on these drives
        • It either creates the scratch partition on another local disk or on a ramdisk
        • 4GB or larger recommended (though min supported is 1GB)
          • Additional space used for the core dump
        • 16GB or larger is highly recommended
          • Prolongs the flash cell life
  • vCenter Server System Requirements
    • Windows version
      • Must be joined to a domain
      • Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
    • Appliance version
      • Virtual Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM

In the next post, we’ll look at the key deployment steps involved.

Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms


It looks like the folks at the Redmond campus have released a brand-new licensing model for Windows Server 2016 (currently in Technical Preview 4, due for release in 2016). I've had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately, I cannot say I'm impressed by what I've seen (quite the opposite, actually), and the new licensing is going to sting most customers, especially those that host private cloud or large VMware / Hyper-V clusters with high-density servers.

What’s new (Licensing wise)?

Here are the 2 key licensing changes.

  1. From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will now be based on physical cores, per CPU
  2. A minimum of 16 core licenses (sold in packs of 2 cores, so a minimum of 8 packs to cover 16 cores) is required for each physical server. This can cover either 2 processors with 8 cores each or a single processor with 16 cores. Note that this is the minimum you can buy; if your server has additional cores, you need to buy additional licenses in packs of 2. So for a dual-socket server with 12 cores per socket, you need 12 x 2-core Windows Server DC license packs + CALs.
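Under the rules above, the number of 2-core packs needed per server is simply the total core count (subject to the 16-core per-server minimum) divided by two. Here's a quick sketch of that arithmetic (illustrative only; always check the actual licensing terms with Microsoft):

```python
import math

def packs_required(sockets: int, cores_per_socket: int) -> int:
    """Number of 2-core Windows Server 2016 license packs for one server.

    Per the rules above: licensing is per physical core, with a minimum
    of 16 cores licensed per server, sold in packs of 2 cores.
    """
    licensable_cores = max(sockets * cores_per_socket, 16)
    return math.ceil(licensable_cores / 2)

print(packs_required(2, 8))    # 2 x 8-core CPUs -> the 8-pack minimum
print(packs_required(2, 12))   # 2 x 12-core CPUs -> 12 packs
print(packs_required(1, 4))    # small server still pays the 16-core minimum
```

Note how a small single-socket server still pays for 16 cores regardless of its actual core count.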

The most obvious change is the move to core-based Windows Server licensing. Yes, you read that correctly! Microsoft is jumping on the increasing core counts available in modern processors and trying to cash in by replacing the socket-based licensing approach that has been in place for over a decade with a core-based license. And they don't stop there. You might expect that, with a core-based licensing model, those with fewer cores per CPU socket (4 or 6) would benefit, right? Wrong! By introducing a mandatory minimum number of cores you must license per server (regardless of the actual physical core count in each CPU), they are also making you pay a guaranteed minimum licensing fee for every server (almost a guaranteed minimum income per server which, at worst, equals the Windows Server 2012 licensing revenue based on CPU sockets).

Now, Microsoft has said that each license (covering 2 cores) will be priced at 1/8th the cost of the corresponding 2-processor Windows Server 2012 R2 license. In my view, that's a deliberate smoke screen aimed at making it look like the effective Windows Server 2016 licensing costs stay the same as Windows Server 2012, when in reality that only holds for a small number of server configurations (servers with up to 8 cores per CPU, which hardly anyone uses anymore, as most new datacentre servers, especially those running some form of hypervisor, typically use 10/12/16-core CPUs these days). See the screenshot below (taken from the Windows Server 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won't.

Windows 2016 Server licensing cost comparison


The difference in cost to customers

Take the following scenario for example..

You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core CPUs (Intel E5-4667 or Intel E7-8860 range) per server. Let's ignore the cost of CALs for the sake of simplicity (you need to buy CALs under the existing 2012 licensing too) and use the Windows Server list price to compare the effect of the new 2016 licensing model on your cluster.

  • List price of Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
  • Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12 as above) = $6,155.00 / 8 = $769.37

The total cost to license 5 nodes in the hypervisor cluster for full VM migration (VMotion / Live migration) across all hosts would be as follows

  • Before (with Windows 2012 licensing) = $6,155.00 x 5 = $30,775.00
  • After (with Windows 2016 licensing) = $769.37 x 16 x 5 = $61,549.60
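The arithmetic behind those two figures, for anyone who wants to plug in their own cluster size and core counts (using the unrounded pack price, so the totals differ fractionally from the rounded figures above):

```python
W2K12_DC_LIST_PRICE = 6155.00          # list price per 2 CPU sockets
PACK_PRICE = W2K12_DC_LIST_PRICE / 8   # 2-core pack at 1/8th the 2-proc price

hosts = 5
cores_per_host = 32                    # 2 sockets x 16 cores

# Old model: one 2-socket Datacenter license per host
before = W2K12_DC_LIST_PRICE * hosts
# New model: one 2-core pack per pair of physical cores, per host
after = PACK_PRICE * (cores_per_host // 2) * hosts

print(before, after, after / before)   # the new model costs exactly double here
```

With 32 cores per host (double the 16-core baseline), the cluster cost doubles, which is exactly the "before vs after" gap shown above.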

Now, obviously, these numbers themselves are not the point (they are just list prices; customers actually pay heavily discounted prices). What is important is the scale of the price increase: the new cost is double the old one, a whopping 100% increase over current Microsoft licensing costs. This is absurd in my view! The most absurd part is that having to license every underlying CPU in every hypervisor host within the cluster (often with a Datacenter license) was already absurd enough under the current model. Even though a VM will only ever run on a single host's CPUs at any given time, Microsoft's strict stance on the immobility of Windows licenses meant that any virtualisation / private cloud customer had to license all the CPUs in the underlying hypervisor cluster to run a single VM; allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was indirectly enforced by Microsoft, despite how absurd that is in this cloud day and age. And now they are effectively taxing you on the core count too? That's not far short of daylight robbery for those Microsoft customers.

FYI – given below is the approximate increase in Windows Server licensing cost for any virtualisation / private cloud customer with more than 8 cores per CPU, in a typical 5-server cluster where VM mobility via VMware vMotion or Hyper-V Live Migration across all hosts is enabled as standard.

  • Dual-CPU server with 10 cores per CPU = 1.25x the current cost (a 25% increase)
  • Dual-CPU server with 12 cores per CPU = 1.5x the current cost (a 50% increase)
  • Dual-CPU server with 14 cores per CPU = 1.75x the current cost (a 75% increase)
  • Dual-CPU server with 18 cores per CPU = 2.25x the current cost (a 125% increase)
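These figures are really just cost multipliers relative to the 16-core baseline covered by the old 2-processor license: a server's new cost scales linearly with its total core count. A short loop reproducing the whole table (illustrative):

```python
BASELINE_CORES = 16   # cores covered by the old 2-processor license price

for cores_per_cpu in (10, 12, 14, 16, 18):
    total_cores = 2 * cores_per_cpu          # dual-CPU server
    multiplier = total_cores / BASELINE_CORES
    increase_pct = (multiplier - 1) * 100
    print(f"2 x {cores_per_cpu} cores: {multiplier:.2f}x the old cost "
          f"(+{increase_pct:.0f}%)")
```

The cluster size drops out of the calculation entirely: since every host is licensed the same way, the percentage increase is identical whether you have 5 hosts or 50.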

And this is based on today's technology. No doubt CPU core counts will keep growing, and with them the price increase will only get more and more ridiculous.

My Take

It is pretty obvious what MS is attempting here. With ever-increasing core counts, 2-CPU server configurations have become (if they weren't already) the norm for most datacentre deployments, and rather than being content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, Microsoft now wants to benefit from every additional core that Moore's law inevitably introduces in newer generations of CPUs. 12-core processors are already the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming the standard (14, 16 and 18 cores per socket are no longer rare with the Intel Xeon E5 & E7 ranges, for example).

I think this is a shocking move from Microsoft, and I cannot see any justifiable reason for it other than pure greed and complete disregard for their customers. As much as I've loved Microsoft Windows as an easy-to-use platform of choice for application servers over the last 15-odd years, I, for one, will now be advising my customers to put strategic plans in place to move away from Windows, as it is going to be price-prohibitive for most, especially if you are going to run an on-premises datacentre with some form of virtualisation (which most do) going forward.

Many customers have already successfully standardised their enterprise datacentres on the much cheaper LAMP stack (Linux platform) as the preferred guest OS for their server & application stacks. Typically, new start-ups (who don't have the burden of legacy Windows apps) or large enterprises (with sufficient manpower with Linux skills) have managed this successfully so far, but I think if this expensive Windows Server licensing stays, lots of other folks who've traditionally been happy and comfortable with their legacy Windows knowledge (and have therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to public cloud). If you retain your workload on-prem, Linux will naturally be the best choice available. For most enterprise customers, continuing to run their private clouds / own datacentres using Windows servers / VMs on high-capacity hypervisor nodes is going to be price-prohibitive.

In my view, most current Microsoft Windows Server customers have remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they've accumulated over the years; any attempt to move away would have been too complex, risky and time-consuming. Now, however, it has come to a point where most customers are having to re-write their app stacks from the ground up anyway, due to the way public cloud systems work, and while they are at it, it makes sense to choose a less expensive OS stack for those apps, saving a bucketload of unnecessary costs in Windows Server licensing. So possibly the time is right to bite the bullet and get on with embracing Linux?

So, my advice for customers is as follows:


  1. Voice your displeasure at this new licensing model: Use all means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more collective noise we all make, the more likely it is to be heard (hopefully) by the powers that be at Microsoft.
  2. Get yourself into a Microsoft ELA for a reasonable length of time, OR add Software Assurance (pronto): If you have an ELA, MS have said they will let you carry on buying per-processor licenses until the end of the ELA term. Essentially, that lets you lock yourself in under the current Server 2012 licensing terms for a reasonable length of time until you figure out what to do. Alternatively, if you have SA, at the end of the SA term MS will let you declare the total number of cores covered under the current per-CPU licensing and will grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to enquire about over-provisioning / over-buying your per-processor licenses along with SA now, for any known future requirements, in order to save costs.


  3. Put in a plan to move your entire workload to public cloud: This is probably the easiest approach but not necessarily the smartest, especially if it's better for you to host your own datacentre given your requirements. Also, even if you plan to move to public cloud, there's no guarantee that any public cloud provider other than Microsoft Azure will remain commercially viable for running Windows workloads, should MS change the SPLA terms for 2016 too.
  4. Put in a plan to move away from Windows to a different, cheaper platform for your workload: This is probably the best and safest approach. Many customers will have evaluated this at some point in the past but shied away from it, as it's a big change and requires people with the right skills. Platforms like Linux have been enterprise-ready for a long time now, and there is a reasonable pool of skills in the market. And if your on-premises environment is standardised on Linux, you can easily port your applications to many public cloud platforms too, which are typically much cheaper than running Windows VMs. You are then also able to deploy true cloud-native applications and benefit from the many open-source tools and technologies that are making a real difference to the efficiency of IT for your business.

This article and the views expressed in it are mine alone.

Comments / Thoughts are welcome


P.S. This kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired and had to be completely scrapped. I hope enough customer pressure will cause Microsoft to back off too.

VMware VSAN 2016 Future Announcements

I've just attended the VMware Online Technology Forum (#VMwareOTF) and thought I'd share a few really interesting announcements I noticed there about the future of VSAN in 2016.

Good news: some really great enterprise-scale features are being added to VSAN, aimed for release in Q1 FY16 (along with the next vSphere upgrade release). The beta is now live (apply at http://vmware.com/go/vsan6beta), but unless you have All-Flash VSAN hardware, you're unlikely to qualify.

Given below are the key highlight features likely to be available in the next release:

  • RAID-5 and RAID-6 over the network – Cool….!!



  • Inline De-duplication / Compression along with Checksum capabilities coming




  • VSAN for Object Storage (read more on Duncan Epping’s page here)



  • VSAN for External Storage – Virtual Disks natively presented on external Storage



Great news. Looks like an already great product is going to get even greater…!!

Slide credits go to VMware & the legendary Duncan Epping (@DuncanYB)…..





FlexPod: The Joint Wonder From NetApp & Cisco (often with VMware vSphere on Top)


While attending NetApp Insight 2015 in Berlin this week, I was reminded of the monumental growth in the number of customers who have been deploying FlexPod as their preferred converged solution platform, which now celebrates its 5th year. So I thought I'd do a very short post to give you my personal take on it and highlight some key materials.

FlexPod has gained a lot of market traction as the converged platform of choice for many customers over the last 4 years. This is due to the solid hardware technologies that underpin the solution (Cisco UCS compute + Cisco Nexus unified networking + the NetApp FAS range of clustered Data ONTAP storage). Customers often deploy FlexPod together with VMware vSphere or MS Hyper-V on top (other hypervisors are also supported), which together provide a complete, ready-to-go-live private and hybrid cloud platform that has been pre-validated to run most, if not all, typical enterprise datacentre workloads. I have been a strong advocate of FlexPod (simply due to its technical superiority as a converged platform) for many of my customers since its inception.

Given below are some of the interesting FlexPod validated designs from Cisco & NetApp for Application performance, Cloud and automation, all in one place.

There are over 100 FlexPod validated designs available in addition to the above, and they can all be found below.

There is a certified, pre-validated, detailed FlexPod design and deployment guide for almost every datacentre workload, and based on my first-hand experience, FlexPod with VMware vSphere has always been a very popular choice amongst customers, as things just work together beautifully. Given the joint vendor support available, sourcing support for all the technologies in the solution from a single point is easy too. I also think customers prefer FlexPod over other similar converged solutions, such as VBLOCK, due to its non-prescriptive nature, whereby you can tailor a FlexPod solution to meet your needs (a FlexPod partner can do this for you), which keeps costs down too.

There are many FlexPod certified partners who can size, design, sell and implement a FlexPod solution for a customer, and my employer Insight is one of them (in fact, we were amongst the first few partners to attain FlexPod partnership in the UK). So if you have any questions about the potential use of a FlexPod system, feel free to get in touch with me directly (contact details in the About Me section of this site) or through the FlexPod section of the Insight Direct UK web site.



VMware VSAN – Why VSAN (for vSphere)?

I don't really use my blog for product marketing or as a portal for adverts for random products; it's purely for me to write about technologies I think are cool and worth looking into. On that note, I've wanted to write a quick post about VMware VSAN ever since the first version was released with vSphere 5.5 a while back, because I was really excited about the technology and what it could become as it went through the typical evolution cycle. But at the same time, I didn't want to come across as if I were aiding the marketing of a brand-new technology that I hadn't seen perform in real life. So I reined myself in a little, and sat back to see how well it performed out in the real world and whether this architecturally sound technology would actually live up to its reputation and potential in the field.

And guess what? It sure has lived up to it, even better than I thought. With the most recent release (VSAN 6.1, which shipped with vSphere 6.0 U1), its enterprise capabilities have grown significantly as well. Features such as stretched VSAN clusters (adios, metro clusters for vSphere), a branch office solution (VSAN ROBO), VSAN replication, SMP-FT support, Windows failover clustering support and Oracle RAC support (more details here) have truly made it an enterprise storage solution for vSphere. With the massive uptake of HCI (hyper-converged infrastructure) solutions, of which VSAN is a key part (think VMware EVO:RAIL), and with over 2,500 customers globally already using it in production as their preferred storage solution for vSphere (including Walmart, Air France, BAE, Adobe and a well-known global social media site), it's about time I started writing something about it, just to give you my perspective!

I will aim to publish a series of articles about VSAN, addressing a number of different aspects of it over the next few weeks, beginning with the obvious question below.


I've been a traditional SAN storage guy, having worked hands-on with key enterprise SAN technologies from NetApp, EMC, HP and others for a long time, in all aspects: presales, design, deployment and ongoing support. They are all very good; I still like (some of) their tech, and they certainly still have a place in the datacentre. But they are a nightmare to size accurately, a nightmare to design and implement, and an even bigger nightmare to support in production, and that's just from a techie's perspective. From a business / commercial perspective, not only are they expensive to buy upfront and maintain, but they typically come with an inevitable vendor lock-in that keeps you on the hook for 2-5 years, during which you have to buy substantially overpriced components for simple capacity upgrades. They are also very expensive to support (support costs are typically 17%-30% of the cost of the SAN), and it gets even more expensive when the originally purchased support period runs out, because the SAN vendor will typically make the support renewal cost more than buying a new SAN, forcing you to buy another. I suppose this is how the storage industry has always managed to pay for its own innovation and survival, but many customers, and even start-up SAN vendors, are waking up to this trick and have started to look at alternative offerings with a different commercial setup.

As an experienced storage guy, I can tell you first-hand that the value of enterprise SAN storage is NOT really in the tin (the disk drives or the blue / orange lights) but in the software that manages that tin. Legacy storage vendors make you pay for that intelligence once, when you buy the SAN with its controllers (the brains) where the software lives, and then again every time you add disk shelves through guaranteed overpriced shelf upgrades (ever heard your salesperson tell you to estimate all your storage needs for the next 5 years and buy it all upfront with your SAN, as it's cheaper that way?). SAN vendors have been able to overcharge for subsequent shelf upgrades because they get the disk drive manufacturers to inject special code (proprietary firmware) onto the disks, without which the SAN will not recognise them, so the customer cannot simply buy a similar disk elsewhere, even if it's the same disk made by the same manufacturer (vendor lock-in). This overpricing is how the SAN vendor gets you to pay for their software intelligence again every time you add capacity. I mean, think about it: you've already paid for the SAN and its software IP when buying it in the first place, so why pay for it again, through over-the-odds pricing, when adding more shelves (which, after all, only contain disk drives with no intelligence) to expand capacity?

To make matters worse, the SAN vendor then comes up with a brand-new version of the SAN a few years later (typically new software that cannot run on your current hardware, or a brand-new hardware platform altogether), and your current SAN software is made end-of-life and drops out of support (even though it still works fine). Now you are stuck in an artificially created scenario (created by the SAN vendor, of course, and forced upon you) where you cannot carry on running your existing version without paying a hefty support renewal fee (often artificially bloated by the vendor to be more expensive than a new hardware SAN), nor can you simply upgrade the software on your current hardware, as the new software is no longer supported on it. And transferring the software license you've already bought over to a new set of hardware (new SAN controllers) is strictly NOT allowed either (a carefully orchestrated and very convenient scenario for the SAN vendor, isn't it?). Enter the phrase "SAN upgrade": a disruptive, laborious and, worst of all, unnecessary expense where you are indirectly forced by the vendor to pay again, on a different set of hardware, for the same software intelligence you've supposedly already paid for. This is a really good business model for the SAN vendor, and there's a whole ecosystem of organisations that benefit massively from this recurring (arguably never-ending) procurement cycle, at the expense of the customer.

I see VMware VSAN as one of the biggest answers to this, for vSphere shared storage use cases. With VSAN, you have the freedom to choose your hardware, including cheaper commodity hardware, so you only pay the true cost of a disk drive based on its capacity, without also paying a surcharge for software intelligence every time you add a disk. VSAN is licensed per CPU socket rather than per capacity unit (MB/GB/TB), so you pay for the software intelligence once, during the initial procurement, irrespective of the actual capacity, and that's it. To scale up (add capacity), you simply buy the disk drives at their true cost and add them to existing nodes. To scale out (add more nodes), you pay for the CPU sockets on the additional node(s). That sounds a whole lot fairer to me than the traditional SAN vendors' model of charging for software upfront and then charging for it again, indirectly, during every capacity and SAN upgrade. Unlike with traditional SAN vendors, every time a new version of the VSAN software comes out, you simply upgrade your ESXi version, which is free of charge (if you have ongoing support), so you never pay for the software intelligence again (even when the ESXi host hardware needs replacing in future, you can reuse the VSAN licensing on the new nodes, which is something traditional SAN vendors don't let you do).

Typically, due to all of these reasons, a legacy hardware SAN costs around $7 – $10 per GB, whereas VSAN tends to be around the $1 – $2 mark, based on the data I've seen.

A simple example of the upfront cost comparison is below. Note that it only shows the difference in upfront cost (CAPEX) and doesn't take into account the ongoing cost differences, which make VSAN even more appealing, for the reasons explained above.
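As a rough illustration of that per-GB gap, here's a sketch using the mid-points of the $/GB ranges quoted above and a hypothetical 50TB usable capacity (these are assumptions for illustration only; real pricing varies widely by vendor and deal):

```python
usable_gb = 50 * 1024                 # hypothetical 50TB usable capacity

legacy_san_per_gb = 8.50              # mid-point of the $7-$10/GB range above
vsan_per_gb = 1.50                    # mid-point of the $1-$2/GB range above

legacy_cost = usable_gb * legacy_san_per_gb
vsan_cost = usable_gb * vsan_per_gb
print(f"Legacy SAN: ${legacy_cost:,.0f}  VSAN: ${vsan_cost:,.0f}  "
      f"({legacy_cost / vsan_cost:.1f}x difference)")
```

Even before the ongoing support and upgrade costs discussed above, the upfront gap at these assumed rates is several hundred thousand dollars at this capacity.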


Enough of the commercial & business justification as to why VSAN is better. Let's look at a few of the technology & operational benefits.

  • It's flexible
    • VSAN, being a software-defined storage solution, gives the customer much-needed flexibility: you are no longer tied to a particular SAN vendor.
    • You no longer have to buy expensive EMC or NetApp disk shelves either, as you can procure commodity hardware and design your DC environment as you see fit
  • It's a technically better storage solution for vSphere
    • Since the VSAN drivers are built into the ESXi kernel itself (the hypervisor), VSAN sits directly in the IO path of VMs, which gives it superior performance with sub-millisecond latency
    • It also integrates tightly with other beloved vSphere features such as vMotion, HA, DRS and Storage vMotion, as well as other VMware Software-Defined Datacenter products such as vRealize Automation and vSphere Replication.
  • Simple and efficient to manage
    • Simple setup (a few clicks) and policy-based management, all defined within the same single pane of glass used for vSphere management
    • No need for expensive storage admins to manage and maintain a complex 3rd party array
    • If you know vSphere, you pretty much know VSAN already
    • No need to manage "LUNs" anymore – if you are a storage admin, you know what a nightmare this is, including the overhead of managing the HW fabric too.
  • Large scale-out capability
    • Supports up to 64 nodes currently (the 64-node limit comes NOT from VSAN but from the underlying vSphere, and will go up with future versions of vSphere)
    • 6,400 VMs / 7M IOPS / 8.8 petabytes
  • High availability
    • Provides 99.999% availability by default
    • No single point of failure due to its distributed architecture
    • Scaling out (adding nodes) or scaling up (adding disks) never requires downtime again.
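As a rough back-of-envelope illustration, the published 64-node cluster maximums imply the following per-node figures (assuming an even spread across nodes, which is my assumption for the arithmetic only):

```python
# Per-node figures implied by the published 64-node VSAN cluster maximums,
# assuming (purely for illustration) an even spread across all nodes.

MAX_NODES = 64
MAX_VMS = 6_400
MAX_IOPS = 7_000_000
MAX_CAPACITY_TB = 8_800  # 8.8 PB, decimal TB

vms_per_node = MAX_VMS // MAX_NODES       # 100 VMs per node
iops_per_node = MAX_IOPS // MAX_NODES     # ~109k IOPS per node
tb_per_node = MAX_CAPACITY_TB / MAX_NODES # 137.5 TB per node

print(vms_per_node, iops_per_node, tb_per_node)
```

In other words, the headline figures work out to roughly 100 VMs and well over 100k IOPS per host, which is generous headroom for most enterprise workloads.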

This list could go on, but before this whole post ends up looking like a product advert on behalf of VMware, I'm going to stop, as I'm sure you get my point here…

VMware VSAN, to me, now looks like a far more attractive proposition for vSphere private cloud solutions than having to buy a 3rd party SAN. Some of the new features coming out in the future (NSX integration…etc.) will no doubt make it an even stronger candidate for most vSphere storage requirements going forward. As a technology it's sound, backed by one of the most innovative companies on the planet, designed from the ground up to work without the overhead of a file system (WAFL people might not like this too much, sorry guys!), and I would keep a keen eye on how VMware VSAN eats into the typical vSphere storage revenue of the legacy hardware SAN vendors over the next few years. Who knows, EMC may well have seen this coming some time ago, which may have contributed towards the decision to merge with Dell too.

If you have a new vSphere storage requirement, my advice would be to strongly consider the use of VSAN as your first choice.

In the next post of this series, I will attempt to explain & summarise the VSAN sizing and design guidelines.



VMworld Europe 2015 – Day 1 & 2 summary

Day 1 of VMworld Europe began with the usual general session in the morning, down at Hall 7.0, continuing the VMworld US theme of "Ready for any". It has become standard for VMware to announce new products (or re-announce products from VMworld US) during this session which, by now, are somewhat public knowledge, and this year was no different. Also of special note was a recorded video message from their new boss, Michael Dell (I'm sure everyone's aware of Dell's acquisition of EMC on Monday), in which he assured that VMware would remain a publicly listed company and is a key part of the Dell-EMC enterprise.

To summarise the key message from the general session, VMware are planning to deliver on 3 main themes:

  • One Cloud – Seamless integration, facilitated by VMware products, between your private cloud / on-premise environment and various public clouds such as AWS, Azure, Google…etc. Things like long-distance vMotion (provided by vSphere 6) and stretched L2 connectivity (provided by NSX) will make this a possibility
  • Any Application – VMware will build out their SDDC product set to support containerisation of traditional (legacy client-server type) apps as well as new Cloud Native Apps going forward. Some work is already underway with the introduction of vSphere Integrated Containers, which I'd encourage you to have a look at, as well as the VMware Photon platform
  • Any Device – Facilitate connectivity to any cloud / any application from any end-user device

Additional announcements included vRealize Automation version 7.0 (currently in BETA, looks totally cool), VMware vCloud NFV platform availability for the telco companies…etc.

Also worth mentioning that 2 large customers, Nova Media and Telefónica, had their CEOs on stage to explain how they managed to gain agility and a market edge through the use of VMware's SDDC technologies such as vSphere, NSX, vRealize Automation…etc., which was really good to see.

There were a few other speakers at the general session, such as Bill Fathers (on cloud services, mainly vCloud Air), whom I'm not going to cover in detail, but suffice it to say that VMware's overall product positioning and corporate message to customers sound very catchy, I think, and are very relevant to what's going on out there too…

During the rest of day 1, I attended a number of breakout sessions. The 1st was the Converged Blueprints session presented by Kal De, VP of VMware R&D. This was based on the new vRealize Automation (version 7.0) and, needless to say, was of total interest to me. So much so that straight after the event I managed to get on the BETA programme for vRA 7.0 (it may be closed to the public by now though). Below are some highlights from the session, FYI

  • An improved, more integrated blueprint canvas where blueprints can be built through a drag-and-drop approach. Makes it a whole lot easier to build blueprints now.
  • Additional NSX integration to provide out-of-the-box workflows…etc.
  • Announcement of converged blueprints, including IaaS, XaaS and Application Services all in one blueprint…. Awesome…!!
  • Various other improvements & features….
  • Some potential (non-committal of course) roadmap information was also shared, such as the potential future ability to provision a single blueprint across multiple platforms and multiple clouds, blueprints to support container-based Cloud Native Apps, and aligning vRA blueprints-as-code with industry standards such as OASIS TOSCA, open-source Heat…etc.

Afterwards, I had some time to spare, so I went to the Solutions Exchange and browsed around as many vendor stands as possible. Most of the key vendors were there with their usual tech, the EMC (or Dell now??) and VCE stands being the loudest (no surprise there then??). However, I want to mention the following 2 new VMware partner start-ups I came across that really caught my attention. They were both new to me and I really liked what each of them had to offer.

  • RuneCast:
    • This is a newly formed Czech start-up, and basically what they do is hoover in all the VMware KB articles, configuration best practices and bug-fix instructions, and assess your vSphere environment components against this information to warn you of configuration drift from the recommended state. Almost like a best-practice analyser…. The best part is the cost, which is fairly cheap at $25 per CPU per month (list price, which usually gets heavily discounted)… A really simple but good idea, made more appealing by the low cost. Check it out…!!
  • Velvica:
    • These guys provide a billing and management platform to cloud service providers (especially small to medium scale CSPs) so they don't have to build such capabilities from the ground up on their own. If you are a CSP, all that is required is a VMware vCloud Director instance; you can simply point the Velvica portal at the vCD to present a self-serviceable public cloud portal to customers. It can also be used internally within an organisation if you have a private cloud. Again, I hadn't come across this before, and I thought their offering helps many small CSPs get to market quicker, while providing a good platform for corporate & enterprise customers to introduce utility computing internally without much initial delay or cost.

During the rest of day 1, I attended a few more breakout sessions, such as the vCenter 6.0 HA deep dive. While this was not as good a session as I had expected, I did learn a few little things, such as the vCenter database NOT being officially supported on SQL AAG (AlwaysOn Availability Groups) prior to vSphere 6 U1, the Platform Services Controller being clusterable without a load balancer (requiring manual failover tasks, of course), as well as a tech preview of the native HA capability coming for vCenter (no need for vCenter Heartbeat or any 3rd party products anymore) that looked pretty cool.

On day 2, there was another general session in the morning where VMware discussed strategy and new announcements on EUC, security & SDN…etc. with various BU leaders on stage. VMware CEO Pat Gelsinger also came on stage to discuss the future direction of the organisation (though I suspect much of this may be influenced by Dell if they remain a part of Dell??).

Following on from the general session on day 2, I attended a breakout session, an NSX micro-segmentation automation deep dive, presented by 2 VMware Professional Services team members from the US. This was really cool, as they showed a live demo of how to create custom vRO workflows to perform NSX operations and how these can be used to automate NSX operations. While they didn't mention this, it should be noted that these workflows can naturally be accessed from vRealize Automation, where routine IT tasks can be made available through a pre-configured service blueprint that users (IT staff themselves) can consume via the vRA self-service portal.

While I had a few other breakout sessions booked for afterwards, unfortunately I was not able to attend these due to a last-minute meeting I had to attend onsite at VMworld with a customer, to discuss a specific requirement they have.

I will be spending the rest of the afternoon looking at more vendor stands at the Solutions Exchange until the official VMware party begins, where I'm planning to catch up with a few clients as well as some good friends…

I will provide an update in tomorrow's summary if I come across any other interesting vendors at the Solutions Exchange.





VMworld Europe 2015 – Partner Day (PEX)

A quick post about VMworld Europe day 1 (the PEX day)….!! I was meaning to get this post out yesterday, but there are too many distractions when you attend VMworld, let me tell ya….! 🙂

I arrived in Barcelona on Sunday and collected my access pass that evening. As such, I arrived at the venue for the Partner day on Monday around 9am, and the venue was already fairly busy with various VMware employees and partners.

As for my schedule for the day, I attended a VSAN deep dive session in the morning, presented by none other than Mr VSAN himself (Simon Todd @ VMware), which was fairly good. To be honest, most of the content was the same as the session he presented a few weeks ago at the VMware SDDC boot camp in London, which I also attended. Some of the interesting points covered include:

  • Oracle RAC / Exchange DAG / SQL AlwaysOn Availability Groups are not supported on VSAN with the latest version (6.1)
  • Always use pass-through rather than RAID 0 on VSAN ready nodes, as this gives full visibility of disk characteristics such as SMART, and removing disks from disk groups causes less downtime with pass-through than with RAID, which makes sense.
  • Pay attention to SAS expander cards and lane allocation if you do custom node builds for VSAN nodes (rather than using pre-configured VSAN ready nodes). For example, a 12Gb SAS expander card can only access 8 PCI lanes, which in an extreme case can be saturated, so it's better to have 2 x SAS expander cards sharing the workload at 8 channels each
  • Keep the SATA-to-SSD ratio small in disk groups where possible, to distribute the workload and benefit from maximum aggregate IOPS performance (from the SSD layer)
  • Stretched VSAN (possible with VSAN 6.1) features and some pre-reqs, such as the less-than-5ms latency requirement over 10/20/40Gbps links between sites, the multicast requirements, and the 500ms latency allowance between the main site and the offsite witness.
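For illustration, the stretched-cluster pre-reqs quoted above could be expressed as a quick sanity check. The thresholds below are the figures as quoted in the session; treat them as assumptions and verify against the current VMware documentation before relying on them.

```python
# Sanity check of the stretched-VSAN pre-requisites as quoted in the
# session (illustrative thresholds; confirm against current VMware docs).

SITE_RTT_MS_MAX = 5       # max round-trip latency between the two data sites
WITNESS_RTT_MS_MAX = 500  # max latency from a data site to the offsite witness
SITE_LINK_GBPS_MIN = 10   # quoted inter-site link speeds were 10/20/40 Gbps

def stretched_vsan_ok(site_rtt_ms, witness_rtt_ms, link_gbps):
    """Return True if the measured latencies and link meet the quoted pre-reqs."""
    return (site_rtt_ms < SITE_RTT_MS_MAX
            and witness_rtt_ms < WITNESS_RTT_MS_MAX
            and link_gbps >= SITE_LINK_GBPS_MIN)

print(stretched_vsan_ok(3, 120, 10))  # meets all pre-reqs
print(stretched_vsan_ok(8, 120, 10))  # inter-site latency too high
```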

Following on from this session, I attended the SDDC Assess, Design & Deploy session presented by Gary Blake (Senior Solutions Architect), which was all about what his team is doing to help standardise the design & deployment process for Software-Defined Data Center components. I found out about something really interesting during this session: VMware Validated Designs (VVD). VVD is something VMware are planning to release which would be kind of similar to a CVD (Cisco Validated Design document, if you are familiar with FlexPod). A VVD will literally provide all the information required for a customer / partner / anyone to design & implement a VMware-validated Software-Defined Data Center using the SDDC product portfolio. This has been long overdue in my view and, as a VMware partner and a long-time customer, I would really welcome it. No full VVDs have been released to the public yet, but you can join the community page to be kept up to date. Refer to the following 3 links

I then attended a separate, offsite roundtable discussion at a nearby hotel with a number of key NSX Business Unit leaders, for an open chat about everything NSX. It was really good, as they shared some key NSX-related information and discussed some interesting points. A few of the key ones are listed below.

  • 700+ production customers on board so far with NSX
  • Some really large customers are running production workloads on NSX (a major sportswear manufacturer runs their entire public-facing web systems on NSX)
  • East-west traffic security requirements are driving lots of NSX sales opportunities, specifically with VDI.
  • Additional, more focused NSX training will soon be available, such as design and deployment, troubleshooting…etc.
  • It was also mentioned that customers can acquire NSX with limited features for a cheaper price (restricted EULA) if they only need reduced capabilities (for example, only edge gateway services). I'm not sure how to order these though, and would suggest speaking to your VMware account manager in the first instance.
  • We also discussed potential new pricing options (nothing set in place yet..!!) to make NSX more affordable for small to medium size customers. Price is a clear issue for many small customers when it comes to NSX, and if they do something to make it more affordable, that would no doubt be really well received. (This was an idea the attendees put forward, and the NSBU was happy to acknowledge it and look into doing something about it)
  • We also discussed some roadmap information, such as the potential evolution of NSX into providing firewall & security features out on public clouds as well as private clouds.

Overall, the NSX roundtable discussions were really positive, and it finally seems like the NSBU is slowly releasing the tight grip it had around the NSX release and is willing to engage more with the channel to help promote the product, rather than working with only a handful of specialist partners. Also, it was really encouraging to hear about the adoption status so far, as I've always been an early advocate of NSX due to the potential I saw during the early releases. So go NSX….!!!

Overall, I thought the PEX day was OK. Nothing to get too excited about in terms of the breakout sessions…etc., with the highlight being the roundtable with the NSBU staff.

Following on from the discussion with the NSBU, I left the venue to go back to the hotel to meet up with a few colleagues of mine, and we then headed off to a nice restaurant on the Barcelona beach front called Shoko (http://shoko.biz/) to get some dinner & plan the rest of the week… This is the 2nd time we've hit this restaurant and I'd highly recommend you go check it out if you are in town.

Unfortunately, I cannot quite recollect much about what happened after that point… 🙂

Post about the official (customer facing) opening day of the VMworld event is to follow….!!



VMworld 2015 Europe – Plans & My Session Schedule


I’ve been fortunate enough to attend the VMworld Europe event for the past 3 years running, and as it turned out, I will be attending this years event too in lovely & lively Barcelona. So I thought I’d do a quick blog post to share my plans for this years VMworld Europe, in case anyone’s interested in knowing or wanting to meet up while I’m there, you know where I am when. Also, I will list my session schedule along with why I thought I should attend each session, just in case anyone’s interested.

As I work for a VMware Platinum partner, I'm planning to attend the Partner Exchange day (PEX) on Monday, which is not open to the general public. I will therefore be travelling on Sunday afternoon from Manchester to Barcelona, with a view to collecting my registration pass on Sunday evening itself (they usually have the registration desk open on Sunday until about 6-7pm, if I remember correctly), which will hopefully save me from having to queue up on Monday morning. I will be staying at hotel Torre Catalunya throughout the week with a few colleagues of mine and some customers who will be joining us there.

  • On Monday (PEX day), I have the following sessions booked to attend.
    • 9am – 10am:  PAR6413 – VMware Virtual SAN Architecture Deep Dive for Partners
      • VSAN has now come of age (it's a version 3 product now, which means it's quite mature) and has turned out to be a really nice, complementary solution to vSphere that works well with most if not all storage use cases. Customer adoption of VSAN for hosting production workloads has been beyond belief. I am fairly conversant with VSAN and its technical and business benefits, as well as its sizing, the architectural side of setup and the implementation details. But by attending this session, I'm aiming to learn a bit more about the all-flash VSAN configuration, the new-generation snapshot capability (available soon, I hear) and the performance enhancements introduced in the most recent 6.1 release.
    • 11am – 12:30pm: Virtual SAN Partner Advisory Roundtable
      • This is an invite-only event for VMware partners to interactively discuss, debate and share experience about what worked well and where partners need more support from VMware to successfully implement VSAN solutions for customers. I'm hoping to meet a number of EMEA and global product management staff responsible for VSAN, as well as key people from the Storage Business Unit within VMware, during the event. I already have a number of questions & requests on behalf of my customers for the VMware VSAN team and am looking forward to attending this event.
    • 12pm – 1pm: PAR6411 PSE: SDDC Assess, Design and Deploy 2.0 – What's New?
      • OK, I know the starting time clashes with the finishing time of the previous session, but I'm planning to bail early from the previous one to attend this one on time. This is a partner-only session with the VMware Professional Services Engineering team to discuss the professional services delivery kit these guys put together, especially in light of vSphere 6 and the other related new versions.
    • 2:30pm – 3:30pm: PAR6090 60 Desktops in 60 Minutes: How to Deploy Horizon View with vGPU for a Quick POC
      • I’m not heavy in to VDI side of things, but I was genuinely interested in the vSphere 6 introduced vGPU feature as I’ve seen the demo’s of this in the last years VMworld and it looked totally awesome for graphics performance for VDI. So, naturally wanted to find out more about the deployment tricks and what it takes to do quick POC as no doubt I’d have to be doing this few times for my customers in the future (demand for VDI finally seems to be increasing.
    • 4pm – 5:30pm: Executive Roundtable with NSBU Executives: Martin Casado and Milin Desai
      • OK, this is personally my most eagerly awaited session of the day. Again, an invite-only partner event to discuss NSX and its roadmap with none other than the man who invented it (Martin Casado), plus Milin Desai from the VMware NSX team. This could be epic…!! (The event is NOT taking place at the VMworld venue but at a separate hotel in Barcelona.) Hopefully I will get more of an understanding of where NSX is heading as a solution, plus some roadmap info, which would be invaluable for my customers.
    • 5:30pm – 7pm: Gym.
      • Yes, it may be VMworld, but keeping your calorie burn / fitness is equally important (says the man hopefully 🙂 )
    • Evening: Meeting with other Insight (my employer) colleagues and customers to plan the rest of the week, and perhaps a few beers & some food at the hotel.
      • Do come and say hello if you are there….. 🙂


  • On Tuesday (1st day of the general event open for all), I’ve got the following sessions planned
    • 10am – 11am: MGT5956 The New vRealize Converged Blueprints: Driving Automation and DevOps
      • The name says it all, right? vRA has been of really keen interest to me, and I'm planning to find out more about the latest version, NSX integration and Application Services integration into a single blueprint, straight from the horse's mouth. If you are into automation and orchestration, this should be a really important session to attend
    • 11:30am – 3pm: Solutions Exchange browsing and talking to as many vendors as possible about their solution offerings.
      • Often, this is something many VMworld attendees don't prepare for, especially 1st-timers, as they inundate their diary with back-to-back breakout sessions (which, after all, will be available as videos / presentation slides post-VMworld). Attending breakout sessions is important, yes, but I'd say it's far more important to look around the Solutions Exchange and see what vendors are there and what they have to offer. In my previous attendances, I've come away with some really unique vendors offering really cool, unique and useful technology to complement VMware tech, which you can position to customers when they come to you with requirements that are not mainstream. Trust me on this one….!!
    • 2:30pm – 3:30pm: INF4945 vCenter Server 6 High Availability
      • While I have a decent understanding of the vCenter / vCSA high availability options available with vSphere 6.0, finding out more can't hurt.
    • 4pm – 5pm: SDDC5440 VMware Validated Designs – An Architecture for the SDDC
      • I work with many SDDC offerings for my customers, and it's always good to get more information from VMware about how they'd recommend you design and deploy their SDDC software together, such as vSphere, NSX, vRA and vROps.
    • 5:30pm – 7pm: Gym.
      • Yes, it may be VMworld, but keeping your calorie burn / fitness is equally important (says the man hopefully 🙂 )
    • 8pm – late: Veeam party / VMware UK&I reception party
      • Unsure which one I'll end up joining, but I'll probably try both. The Veeam party was a knockout last year and defo worth attending, and I say that not because of the free drinks but because of the networking element with like-minded peers. It's awfully useful to meet other like-minded people and talk tech (most of the time)




  • Friday morning: Travel back home.


There you have it. I will aim to be tweeting (@s_chan_ek) and blogging while I'm there too, subject to time constraints…etc., but please do come and say hello if you are interested in meeting up to discuss something, or simply to have a chat (even if it's to tell me my blog is rubbish….:-) )

Enjoy VMworld Europe 2015….!!



IP Expo Europe 2015 – My Take….!!

OK, it's been a while since my last blog post…. I've been inundated with many things…. Anyway, I thought I'd re-awaken the blogging monster with a few words on what I thought of the opening day of the IP EXPO Europe 2015 event, which I attended today (for the first time, I might add).

IP EXPO Europe is a 2-day, free-to-attend IT exhibition event, and the 2015 edition started today at ExCeL, London. According to their website (http://www.ipexpo.co.uk/), they claim it is "Europe's Number ONE Enterprise IT event". I've always wanted to attend this event in previous years but couldn't due to various other commitments. I normally attend key IT events like these throughout the year, including VMworld, NetApp Insight, Cisco Live and, of course, the great Insight Technology Show hosted annually by my employer Insight (the link to the Manchester edition, which is due soon, is here), mainly to keep tabs on new tech from various vendors as well as to network with people in the industry. During my previous attendances I have found some little hidden gems in terms of technologies that I never knew existed, or had heard about but never really had a chance to look into in detail. IP EXPO was missing off my list, until today that is, as I made the effort to go and see what it was about.

IP EXPO Europe this year was held at ExCeL in London (Royal Victoria Dock), and all the information about the event can be found on their website. There were different tracks you could follow (Cloud & Infrastructure, Cyber Security, Datacenter, Data Analytics, DevOps & Unified Communications) and, while there were some heavyweight keynote speakers around presenting sessions (which I didn't attend, by the way), there were also the typical exhibition booths from various vendors covering these tracks. I spent all my time there going to each and every booth to find out about their technology offerings and to see if anything stood out (to me), based on what they could offer or on meeting the complex business requirements of today's world.


Once you walk into the exhibition hall, you are faced with a fair few vendors and their exhibition booths right in front of you (there were also some resellers and other 3rd parties). The full list of exhibitors can be found on their website here. Though some of the big names were there, this being labelled an Enterprise IT event (the number one, at that, supposedly), a number of key enterprise tech vendors that you'd normally expect to see at neutral IT events like these (for example, the Insight Technology Show) were unfortunately absent. I normally focus on server, storage & virtualisation technologies in my day job, and key players in that arena like Cisco, Microsoft, VMware, Oracle, EMC…etc. were not present (at least they didn't have their own stands, which they usually do). If I may say so (and this is entirely my view, btw), most if not all of the vendors present were small to medium size tech vendors (with very few exceptions) rather than the big corporate stalwarts that tend to shape technology trends. To me personally, that was a bit of a disappointment.

My Key Interests

I literally went to every exhibition stand to have a look at their tech offerings, and spoke to most of them to see if there was any new / emerging technology, or a vendor with a technology of real interest / uniqueness, that I should take note of. Usually, when I attend VMware's VMworld or Cisco Live, or even the Insight Technology Show every year, I come away with a number of key, previously unknown (to me) vendors that offer some key technology that can address those weird, uncommon, yet critical business or IT requirements. Unfortunately, there weren't many of them at IP EXPO Europe today (I have to say, I was a little biased towards server, storage and virtualisation technologies rather than security or unified comms in my assessment).

The few interesting ones that were new to me, and therefore worth an individual mention, are listed below FYI.

  • Kroll Ontrack
    • URL: http://www.krollontrack.co.uk/
    • What they do: Kroll is a market leader in data recovery and seems to have some really cool capability when it comes to recovering data, be that typical consumer data from a failed or broken hard disk, or enterprise data recovery from enterprise SAN storage like NetApp or even VMware VSAN. I was really keen on their experience of being able to recover data from a failed VMware VSAN cluster, which apparently was something even the VMware VSAN support team were not able to do. You can read all about it here. So much so that Kroll Ontrack is now a VMware partner that VMware support backs extreme data recovery work off to (this was something I was led to believe by their systems engineer; I'm unsure how true it is). They also claim to be able to do metadata repairs on failed NetApp disks, which you can read about here, and which I thought was really cool. It goes without saying that you need to architect your enterprise storage systems properly, such that in the case of a planned or unplanned failure you have the ability to restore data from backup copies, but in the unlikely event of needing to recover data from source, Kroll Ontrack seems to come in really handy.
  • Delphix
    • URL: http://www.delphix.com/
    • What they do: Provide data as a service by using software to create virtual, zero-space copies of your databases, applications and file systems. This is really cool, as they take a copy of, for example, your database (most major database types are supported, such as Microsoft SQL Server, Oracle, MySQL…etc.) onto the Delphix engine and then present virtual copies of those database files to multiple clients (think test & dev copies of the same data) without consuming additional storage underneath (if you are familiar with NetApp FlexClone technology, this is kind of the same). Not only that, but such copy provisioning can be orchestrated through a self-service portal, which could be a great use case for test & development environments (I know a number of customers off the top of my head that would benefit from this straight away). My first question to their systems engineer (yeah, I don't waste time talking to sales people at these events, only to technical people who say it as it is) was: can you not use this, for example, to create VDI clones using a gold image (which would work better than linked clones)? Apparently there are talks underway between the 2 companies already. Again, really interesting technology that I'd keep an eye on, which can help solve many corporate IT challenges. More details on their website.

Other than these 2, a few others caught my eye too, such as Druva (converged data protection for mobile and endpoint devices), A10 Networks (server load balancing, global server load balancing, application delivery controllers, DDoS mitigation and network management solutions) and Bomgar (remote support software and privileged access management solutions).

Summary – My thoughts….!

IP EXPO Europe is not of the same standard as VMworld or Cisco Live or, dare I say, even the Insight Technology Show (I might be biased on the last one), as the lack of major vendors and their stands was a little disappointing. However, attending the event could still be of value to the average IT consumer who wants to get a feel for some new and some medium-to-large-scale technology offerings, especially if you are not a seasoned IT professional who tends to stay on top of most things relating to IT. The DevOps track in particular was pretty good, I thought, as they had most of the key DevOps vendors out, such as Puppet, Chef…etc., and DevOps tech is very relevant to the current IT industry, where automation and orchestration are in high demand right now, so you'd benefit from visiting those stands and talking to the SEs available there. It is also a free-to-attend event (similar to the Insight Technology Show), which must be mentioned here, as not many folks can afford to send their IT staff to expensive paid-for events such as VMworld / Microsoft TechEd / Cisco Live… It would also be good if you are focused on IT security, as the event had far more security vendors (or vendors with a technology relating to IT security in some shape or form) than any other vendor type, including some of the big players such as F5, Palo Alto & BAE. (I guess the Edward Snowden factor is still playing out in full swing in people's minds….!)

Would I come back to attend next year? Well, I'm not sure… I personally didn't think I learned or achieved anywhere near what I expected (based on what I'm used to getting, having attended other similar events). I guess I'll have to see which vendors make up the exhibitor list next year and then decide. Based on today's attendance, I definitely do not agree with the self-proclaimed status of this being "Europe's number 1 IT event".

If you do attend this year's IP EXPO Europe and find anything interesting, please feel free to post a comment…



VMware VSAN Assessment Tool – VMware Infrastructure Planner (VIP)

VMware has released an assessment tool called VIP – VMware Infrastructure Planner – an appliance that a valid VMware partner can download and deploy in a customer environment in order to assess the suitability of VSAN based on actual data collected from the infrastructure. This post primarily looks at using the VIP appliance for a VSAN assessment. The assessment is the precursor to VSAN sizing: the sizing data are automatically collected and analysed by VMware, and a final recommendation is made on the suitability of VSAN, along with the recommended hardware configuration details to use for building the VSAN. Note that the same appliance can be used to assess the suitability of the vCloud Suite components in that environment; I will publish a separate post on how to do that at a later date. The process of using the appliance for a VSAN assessment involves the following high-level steps.

  1. A VMware employee or a valid channel partner will have access to the VIP portal (https://vip.vmware.com) – note that the partner needs to sign up for an account, free of charge.
  2. Once logged in, the partner can create an assessment for a specific customer by providing some basic details (similar to the VMware Capacity Planner that was heavily used by VMware partners in the early virtualisation days to assess virtualisation and consolidation use cases).
  3. Once the assessment is created, a unique ID for the assessment is generated on the portal.
  4. The VMware partner then adds the customer details, and the customer gets an email with a link to log in to the portal and download an .ova appliance (the partner can also download it).
  5. The customer or the partner then deploys the appliance in the customer's vSphere cluster (note that the appliance can be deployed on any vCenter Server / cluster, not necessarily the one being monitored, as long as the appliance has network access to the cluster being monitored, including the ESXi servers).
  6. Once the appliance is deployed, you can access it at https://<IP of the appliance> and do a simple configuration.
    1. Enter the unique assessment key generated above. This ties the deployed appliance to the assessment ID online, so that the monitoring and analysis data will be forwarded to the online portal under that assessment ID. You also get to choose how long the data collection should run for.
    2. It then prompts you to select either a VM migration to vCenter assessment or a full cluster migration to vCenter assessment (I've used the cluster migration for the below).
    3. Provide the address (FQDN) of the vCenter instance that the collector needs to be registered against to perform the assessment of the VMs. This could be the same vCenter that manages the cluster where the appliance is deployed, or an external vCenter instance. A valid account needs to be provided to access the vCenter instance.
    4. During the vCenter registration process, a VIB file is deployed to all attached ESXi hosts to enable the monitoring capability (no downtime required) – note the following:
      1. HTTP/S client ports (80, 443) need to be open on the ESXi servers to be able to download the VIB.
      2. According to the deployment notes, "Histogram analysis and possibly tracefile analysis will be run on these VMs, which will degrade performance by about 5 to 10%, and the hosts will become momentarily unreachable, so be sure not to select VMs that are running very performance sensitive or real-time tasks."
    5. Once complete, you'll be presented with a confirmation window similar to the below, which lists all the VMs in the cluster.
  7. Data collection from the VMs in the cluster, and forwarding to the online portal, will now begin. Once the data collection is complete, an email notification is sent. Note that all automated email notifications throughout the process are sent to both the customer's named contact and the VMware partner contact who set up the assessment in the portal. Given below is a screenshot of the portal once the data collection has completed.
    1. As you can see, it has automatically analysed the data and recommended a Hybrid VSAN with a 400MB SSD cache size. (This is based on my lab, so the cache size is much smaller than what would be recommended in a production environment.)
  8. Once the data collection is complete, the data can be fed directly to the VSAN sizer (https://vsantco.vmware.com/vsan/SI/SIEV) to size up a potential VSAN solution, which is handy. All you need to do is click the button at the bottom that says "Go to VSAN TCO and Sizing Calculator", which takes you to the sizing portal with the data automatically prefilled for the sizer.
  9. If you then want to do a TCO comparison of VSAN vs. a traditional hardware-based SAN, you can do so by clicking the TCO inputs button and providing financial information.
  10. The sizing calculator then produces a simple TCO report outlining the cost of VSAN vs. a traditional (hardware-based) SAN.
  11. I should mention that the above screenshots were based on the default TCO assumptions, which include default indicative pricing for various hardware SANs. I'd encourage you to talk to your reseller / storage vendor to have an independent assessment done using their tools, and then use the cost they provide for their SAN solution to update the VSAN OPEX assumptions (as shown below) to get an accurate comparison in these graphs.
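
On the port prerequisite mentioned in the steps above (HTTP/S 80/443 open from the collector to each ESXi host for the VIB push), it's worth sanity-checking reachability before kicking off the assessment. Here's a small Python sketch I'd use for that; the ESXi host names are placeholders for your own:

```python
# Quick TCP reachability check from the collector's network to the ESXi
# hosts, for the ports the VIB download needs. Host names are placeholders.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

def check_hosts(hosts, ports=(80, 443)):
    """Map each host to the list of ports that answered."""
    return {h: [p for p in ports if port_open(h, p)] for h in hosts}

if __name__ == "__main__":
    esxi_hosts = ["esxi01.lab.local", "esxi02.lab.local"]  # placeholders
    for host, open_ports in check_hosts(esxi_hosts).items():
        print(f"{host}: open -> {open_ports or 'none reachable'}")
```

If any host comes back with nothing reachable, fix the firewall path first, or the VIB deployment in step 6 will fail.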

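For a rough feel for the arithmetic behind the sizer and the TCO report, here's a hedged sketch. The 10%-of-consumed-capacity cache figure is VMware's published rule of thumb for hybrid VSAN; every price in it is an invented placeholder of mine, not the portal's default assumptions:

```python
# Back-of-envelope VSAN sizing/TCO arithmetic. The 10% cache ratio is
# VMware's hybrid-VSAN rule of thumb; all prices are invented placeholders.

def cache_tier_gb(consumed_gb: float, ratio: float = 0.10) -> float:
    """Recommended SSD cache tier size for a hybrid VSAN."""
    return consumed_gb * ratio

def raw_capacity_gb(consumed_gb: float, ftt: int = 1) -> float:
    """Raw disk needed once Failures To Tolerate (FTT) mirror copies are added."""
    return consumed_gb * (ftt + 1)

def three_year_cost(capex: float, annual_opex: float) -> float:
    """Simple 3-year total: upfront cost plus three years of running cost."""
    return capex + 3 * annual_opex

if __name__ == "__main__":
    consumed = 4000  # GB of VM data expected on the datastore (placeholder)
    print(f"SSD cache tier : {cache_tier_gb(consumed):.0f} GB")
    print(f"Raw capacity   : {raw_capacity_gb(consumed):.0f} GB (FTT=1)")

    # Invented figures purely to show the shape of the comparison
    vsan = three_year_cost(capex=60_000, annual_opex=8_000)
    san  = three_year_cost(capex=90_000, annual_opex=15_000)
    print(f"3-yr cost: VSAN ${vsan:,.0f} vs traditional SAN ${san:,.0f}")
```

The real calculator obviously factors in far more (licensing tiers, dedupe, growth), which is why I'd still update its OPEX assumptions with your vendor's actual quote as suggested above.
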
Pretty cool ain’t it?