Time of the Hybrid Cloud?

Hybrid Cloud

A short post today on something slightly less technical but equally important. This isn't a marketing piece, just my thoughts on something I came across that seemed worth writing about.

Background

I came across an interesting article this morning based on Gartner research into last year's global IT spend, which revealed that global IT spend was down by about $216 billion during 2015. However, during the same year, data center IT spend was up by 1.8% and is forecast to grow by 3% in 2016. Everyone from IT vendors to resellers to every IT sales person you come across these days, on Internet blogs / news / LinkedIn or out in the field, seems to believe (and wants to make you believe) that the customer-owned data center is dead for good and that everything is, or should be, moving to the cloud (Public cloud, that is). If all that were true, it made me wonder how data center spend went up when it should have gone down. One might think this data center spend was fuelled by the expansion of public cloud infrastructure, driven by increased demand on Public cloud platforms like Microsoft Azure and Amazon AWS. Makes total sense, right? Perhaps at the outset. But upon closer inspection, it's a slightly more complicated story, the way I see it.

 

Part 1 – Contribution from the Public cloud

Public cloud platforms like AWS are growing fast and aggressively, and there's no denying that. They address a need in the industry for a global, shared platform that can scale on demand, and thanks to the sheer economies of scale these shared platform providers have, customers benefit from cheaper IT costs. Having to spec up a data center for your occasional peak requirements (which may only be hit once a month) and pay for it all upfront, regardless of actual utilisation, can be an expensive exercise for many. With a Public cloud platform, the up-front cost is lower and you pay per usage, which makes it an attractive proposition. Sure, there are more benefits to a public cloud platform than just the cost factor, but essentially "the cost" has always been the key underpinning driver for enterprises to adopt public cloud since its inception. Most new start-ups (the Netflixes of the world), and even some established enterprise customers who don't have the baggage of legacy apps (by legacy apps, I'm referring to client-server type applications typically run on the Microsoft Windows platform), are by default electing to host their business application stack predominantly on a cheaper Public cloud platform like AWS rather than owning their own data center kit. This will continue to be the case for those customers and will therefore continue to drive the expansion of Public cloud platforms like AWS. I'm sure a significant portion of the growth in data center spend in 2015 came from this pure Public cloud usage, with the cloud providers buying yet more data center hardware to keep up.

 

Part 2 – Contribution from the “Other” cloud

The point is, however, that not all of the data center spend increase in 2015 would have come from Public cloud platforms like AWS or Azure buying extra kit for their data centers. When you look at the numbers from traditional hardware vendors, HP's appear to be up by around 25% for the year, and others such as Dell, Cisco and EMC also appear to have grown their sales in 2015, which would have contributed towards this increased data center spend. It is no secret that none of these public cloud platforms use traditional data center hardware vendors' kit in their Public cloud data centers. They often use commodity hardware or even build servers & networking equipment themselves (a lot cheaper). So where would the increased sales for these vendors have come from? My guess is that they largely came from enterprise customers deploying Hybrid Cloud solutions: customer-owned hardware deployed in their own / co-location / off-prem / hosted data centers (the customer still owns the kit), combined with an enterprise-friendly Public cloud platform (mostly Microsoft Azure or VMware vCloud Air) acting as just another segment of their overall data center strategy.

If you consider most established enterprise customers, the chances are they have lots of legacy applications that are not cloud friendly. By legacy applications, I mean typical WINTEL applications that conform to the client-server architecture. These apps would have started life in the enterprise back in the Windows NT / 2000 days and have grown with the business over time. They are typically not cloud friendly (to use the industry buzzword, not "Cloud Native"), and moving them as-is onto a Public cloud platform like AWS or Azure is often commercially or technically not feasible. (I've been working in the industry since the Windows 2000 days and I can assure you that these types of apps still make up a significant number out there.) This "baggage" often prevents many enterprises from using Public cloud exclusively (sure, there are other things like compliance that get in the way of Public cloud too, but over time Public cloud platforms will naturally begin to cater properly for compliance requirements, so those obstacles should be short lived). While a small number of those enterprises will have the engineering budget and resources necessary to re-design and re-develop these legacy app stacks into more modern, cloud native stacks, most will not have that luxury. Such redevelopment work is often expensive and, most importantly, time consuming and disruptive.

So, for most of these customers, the immediate tactical solution is a Hybrid cloud: the legacy "baggage" app stack lives on in a legacy data center, while newly developed apps are built as cloud native (designed and developed from the ground up) on an enterprise-friendly Public cloud platform such as Microsoft Azure or VMware vCloud Air. An overarching IT operations management platform (industry buzzword: "Cloud Management Platform") then manages both the customer-owned (private) portion and the Public portion of the Hybrid cloud solution seamlessly (with caveats, of course). I think this is what was happening in 2015, and it may also explain the growth of traditional hardware vendor sales at the same time. Since I work for a fairly large global reseller, I've witnessed this increased hardware sales first hand from the traditional data center hardware vendor partners (HP, Cisco, etc.) through our business too, which adds up. I believe this adoption of Hybrid cloud solutions will continue throughout 2016 and possibly for a good while beyond, at least until all legacy apps are eventually phased out, and that could be a long way off.

 

Summary

So there you have it. In my view, Public cloud will continue to grow, but if you think it will replace customer-owned data center kit anytime soon, that's probably unlikely. 2015, at least, has shown that Public cloud and Private cloud platforms (in the guise of Hybrid cloud) have grown together, and my view is that this will continue to be the case for a good while. Who knows, I may well be proven wrong and within 6 months AWS, Azure & Google Public clouds will devour all private cloud platforms and everybody will be happy on just Public cloud :-). But common sense suggests otherwise. I can see a lot more Hybrid cloud deployments in the immediate future (at least for a few years), using mainly the Microsoft Azure and VMware vCloud Air platforms. Based on the technologies available today, these two stand out in my view as the Public cloud platforms best suited for Hybrid cloud, given their already popular presence in the enterprise data center (for hosting legacy apps efficiently) as well as each having a good overarching cloud management platform that customers can use to manage their Hybrid Cloud environments.

 

Thoughts and comments are welcome….!!

 

VMware vRealize Automation Part 8 – Adding a VMware vCloud Air Endpoint & Publishing a Cloud VM Blueprint

 

So, we now have a fully functioning vRA 6.2.1 deployment, fully integrated with the on-premise vCenter instance, the vRO appliance for workflow orchestration and NSX for network orchestration (via vRO). Now let's look at how to set up a cloud endpoint so that you (or your users) can request VMs to be provisioned on the cloud rather than on the local vSphere cluster. We are looking at adding VMware's own vCloud Air platform in this article (if I manage to gain access to an Amazon AWS instance, I'll publish a future post on that too, as each cloud platform integration is different).

VMware vCloud Air (formerly known as vCHS) is VMware's own managed and operated cloud platform that runs on the same vSphere technology as your on-premise environment. It has a vCloud Director instance in front, which manages the multi-tenancy aspect of a collection of vSphere clusters, and you can buy a subscription either on an on-demand basis (similar to AWS) or on a monthly / annual basis (with no usage charges, which is really handy). vCloud Air has been around a while now and is quite popular, given that you don't have to change the architecture of your on-premise applications, or the servers (VMs) they are installed on, to move them to the cloud (which is the case with both Amazon and Azure and can be painful and expensive). With vCloud Air, you just move the whole VM as-is, with the application already deployed on it, and it will work on the vCloud Air platform just like it did on your own vSphere cluster. (You also have the option of a "Stretched deployment", which moves the VM to the cloud while establishing a Layer 2 network between your vSphere cluster and the vCloud platform over a VPN, so no IPs need changing, which is awesome.)

Just like AWS, vCloud Air (as well as any other 3rd party cloud provider who runs their cloud platform behind vCloud Director) can be integrated with your on-premise vRA instance as an endpoint. Imagine you have a number of developers who, as part of an application development cycle, require multiple copies of your production environment (System Integration Testing, User Acceptance Testing, etc.). That work can easily be offloaded onto a vCloud Air platform without having to buy expensive kit locally to host multiple copies of your prod environment (we are talking additional SAN, compute, hypervisor & networking costs here). Let's also imagine they want to use vRA to self-provision clones / copies of the production environment using pre-defined blueprints defined & published on the vRA IaaS catalog portal. You can quite easily make this happen: attach a vCloud Air endpoint, create a resource reservation on that endpoint, associate it with the business group the developers belong to, and create vCloud (vApp) type blueprints in vRA. Then, every time a developer wants to create a copy of that SQL server with 2 x App and 2 x Web servers to test a new application, they go to the vRA catalog, request those be provisioned, and the servers will automatically be created on the mapped vCloud Air platform. (You can also create a single Multi-Machine blueprint to group all of those individual server blueprints, which we'll cover later.)
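
To make that self-service idea a little more concrete, here's a minimal sketch (Python, using the requests library) of how a developer or a script could query the vRA consumer catalog for the blueprints they're entitled to request, once the endpoint, reservation and blueprints described below are in place. The endpoint paths come from the vRA catalog-service REST API reference and may vary between vRA builds, and the appliance FQDN, tenant and credentials are placeholders from my lab, so treat this as an illustration rather than a copy-paste recipe.

```python
import requests

# Placeholders - replace with your own vRA appliance FQDN, tenant and user details
VRA = "https://vra.mylab.local"      # hypothetical vRA appliance FQDN
TENANT = "Tenant1"                   # the tenant used in this series
USER = "dev1@mylab.local"            # a business group user entitled to the blueprint
PASSWORD = "********"

# 1. Request a bearer token from the vRA identity service (assumed endpoint path)
token_resp = requests.post(
    f"{VRA}/identity/api/tokens",
    json={"username": USER, "password": PASSWORD, "tenant": TENANT},
    headers={"Accept": "application/json"},
    verify=False,                    # lab only - self-signed certs
)
token_resp.raise_for_status()
token = token_resp.json()["id"]

# 2. List the catalog items this user is entitled to request
items_resp = requests.get(
    f"{VRA}/catalog-service/api/consumer/entitledCatalogItems",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    verify=False,
)
items_resp.raise_for_status()

for entry in items_resp.json().get("content", []):
    item = entry.get("catalogItem", {})
    print(item.get("name"), "-", item.get("id"))
```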

Ok, enough of what we can do with vRA and vCloud Air and how cool that is… Let's look at what it takes to integrate your vCloud Air subscription with vRA, then create and publish a vCloud blueprint and provision a VM in the cloud that way.

Given below are the steps involved

  1. Create a vCloud Air (vCloud Director) endpoint
    1. Note: If you remember what we covered in a previous post here, Infrastructure Admins usually create the endpoints within vRA. So log in to the vRA portal as the infrastructure admin (if you are using the default tenant, the URL is "https://<FQDN of the vRA Appliance>/shell-ui-app". If you have a tenant specified, it'll be "https://<FQDN of the vRA Appliance>/shell-ui-app/org/<TenantName>". I'm using a tenant called Tenant1 in my example within vRA)
    2. Go to Infrastructure->Endpoints->Credentials and set up credentials to access the vCloud Air endpoint – this is the same username & password you use to log in to the vCloud Air online portal, which you should have been given / created during the vCloud Air onboarding process (the first thing that happens once you've signed up). A quick scripted way to sanity-check these credentials is sketched just after this list.
    3. Go to Infrastructure->Endpoints and create a new vApp (vCloud) type endpoint (this is the same as if you were creating an endpoint to a local vCloud Director instance)
    4. Once the endpoint is created, hover the mouse over the endpoint name, select data collection and start the collection. You need to wait for this to complete before moving on.
  2. Create a new Fabric group (Infrastructure Admin)
    1. Go to Infrastructure->Groups->Fabric groups and create a new Fabric Group (or use an existing fabric group and map the vCloud Air endpoint to it).
  3. Create a reservation for the vCloud Air endpoint (Fabric Admin)
    1. Note: Creating a reservation maps a logical portion of the vCloud Air endpoint to one of your business groups. I'm using an existing business group, but if you need to create a new one, do that first and select it during the reservation creation here.
    2. Go to Infrastructure->Reservations and, as the Fabric Admin user, create a new cloud reservation of type vApp (vCloud), selecting the mapped endpoint and the business group.
    3. Go to the Resources tab and select the memory and storage portions to be used for this reservation.
    4. Go to the Network tab and select the network you want to map to the reservation. The networks available here depend on the networks you've created within your vCloud Air portal. By default, you'll have 2 networks: the default-isolated (private network) and the default-routed (network with external connectivity). Note that at some point in the future VMware will roll out NSX on the vCloud Air platform, and once that's complete you'd also be able to create the logical networking via the same vRA / vRO blueprint too. This is going to be really cool, and I don't think any other public cloud vendor will have this capability for a while. If you have a network profile with static IPs configured, select that network profile here, which will allocate an IP to the VM from the network profile (which we covered in a previous post of the series). I'm not using one here.
  4. Create & Publish vApp Component Blueprint (Tenant Admin)
    1. Note: Creating vCloud Air blueprints is a 2-step process whereby you create a vApp Component blueprint for each VM first, and then create a higher-level master (group) vApp blueprint which contains one or more of the lower-level vApp Component blueprints. This is because in vCD (vCloud Director) every VM is placed inside a vApp, so you need to create both through vRA. But when you ultimately create the service & publish it with entitlements to the users, you only need to publish the master vApp blueprint.
    2. Log in as the tenant admin, go to Infrastructure->Blueprints and create a new cloud blueprint of type vApp Component (vCloud). Provide a name and select the machine prefix.
    3. Go to the Build Information tab, select the cloning action and select the template. You can choose from the list of VM templates available within vCloud Air here, provided the data collection from the endpoint has been successful. There is a default set of global templates VMware provides (including CentOS, Ubuntu and major Windows flavours with SQL), and if you've migrated any of your own local templates that are specific to your environment (e.g. a standard server build template from your local vSphere cluster, which you can copy to the vCloud Air portal using vCloud Connector), they too will appear here. Then select the appropriate machine resources.
    4. Add any custom properties in the next tab and click OK.
    5. Once the vApp Component blueprint is created, don't forget to publish it (hover the mouse over the blueprint and click Publish).
  5. Create & Publish a vApp Blueprint (Tenant Admin)
    1. Note: Now it's time to create the master vApp blueprint (which, as explained above, will include the component blueprint and is the one that gets published to users).
    2. Create a new cloud blueprint of type vApp (vCloud) and provide the information. Select the same reservation as used for the vApp component blueprint.
    3. Go to the Build Information tab and select the clone action; the "clone from" template should be the same as what you chose for the component blueprint. Then, under Components, select the previously created component blueprint to link the child to the parent.
    4. Once completed, don't forget to publish this one too.
  6. Create a Service to list the blueprint within the catalog (Tenant Admin)
    1. Go to Administration->Catalog Management->Services, add a service and provide all the information required, including an icon, owner & support group details.
    2. Select the service you created, click Manage Catalog Items and add the vApp blueprint. Make sure you don't add the vApp component blueprint here.
  7. Create Entitlements (Tenant Admin)
    1. Go to Entitlements, add a new entitlement and set the status to Active. Also select the users / groups (from the business group) that this blueprint is entitled to.
    2. Go to the Items & Approvals tab and select the created service under entitled services, the same vApp blueprint under catalog items, and all relevant user actions.
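
As mentioned in step 1.2, before (or after) wiring the vCloud Air credentials into vRA, you can sanity-check them with a quick script that simply opens a vCloud API session using the same username & password. This is only a rough sketch: the vCD API URL and org identifier below are placeholders (your actual values are shown in the vCloud Air portal), and the API version in the Accept header is an assumption based on the vCloud API versions current at the time, so adjust to suit.

```python
import requests

# Placeholders - the vCD API URL for your vCloud Air VDC is shown in the vCloud Air portal
VCD_API = "https://p1v2-vcd.vchs.vmware.com/api"   # hypothetical vCloud Air vCD endpoint
ORG = "M123456789-1234"                            # hypothetical org / VDC identifier
USER = "chan@mylab.local"                          # placeholder vCloud Air user
PASSWORD = "********"

# The vCloud API authenticates with HTTP Basic auth as user@org and returns a
# session token in the x-vcloud-authorization response header.
resp = requests.post(
    f"{VCD_API}/sessions",
    auth=(f"{USER}@{ORG}", PASSWORD),
    headers={"Accept": "application/*+xml;version=5.6"},  # assumed API version
    verify=False,                                          # lab only
)
resp.raise_for_status()
print("Session OK, token:", resp.headers.get("x-vcloud-authorization", "")[:12], "...")
```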

 

That's it. You've now successfully created a public cloud endpoint within your on-premise vRA, and created and published a VM blueprint that your users can use to deploy VMs on the cloud automatically.

If you now log in to the same vRA URL as a valid user who was given the appropriate entitlements above, you'll see the new blueprint item available in the catalog.


If you go ahead and request a VM using this cloud blueprint, the request status will be shown under the Requests tab.
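
If you'd rather watch the request from a script than from the Requests tab, something along these lines should do it against the same consumer API used in the earlier sketch. Again, the endpoint path and the response fields are assumptions based on the vRA catalog-service API reference and may vary by build, and the FQDN, tenant and credentials are placeholders.

```python
import requests

VRA = "https://vra.mylab.local"   # hypothetical vRA appliance FQDN
TENANT, USER, PASSWORD = "Tenant1", "dev1@mylab.local", "********"  # placeholders

# Authenticate (same identity-service call as in the earlier sketch)
token = requests.post(
    f"{VRA}/identity/api/tokens",
    json={"username": USER, "password": PASSWORD, "tenant": TENANT},
    headers={"Accept": "application/json"}, verify=False,
).json()["id"]

# List the caller's recent catalog requests and print their states
resp = requests.get(
    f"{VRA}/catalog-service/api/consumer/requests",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    verify=False,
)
resp.raise_for_status()

for req in resp.json().get("content", []):
    # Typical states include SUBMITTED, IN_PROGRESS and SUCCESSFUL
    print(req.get("requestNumber"), req.get("state"), req.get("requestedItemName"))
```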

If you now look directly at the vCloud Air online management portal, you'll see the VM being provisioned automatically. Once it's complete, you'll notice the owner's name changes.
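
If you prefer an API over the portal for this check too, you can confirm the vApp has appeared using the vCloud API's query service, reusing the same session call as in the earlier credential-check sketch. As before, the vCD API URL, org identifier and API version header are placeholder assumptions.

```python
import requests
import xml.etree.ElementTree as ET

VCD_API = "https://p1v2-vcd.vchs.vmware.com/api"   # hypothetical vCloud Air vCD endpoint
ORG, USER, PASSWORD = "M123456789-1234", "chan@mylab.local", "********"  # placeholders
HEADERS = {"Accept": "application/*+xml;version=5.6"}  # assumed API version

# Open a vCloud API session (Basic auth as user@org) and capture the session token
session = requests.post(f"{VCD_API}/sessions", auth=(f"{USER}@{ORG}", PASSWORD),
                        headers=HEADERS, verify=False)
session.raise_for_status()
HEADERS["x-vcloud-authorization"] = session.headers["x-vcloud-authorization"]

# Query the vApps visible to this user and print their names, statuses and owners
result = requests.get(f"{VCD_API}/query?type=vApp", headers=HEADERS, verify=False)
result.raise_for_status()

for record in ET.fromstring(result.content):
    if record.tag.endswith("VAppRecord"):
        print(record.get("name"), record.get("status"), record.get("ownerName"))
```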

Once the VM is successfully provisioned in the cloud, the user will also see its status within the on-premise vRA portal, and they can access the VM either through vRA (console access) or through the vCloud Air online management portal directly (provided they have a valid user account to log in with – note that this is a separate account from their vRA login).

There you have it. VMware vRA can act as a single automation and orchestration engine for various tasks: machine / VM provisioning on-premise as well as VM provisioning in the cloud. This shows how vRA can be a key part of what I believe to be a true hybrid cloud infrastructure, where you can place workloads on-premise or off-premise based on your needs.

If your on-premise vRO is also integrated with vCloud Air, you can create further customisation workflows within vRO and publish them in vRA as advanced service blueprints too (I will cover that in a future post).

Cheers

Chan

Next: (Optional) – vRA Part 9 – Extensibility – Custom Properties & Build Profiles & Property Dictionary –>