VMware vSphere 6.5 Announced – What’s New?


VMware has officially announced the newest version of vSphere, 6.5, today at VMworld 2016 in Barcelona. I’ve been beta testing it for a while and, with the release of vSphere 6.5, there are a number of new enhancements and features that customers will benefit from. I’ve attempted to summarize the key ones below. However, note that there may also be many more little tweaks and enhancements that haven’t necessarily been made public by VMware as of yet, which we will all only come to know once it’s in production use.

Out of these, I’ve listed what I think would be the most important ones in blue, below.

vSphere Lifecycle Management

  • Enhanced vCenter Install, Upgrade, Patch: 
    • Streamlined user experience when deploying, upgrading and patching vCenter Server.
      • Reduced clicks.
      • Client integration plugin NOT required!
      • No browser dependency
      • vCSA ovf deploy target can be ESXi or another vCenter
    • Upgrade option available from Windows version 5.5 & 6.0 to vCSA 6.5
      • vCenter 5.5 and above
      • Deployment type and config preserved
      • Embedded and external SQL Server and Oracle databases are moved to the embedded PostgreSQL database within the vCSA appliance
      • Built-in extensions migrated automatically.
      • A Migration Assistant (a Windows console application, VMware-Migration-Assistant.exe) guides the user through the migration process
    • Support for CLI template-based vCenter Server lifecycle management.
      • vCSA install via CLI supports install, upgrade from 5.5 or 6.0 to 6.5, and migration from Windows vCenter to the vCSA
        • A number of .json templates are provided – simply edit the templates (a rough example of this CLI flow is sketched at the end of this section)
          • ./vcsa-deploy install <template.json>
  • vSphere Update Manager for vCenter Server Appliance:
    • A fully embedded and integrated vSphere Update Manager experience for the vCenter Server Appliance – with no Windows dependencies! (finally)
      • Migrating from Windows vCenter to vCSA 6.5 also enables migration of VUM to vCSA embedded VUM
        • Export baselines from the Windows VUM to the appliance
        • Supports VUM running on the same appliance as the vCenter Server service, or as an external appliance
        • VUM client fully integrated into the Web Client
  • Enhanced Auto Deploy:
    • New capabilities such as UI support, improved performance and scale, backup and restore of rules for Auto Deploy.
  • Improvements in Host Profiles:
    • Streamlined user experience and host profile management with several new capabilities including DRS integration, parallel host remediation, and improved audit quality compliance results.
  • VMware Tools Lifecycle Management:
    • Simplified and scalable approach to installing and upgrading VMware Tools, reboot-less upgrades for Linux Tools, OSP upgrades, and enhanced version and status reporting via API and UI.
  • Web Client improvements
    • Performance & Usability
    • HTML 5 enablement (Embedded HTML 5 host client as well as the HTML5 Web Client)
  • vCenter Appliance
    • Native HA solution for the VCSA (out of the box)
    • Out of the box backup and restore (file based rather than snapshot based)
    • Enhanced scale and performance (without adding to the underlying host hardware)
    • VUM is now embedded in the VCSA – Yes finally…!!
      • Web client UI for VUM & Auto deploy capability (Auto deploy caching proxies available)
    • Host Profile enhancements
  • Simplified deployment
    • Migration tool from Windows vCenter to the VCSA (including VC and VUM migration as a single step, achieving the upgrade and migration together)
    • CLI interface for VC install, upgrade and migrate – Scripted install and update capability for VC
    • Enhanced UI experience
  • Availability
    • Proactive HA
      • Detects catastrophic health conditions in hosts and notifies the VI admin, along with remediation steps…etc
      • Ability to vMotion VMs from partially degraded hosts
    • Predictive DRS
      • Evolve DRS to use prediction data from vROPS – Yes..!! Was just a matter of time….!!
      • Perform pre-emptive actions to prepare for CPU/Memory changes
      • Re-balancing of cluster proactively after maintenance events
    • Orchestrated VM Restart using HA:
      • Orchestrated restart allows admins to create dependency chains on VMs or VM groups, allowing for a restart order of these dependencies and multi-tiered applications should an HA restart occur.
      • Not only will Orchestrated restart do this in the order specified by the admin, it can also wait until the previous VM is running and ready before beginning the HA restart of a dependent VM.
    • Fault Tolerance
      • Scalability limits stay the same (up to 4 vCPUs / 64GB vRAM per FT VM and 8 FT vCPUs / 64GB vRAM per host, as before)
      •  Improvements in vSphere 6.5
        • Performance improvements in maximum and average response times
          • Reduced max latency from 100ms to 12ms and an average of 1ms through FT algorithm optimisations (i.e. avg ping response down to 1.1ms from 6.6ms in vSphere 6.0, increased TCP request / response throughput, increased bandwidth)
        • Interoperates with VSAN (already supported since 6.0 U1)
          • Preserves storage policies on VMs in a VSAN cluster
        • Interoperate with DRS
          • DRS considers FT requirements in determining optimal initial host placement
        • Multiple NIC aggregation for improved FT network performance
      • Future roadmap discussion topics for FT (no guarantee)
        • Restart FT VM in a different geographical site
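
As a rough illustration of the CLI-driven vCenter lifecycle management mentioned above, the general flow is sketched below. The template file name is one of the samples I recall shipping on the vCSA 6.5 ISO and the flags are from memory, so treat both as assumptions and check ./vcsa-deploy --help and the bundled template examples on your build before using.

  # From the vcsa-cli-installer directory on the mounted vCSA 6.5 ISO (lin64 shown; win32 and mac variants also ship)
  # 1. Copy one of the bundled sample templates and edit it for your environment
  cp templates/install/embedded_vCSA_on_ESXi.json my-vcsa.json
  vi my-vcsa.json    # set the target ESXi host / vCenter, appliance name, networking and SSO details

  # 2. Run the install (the same binary also supports 'upgrade' and 'migrate' modes)
  ./vcsa-deploy install --accept-eula my-vcsa.json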

vSphere Compute

  • Expanded Support for New Hardware, Architectures and Guest Operating Systems:
    • Expanded support for the latest x86 chipsets, devices and drivers.
    • NVMe enhancements, and several new performance and scale improvements due to the introduction of native driver stack.
  • Guest OS and Customization Support:
    • Continued broad support for guest OSes, including recent Windows 10 builds, the latest RHEL 7.x, Ubuntu 16.x, SUSE 12 SPx and CoreOS 899.x releases, and a Tech Preview of Windows Server 2016.
  • VMware Host Client:
    • HTML5-based UI to manage individual ESXi hosts.
    • Supported tasks include creating and updating of VM, host, networking and storage resources, VM console access, and performance graphs and logs to aid in ESXi troubleshooting.
    • Negligible host requirements
    • Console access to VM through the WebMKS
    • HTML5 redirection for the vSphere client (C#)
  • Virtual Hardware 13:
    • VMs with up to 6TB of memory, plus UEFI secure boot for the guest OS.
  • Increased Scalability and Performance for ESXi and vCenter Server:
    • Continued increases in scale and performance beyond vSphere 6
      • Cluster maximums increased to support up to 64 nodes and 8K VMs.
      • Virtual Machines supported up to 128 vCPUs and 6TB vRAM 
      • Hosts supported up to 480 physical CPUs , 12 TB RAM,
      • 64 TB data stores
      • 1000+ VMs.

vSphere Storage

  • Enhancements to Storage I/O Control:
    • Support for I/O limits, shares and reservations is now fully integrated with Storage Policy-Based Management.
    • Delivers comprehensive I/O prioritization for virtual machines accessing a shared storage pool.
  • Storage Policy-Based Management Components:
    • Easily create and reuse Storage Policy Components in policies to effectively manage a multitude of data services including encryption, caching, replication, and I/O control (via SPBM).
  • Enhancements in NFS 4.1 client:
    • Support for stronger cryptographic algorithms with Kerberos (AES), support for IPV6 with Kerberos and also support for Kerberos integrity check (SEC_KRB5i).
    • PowerCLI support for NFS 4.1 is also included in this release (a quick esxcli example is sketched after this list).
  • Increased Datastore & Path limit:
    • Number of LUNs supported per host increased to 1024 and number of Paths increased to 4096.
  • Native support for 4k native drives in 512e mode
    • Also means VSAN 6.5 now supports large 4k drives
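
As a quick illustration of the NFS 4.1 client mentioned above, mounting an NFS 4.1 datastore from the ESXi shell looks roughly like the below. The hostnames and export paths are made up, and the flag used to request Kerberos security is from memory, so verify the exact option names with esxcli storage nfs41 add --help on your build.

  # Mount an NFS 4.1 export (multiple server addresses can back the same share)
  esxcli storage nfs41 add -H 10.0.0.21,10.0.0.22 -s /export/ds01 -v nfs41-ds01

  # Optionally request Kerberos (the host must be AD-joined with NFS Kerberos credentials configured);
  # the -a/--sec parameter name is an assumption - check the esxcli reference for your release
  esxcli storage nfs41 add -H nfs01.lab.local -s /export/secure01 -v nfs41-sec01 -a SEC_KRB5i

  # List the NFS 4.1 mounts on the host
  esxcli storage nfs41 list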

Management

  • vSphere Web Client enhancements:
    • New Web Client UI features like Custom Attributes, Object Tabs, and Live Refresh are presented alongside other performance and usability improvements.
  • Content Library Improvements:
    • Enhancements to Content Library including ISO mount to a VM directly from the Content Library, VM Guest OS customization, simplified library item update capabilities and optimizations in streaming content between vCenter Servers.
  • Enhanced DRS:
    • Enhancements to DRS settings with the addition of DRS Policies, which provide an easier way to set advanced options, including capabilities like even distribution of virtual machines, consumed vs. active memory, and CPU over-commitment.

Security

  • Secure Boot Support for ESXi Host and Guest VM:
    • UEFI secure boot for ESXi and VMs – protection against image tampering during boot (a quick host-side validation check is sketched after this list)
      • At boot time, we have assurance that ESXi and guest VMs are booting the right set of VIBs.
      • If the trust is violated, ESXi and the VMs will not boot, and customers can capture the outcome.
  • Enhanced vCenter Events, Alarms and vSphere Logging:
    • Enhancements to vSphere Logging and events to provide granular visibility into current state, changes made, who made the changes and when.
    • Delivers audit-quality logging – easier auditing, troubleshooting and forensic analysis using logs
  • Other security enhancements
    • VM encryption (Disk + Data) – Can be used to lock down critical VMs
    • Provide file integrity monitoring to meet PCI DSS requirements
    • Encrypted vMotion – Yes finally..!! (provide secure vMotion)
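
On the secure boot point above, ESXi 6.5 ships a validation script that checks whether the host’s bootloader and installed VIBs are in a state that supports UEFI Secure Boot. The path below is as I recall it from the vSphere 6.5 security documentation, so treat it as an assumption and verify on your build.

  # Run from the ESXi 6.5 shell: reports whether the host can be switched to UEFI Secure Boot
  /usr/lib/vmware/secureboot/bin/secureBoot.py -c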

 

There you have it. Some really cool and really innovative features and improvements being delivered by VMware as always. Also note that this is not a major product platform release but only a minor (step) release, so the new feature set is relatively modest. Expect bigger and better changes in the next version of vSphere when it’s due out (perhaps next year??)

Slide credit goes to VMware…!!

Cheers

Chan

vSphere Integrated Containers – My thoughts


During VMworld 2016, one thing that struck me was the continued focus VMware appears to have on containerisation. I have been looking at containerisation over the last year and a half with interest, to understand the concept, the current capabilities of the available platforms and the practical use for the typical customer. I was also naturally keen on what companies such as VMware and Microsoft have to offer on the same front. VMware announced a number of initiatives such as vSphere Integrated Containers & the Photon platform during VMworld 2015 as their answer to containerisation, and having been looking at their solutions, and also having seen & listened to various speakers / engineers / evangelists during the VMworld 2016 US event, it kind of emphasised the need for me to venture further into containerisation and especially VMware’s solutions to it. So I’m gonna begin with a quick intro blog post about one of VMware’s approaches to containers and what my thoughts are on the solution. I will aim to provide future posts that dig deeper into the architecture and the deployment aspects of it…etc.

On the containers front, VMware’s strategy is focused on 2 key solution offerings, vSphere Integrated Containers and the Photon platform. While the Photon solution is not yet quite ready for production deployment in my view, it’s aimed at greenfield customers who currently do not have legacy vSphere deployments and are starting out afresh. VIC on the other hand is available today & specifically aimed at existing vSphere customers, hence the main focus of this post.

vSphere Integrated Containers (VIC)

This is the containerisation solution for existing VMware vSphere customers and has been designed to extend vSphere capabilities to the containerised world (or vice versa, depending on how you look at it). It is predominantly aimed at existing vSphere customers who want to jump on to or explore containerised app development for production use.

For those of you who are new to VIC, here’s a quick intro.

In addition to the typical vSphere components, the VIC solution itself consists of 3 main components:

  1. VIC Engine – A container runtime for vSphere which is deployed on to ESXi. This is an open-source development and is available on GitHub. It allows developers familiar with Docker container development to deploy containers alongside existing VMs on an ESXi / vSphere platform, directly manageable from the vSphere UI (Web Client). The VIC engine endpoint is referred to as the VCH (Virtual Container Host) and is backed by a vSphere resource pool, typically within a cluster (a rough deployment sketch follows this list). It also contains a copy of the container images, which are mapped as VMDKs on traditional vSphere storage such as a VSAN datastore.
  2. Harbor – An enterprise-class registry service that stores & distributes Docker images, adding security, identity and management features for the enterprise. It can be used as a local, on-premise Docker registry so that enterprises using Docker containers won’t have to worry about the security concerns of using the public Docker repository over the internet.
  3. Admiral – A scalable, lightweight container management platform used to deploy and manage container-based applications.
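
As a rough sketch of how the VCH mentioned in point 1 gets deployed, the VIC engine ships a vic-machine binary that carves the VCH endpoint out of an existing cluster. The target, credentials, network and datastore names below are made up and the flag names are from memory (they vary between VIC engine versions), so this is an illustrative sketch rather than a definitive command line – check vic-machine create --help for your release.

  # Deploy a Virtual Container Host into an existing vSphere cluster (illustrative values only)
  ./vic-machine-linux create \
      --target vcenter01.lab.local \
      --user administrator@vsphere.local \
      --compute-resource Cluster01 \
      --image-store vsanDatastore \
      --bridge-network vic-bridge \
      --name vch01 \
      --no-tlsverify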

Together with vSphere, VIC provides customers the ability to deliver a container-based solution in a production environment without having to build a dedicated environment exclusively for containers.

The main difference between a native container approach, such as Docker on Linux, and VIC is that:

  • Docker on Linux:  Docker utilises a native Linux kernel feature called namespaces. While more information can be found here, Docker on Linux relies on spawning multiple namespaces / containers within the same Linux server instance, so spinning up an application service (that runs inside a container) is super fast (say, compared to powering on a VM with a full blown OS which takes time to load up and then launch the application). The same applies when you stop an application service (it just stops the underlying container on the Linux kernel). Both these operations are executed in memory.
  • VMware Integrated Containers:  The container instance runs in a dedicated, micro OSE (Operating System Environment) called a JeVM (Just Enough VM), which consists of a minimalistic version of the Linux kernel that is just sufficient to run a container instance. This kernel is derived from VMware’s project Photon. The Photon platform itself is separate to the VIC solution and is the second approach VMware are taking to containers and Cloud Native Applications, specifically aimed at greenfield deployments where you do not have an existing vSphere stack. In the case of VIC, it is important to remember that the Photon project code used within this micro VM consists of the minimal requirements to run a Docker container instance (the Linux kernel and a few additional supporting resources, giving it a minimal footprint). The JeVM also uses the Instant Clone feature available in vSphere 6.0 to quickly spin up JeVMs for container instantiation (upon “docker run”, for example), so they start up and close down at near native speeds compared to a native container on Linux. In return for this slightly fatter approach, customers get a similar experience when it comes to managing these container environments to that of their legacy infrastructure, as the existing VMware tools such as vROPS, NSX…etc are all compatible with them (no such compatibility when running native Linux containers with Docker).


The typical VIC architecture is described below.


At the foundation of VIC is vSphere, the same infrastructure that customers have standardized on for all applications from test/dev to business critical apps. VIC adds a graphical plug-in to the Web Client for management and monitoring. The Virtual Container Host provides a Docker API endpoint backed by a vSphere resource pool – beyond one VM or dedicated physical host. The Instant Clone template runs the Photon OS Linux kernel. Developers interact from standard Docker command line interfaces or API clients. Docker commands are mapped to corresponding vSphere actions by the VCH. A request to run a new image invokes Instant Clone to rapidly fork new “just enough” VMs (JeVMs) for execution of the container. Traditional apps can also run alongside containers on the VCH.
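
To make that developer-facing side concrete, the interaction is just the standard Docker client pointed at the VCH’s Docker API endpoint. The endpoint address, port and Harbor registry name below are made-up examples (the port and TLS options depend on how the VCH was deployed), so treat them as assumptions.

  # Point the standard Docker client at the VCH endpoint
  docker -H vch01.lab.local:2376 --tls info

  # Each "docker run" is translated by the VCH into an Instant Clone of the Photon OS template
  docker -H vch01.lab.local:2376 --tls run -d -p 80:80 nginx

  # Images can be pushed to / pulled from an on-premise Harbor registry instead of Docker Hub
  docker tag nginx harbor.lab.local/library/nginx:1.0
  docker push harbor.lab.local/library/nginx:1.0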

As for my thoughts, if you are an existing VMware customer, VIC gives you the best of both worlds, where you can benefit from the existing infrastructure while also benefiting from the agility available through the use of Docker container instances. For example, during the VMworld 2016 US event, VMware’s head of the Cloud Native Applications BU, Kit Colbert, demoed the integration of vSphere Integrated Containers with vROPS, where even containerised apps can have the typical health and performance details shown via vROPS dashboards, much like legacy apps – capabilities that are not natively available with vanilla Docker instances. He also demoed the vRA integration, which enables developers to self-service containerised application storage placement through a policy change that automatically moves the container VM / image content over from one VSAN storage tier to another. I believe such inter-operability and integration with the legacy toolkit is very important for mass adoption of containerised apps going forward, especially for existing customers with legacy tools and apps. Furthermore, the VIC solution also integrates with NSX to extend networking security components in to the container VMs / instances too, which is totally cool.

Most importantly, VIC is available free as an open-source download for all VMware customers, which makes the case for it even more appealing.

Cheers

Chan

P.S. Slide credit goes to VMware

#Cloud Native #VIC #Photon #VMware #VMworld

VVDs, Project Ice, vRNI & NSX – Summary Of My Breakout Sessions From Day 1 at VMworld 2016 US –


A quick post to summarise the sessions I attended on day 1 at @VMworld 2016 and a few interesting things I noted. First up are the 3 sessions I had planned to attend + the additional session I managed to walk in to.

Breakout Session 1 – Software Defined Networking in VMware validated Designs

  • Session ID: SDDC7578R
  • Presenter: Mike Brown – SDDC Integration Architect (VMware)

This was a quick look at the VMware Validated Designs (VVD) in general and the NSX design elements within the SDDC stack design in the VVD. If you are new to VVDs and are typically involved in designing solutions using the VMware software stack, it is genuinely worth reading up on, and you should try to replicate the same design principles (within your solution design constraints) where possible. The idea being that this will enable customers to deploy robust solutions that have been pre-validated by experts at VMware, in order to ensure the highest level of cross-solution integrity for maximum availability and agility required for a private cloud deployment. Based on typical VMware PSO best practices, the design guide (Ref architecture doc) lists out each design decision applicable to each of the solution components along with the justification for that decision (through an explanation) as well as the implication of that design decision. An example is given below.


I first found out about the VVDs during last VMworld in 2015 and mentioned them in my VMworld 2015 blog post here. At the time, despite the announcement of availability, not much content was actually available as design documents, but it’s now come a long way. The current set of VVD documents discuss every design, planning, deployment and operational aspect of the following VMware products & versions, integrated as a single solution stack based on VMware PSO best practices. It is based on a multi-site (2 sites) production solution that customers can replicate in order to build similar private cloud solutions in their environments. This documentation set fills a great big hole that VMware has had for a long time in that, while their product documentation covers the design and deployment detail for individual products, no such documentation was available for integrating multiple products – with VVDs, it now is. In a way they are similar to CVD documents (Cisco Validated Designs) that have been in use for the likes of FlexPod for VMware…etc.

VVDs generally cover the entire solution in the following 4 stages. Note that not all the content is fully available yet, but the key design documents (Ref Architecture docs) are available to download now.

  1. Reference Architecture guide
    1. Architecture Overview
    2. Detailed Design
  2. Planning and preparation guide
  3. Deployment Guide
    1. Deployment guide for region A (primary site) is now available
  4. Operation Guide
    1. Monitoring and alerting guide
    2. backup and restore guide
    3. Operation verification guide

If you want to find out more about VVDs, have a look at the following links. Just keep in mind that the current VVD documents are based on a fairly large, cost-no-object type of design, and for those of you who are looking at much smaller deployments, you will need to exercise caution and common sense to adapt some of the recommended design decisions to be within the applicable cost constraints (for example, the current NSX design includes deploying 2 NSX managers, 1 integrated with the management cluster vCenter and the other with the compute cluster vCenter, meaning you need NSX licenses on the management cluster too. This may be overkill, as typically, for most deployments, you’d only deploy a single NSX manager integrated to the compute cluster).

As for the VMworld session itself, the presenter went over all the NSX related design decisions and explained them, which was a bit of a waste of time for me as most people would be able to read the document and understand most of those themselves. As a result I decided to leave the session early, but have downloaded the VVD documents in order to read them thoroughly at leisure. 🙂

Breakout Session 2 – vRA, API, CI, Oh My!

  • Session ID: DEVOP7674
  • Presenters


As I managed to leave the previous session early, I managed to just walk in to this session which had just started next door. Both Kris and Ryan were talking about DevOps best practices with vRealize Automation and vRealize Code Stream. They were focusing on how developers who are using agile development and want to invoke infrastructure services can use these products and invoke their capabilities through code, rather than through the GUI. One of the key focus areas was the vRA plugin for Jenkins, and if you are a DevOps person or a developer, this session content would be of great value. If you can gain access to the slides or the session recordings after VMworld (or are planning to attend VMworld 2016 Europe), I’d highly encourage you to watch this session.

Breakout Session 3 – Secure and extend your data center to the cloud using NSX: A perspective for service providers and end users

  • Session ID: HBC7830
  • Presenters
    • Thomas Hobika – Director, America’s Service Provider Solutions Engineering & Field Enablement, vCAN, vCloud Provider Software Business Unit (VMware)
    • John White – Vice president of product strategy (Expedient)


This session was about using NSX and other products (i.e. Zerto) to enable push-button Disaster Recovery for VMware solutions, presented by Thomas; John was supposed to talk about their involvement in designing this solution. I didn’t find the session content that relevant to the listed topic, to be honest, so I left fairly early to go to the blogger desks and write up my earlier blog posts from the day, which I thought was a better use of my time. If you would like more information on the content covered within this session, I’d look here.

 

Breakout Session 4 – Practical NSX Distributed Firewall Policy Creation

  • Session ID: SEC7568
  • Presenters
    • Ron Fuller – Staff Systems Engineer (VMware)
    • Joseph Luboimirski – Lead virtualisation administrator (University of Michigan)

A fairly useful session focusing on the NSX distributed firewall capability and how to effectively create a zero trust security policy on the distributed firewall using various tools. Ron talked about the various options available, including manual modelling based on existing firewall rules and why that could potentially be inefficient and would not allow customers to benefit from the versatility available through the NSX platform. He then mentioned other approaches such as analysing traffic through the use of vRealize Network Insight (the Arkin solution), which uses automated collection of IPFIX & NetFlow information from the virtual Distributed Switches to capture traffic, and how that captured data could potentially be exported out and manipulated to form the basis for the new firewall rules. He also mentioned the use of vRealize Infrastructure Navigator (vIN) to map out process and port utilisation, as well as using the Flow Monitor capability to capture existing communication channels to form the basis of the distributed firewall design. The session also covered how to use vRealize Log Insight to capture syslogs as well.

All in all, a good session that was worth attending, and one I would keep an eye out for, especially if you are using / thinking about using NSX for advanced security (using the DFW) in your organisation’s network. vRealize Network Insight really caught my eye, as I think the additional monitoring and analytics available through this platform, as well as the graphical visualisation of network activity, appear to be truly remarkable (which explains why VMware integrated this into the Cross-Cloud Services SaaS platform as per this morning’s announcement), and I cannot wait to get my hands on this tool to get to the nitty-gritty.

If you are considering a large or complex deployment of NSX, I would seriously encourage you to explore the additional features and capabilities that this vRNI solution offers, though it’s important to note that it is licensed separately from NSX at present.


 

Outside of these breakout sessions and the blogging time in between, I managed to walk around the VM Village to see what’s out there, and was really interested in the Internet of Things area where VMware was showcasing their IOT related solutions currently in R&D. VMware are currently actively developing a heterogeneous IOT platform monitoring solution (internal code name: Project Ice). The current phase of the project is about partnering up with relevant IOT device vendors to develop a common monitoring platform to monitor and manage the various IOT devices being manufactured by various vendors in various areas. If you have a customer looking at IOT projects, there are opportunities available now within Project Ice to sign up with VMware as a beta tester and co-develop and co-test the Ice platform to perform monitoring of these devices.

An example of this is what VMware has been doing with Coca-Cola to monitor various IOT sensors deployed in drinks vending machines, and a demo was available in the booth for all to see.


Below is a screenshot of the Project Ice monitoring screen that was monitoring the IOT sensors of this vending machine.

The solution relies on an open-source, vendor-neutral SDK called LIOTA (Little IOT Agent) to develop a vendor-neutral agent to monitor each IOT sensor / device and relay the information back to the Ice monitoring platform. I would keep an eye out on this, as the use cases for such a solution are endless and can be applied on many fronts (automobiles, ships, trucks, airplanes as well as general consumer devices). One can argue that the IOT sensor vendors themselves should be responsible for developing these monitoring agents and platforms, but most of these device vendors do not have the knowledge or the resources to build such intelligent back end platforms, which is where VMware can fill that gap through a partnership.

If you are in to IOT solutions, this is defo one to keep your eyes on for further developments & product releases. This solution is not publicly available as of yet, though having spoken to the product manager (Avanti Kenjalkar), they are expecting a big announcement within 2 months’ time, which is totally exciting.

Some additional details can be found in the links below

Cheers

Chan

#vRNI #vIN #VVD #DevOps #PushButtonDR #Arkin #ProjectIce #IOT #LIOTA

3. VMware vSphere 6.x – vCenter Server Appliance Deployment

<- Index page – VMware vSphere 6.x Deployment Process

In the previous article, we deployed an external PSC appliance and replaced its default root CA cert with a cert from an existing enterprise CA, such that every time VMCA assigns a cert to either vCenter or, in turn, ESXi servers, it will have the full enterprise CA certificate chain rather than just vSphere’s cert chain.

Note the below design notes related to the vCenter server deployment illustrated here

  • Similar to PSC, vCenter server will also be deployed using the VMware appliance (VCSA)
  • A single vCenter instance is often sufficient with most requirements given that VMware HA will protect it from hardware failures.

Let’s now quickly look at a typical deployment of the vCenter server (appliance).

Note: Deployment of vCenter Server using the VCSA is somewhat identical to the earlier illustrated deployment of the PSC, in that it’s the same appliance being deployed – instead of selecting the PSC mode, we are selecting the vCenter Server mode this time.

  1. Download the VMware vCSA appliance ISO from VMware, mount the ISO image on your workstation / jump host and launch the vcsa-setup.html file found on the root of the ISO drive.
  2. Now click Install.
  3. Accept the EULA and click Next.
  4. You can deploy the appliance directly to an ESXi host or deploy through a vCenter. Provide your target server details here with credentials.
  5. Type the appliance’s VM name & root password for the appliance’s Linux OS. Make a note as you’ll need this later.
  6. Select the appropriate deployment type. We are deploying an external vCenter Server here, for use with an external PSC.
  7. We are now connecting the vCenter VCSA to the previously deployed PSC instance using the SSO details we configured.
  8. Select the appropriate vCenter Server VCSA appliance size, based on the intended workload of the vCenter.
  9. Select the destination datastore to deploy the vCSA appliance on to.
  10. Now select the vCenter database type. I’m using PostgreSQL here (built-in) as this will now likely be the preferred choice for many enterprise customers, as it’s decent enough to scale up to 10,000 VMs and you don’t have to pay for a SQL Server license. Those handful of customers who have an existing Oracle DB server can use Oracle here too.
  11. Now provide the IP & DNS details. Ensure you provide a valid NTP server and check that the time syncs properly from this source.
    1. Note here that you need to manually create the DNS server entry (if you haven’t done this already) for the VCSA appliance and ensure it resolves the name correctly to the IP used here, before proceeding any further..!
  12. Verify the settings and proceed to start deploying the appliance.
  13. Deployment progress and completion.

SSL Certificate verifications & Updates (Important)!!

We’ve already updated the PSC’s default root certificate with an Enterprise CA signed root certificate in a previous step (section “Optional – Replace the VMCA root certificate”, as explained here). So when you add the vCenter appliance to the PSC (which we’ve already performed earlier in this article), all the relevant certificates are supposed to be automatically created and allocated by the VMCA on the vCenter. However, I’ve seen issues with this, so just to be on the safe side, I recommend we follow the rest of the steps involved in the KB article 2111219, under the section “Replacing VMCA of the Platform Services Controller with a Subordinate Certificate Authority Certificate”, as follows.

  1. Replacing the vSphere 6.0 Machine SSL certificate with a VMware Certificate Authority issued certificate (2112279) – On the vCenter Server Appliance
  2. Replacing the vSphere 6.0 Solution User certificates with VMware Certificate Authority issued certificates (2112281) – On the vCenter Server Appliance
  3. If you use Auto Deploy, you may want to consider applying the fix mentioned in the KB article 2123631. Otherwise, go to the next task.
  4. Follow the VMware KB 2109074 and
    1. Follow the listed “Task 0 – Validating the sslTrust Anchors for the PSC and vCenter” – This needs to be tested on both the PSC appliance as well as the vCenter appliance, as instructed.
    2. If the certificates don’t match, also follow the rest of the tasks as indicated.
    3. Validating this here can save you lots of headache down the line…!! (a quick way to eyeball the certificate presented by each appliance is sketched below)
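
As a quick way to eyeball the certificate each appliance presents during those checks (the hostname below is illustrative), something like this from any machine with openssl available works; after the replacement you should see your enterprise CA in the issuer chain rather than a standalone VMCA-only chain.

  # Show the issuer, subject and validity dates of the machine SSL certificate presented on port 443
  echo | openssl s_client -connect vcsa01.lab.local:443 -showcerts 2>/dev/null | openssl x509 -noout -issuer -subject -dates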

 

That’s pretty much it for the deployment of the VCSA appliance in vCenter mode rather than the PSC mode.

Adding ESXi Servers to the vCenter server

Important note: If you decide to add the ESXi nodes to the vCenter straight away, please be aware of the fact that if the Enterprise subordinate certificate that replaced the VMCA root certificate has been valid for less than 24 hours, you CANNOT add any ESXi hosts, as this is by design. See the KB 2123386 for more information. In most enterprise deployments, where the Enterprise subordinate certificate would likely have been issued a few days in advance of the actual PSC & VCSA deployment, this would be a non-issue, but if you are one of those where you’ve obtained the cert from your Enterprise CA less than 24 hours ago, you need to wait before you can add ESXi servers to the vCenter server.
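
If you’re unsure whether you fall foul of that 24-hour rule, checking the issue date on the subordinate CA certificate file you received from your enterprise CA is trivial (the file name below is illustrative):

  # Print the notBefore / notAfter timestamps of the subordinate CA certificate
  openssl x509 -in vmca-subordinate.cer -noout -dates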

 

That’s it. Now it’s time to configure your vCenter server for AD authentication via the PSC and all other post-install config tasks as required.

Cheers

Chan

2. VMware vSphere 6.x – Platform Service Controller Deployment

<- Index page – VMware vSphere 6.x Deployment Process

Following on from the previous article, let’s now look at how we go about carrying out a typical enterprise deployment of vSphere 6, and first up is the deployment of the PSC. (Note that normally the 1st thing to do is to deploy ESXi, but since the ESXi deployment with 6.x is pretty much the same as its 2 previous iterations, I’m going to skip it, assuming that it’s somewhat mainstream knowledge now.)

Given below are the main deployment steps involved in deploying the Platform Services Controller. Note the points below regarding the PSC design being deployed here.

  • Single, external PSC appliance will be deployed with 2 vCenter server appliances associated with it (topology 2 of the recommended deployment topologies listed here by VMware) as this is likely going to be the most popular deployment model for most people.
  • A lot of people may wonder why there is no resiliency for the PSC here. While the PSC can be deployed behind a load balancer for HA, it’s a bit of an overkill, especially with vSphere 6.0 Update 1, which now supports repointing an existing vCenter Server to another PSC node if it’s in the same SSO domain. For more information, see this priceless article by William Lam @ VMware, which also shows how you can automate this manual repointing if need be.

Let’s take a look at the PSC appliance deployment steps.

  1. Download the VMware vCSA appliance ISO from VMware, mount the ISO image on your workstation / jump host and launch the vcsa-setup.html file found on the root of the ISO drive. (Since this has not specifically been mentioned elsewhere, it should be noted that the PSC appliance deployment is part of the same vCenter Server Appliance (vCSA), but during the deployment you specify that you only want the PSC services deployed.)
  2. Now click Install.
  3. Accept the EULA and click Next.
  4. You can deploy the appliance directly to an ESXi host or deploy through a vCenter. Provide your target server details here with credentials.
  5. Type the appliance’s VM name & root password for the appliance’s Linux OS. Make a note as you’ll need this later.
  6. Select the appropriate deployment type. We are using the external PSC here.
  7. We are creating a new SSO domain, so provide the required details here.
  8. The appliance size is not modifiable here as we’ve selected the PSC mode earlier (the size is the same for all).
  9. Select the destination datastore to deploy the PSC appliance on to.
  10. Now provide the IP & DNS details. Ensure you provide a valid NTP server and check that the time syncs properly from this source.
    1. Ensure the DNS entries are manually added to the AD for the PSC before proceeding with this step, as the PSC deployment may return errors if the FQDN cannot be resolved correctly.
  11. Review the deployment settings and click Finish to proceed with the appliance deployment.
  12. Deployment progress and completion.
  13. Once complete, ensure you can connect to the PSC web page using the URL http://<PSC FQDN>/websso
  14. You can also connect to the appliance configuration page using port 5480, as is the case with most VMware products that ship as appliances. The URL is http://<FQDN of the PSC appliance>:5480 and the credentials are root and the password specified during deployment earlier. (A quick scripted check of both URLs is sketched after this list.)
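
For steps 13 and 14, a quick scripted sanity check of both URLs can be done with curl from any machine that can reach the PSC (the hostname is illustrative, and -k simply skips certificate validation, which is fine for a reachability test):

  # PSC web SSO page and the appliance management interface (VAMI) on port 5480
  curl -k -I https://psc01.lab.local/websso/
  curl -k -I https://psc01.lab.local:5480/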

Optional – Replace the VMCA root certificate

This is only required if you have an enterprise CA hierarchy already in place within your organisation, such as a Microsoft CA. However, if you are a WINTEL house, I would highly recommend that you deploy a Microsoft Enterprise CA using Windows Server as it is quite useful for many use cases, including automation tasks involved with XaaS platforms. (i.e. Running vRO workflows to create an Active Directory user cannot happen without an LDAPS connection for which the Domain Controllers need to have a valid certificate….etc.). So, if you have an Enterprise CA, you should make the PSC a subordinate certificate authority by replacing its default root cert with a valid cert from the Enterprise CA.
Note that this should ideally happen before deploying the vCenter server appliance, in order to keep the process simple.
  1. To do this, follow the steps listed out in the VMware KB 2111219, under the section “Replacing VMCA of the Platform Services Controller with a Subordinate Certificate Authority Certificate”. (To be specific, if your deployment is greenfield and you are following my order of component deployment, which means vCenter Server has not yet been deployed, ONLY follow the first 3 steps listed under that section.) I’ve listed them below FYI.
    1. Creating a Microsoft Certificate Authority Template for SSL certificate creation in vSphere 6.0 (2112009)
    2. Configuring vSphere 6.0 VMware Certificate Authority as a subordinate Certificate Authority (2112016)
    3. Obtaining vSphere certificates from a Microsoft Certificate Authority (2112014)
  2. DO NOT follow the rest of the steps yet (unless you already have a vCenter server attached to the PSC) as they are NOT YET required.

 

PSC configuration

There is not much to configure on PSC at this stage as the SSO configuration and integration with AD will be done at a later stage, once the vCenter Server Appliances have also been deployed with the vCenter Server service.

 

There you have it. Your PSC appliance is now deployed and the default VMCA root certificate is also replaced with a subordinate certificate from your existing enterprise CA, so that your VMware vSphere components that receive a cert from VMCA will have the full organisational cert chain, all the way from the enterprise root CA cert, to the VMCA issued cert.

Next, we’ll look at the VCSA appliance deployment and configuration.

 

1. VMware vSphere 6.x – Deployment Architecture Key Notes

<-Home Page for VMware vSphere 6.x articles

The first thing to do in a vSphere 6.x deployment is to understand the new deployment architecture options available on the vSphere 6.0 platform, which is somewhat different from the previous versions of vSphere. The below will highlight key information but is not a complete guide to all the changes, etc. For that, I’d advise you to refer to the official vSphere documentation (found here).

Deployment Architecture

The deployment architecture for vSphere 6 is somewhat different from the legacy versions. I’m not going to document all of the architectural differences (please refer to the VMware product documentation for vSphere 6), but I will mention a few of the key ones which I think are important, in bullet points below.

  • vCenter Server – Consists of 2 key components
    • Platform Services Controller (PSC)
      • The PSC includes the following components
        • SSO
        • vSphere Licensing Server
        • VMCA – VMware Certificate Authority (a built-in SSL certificate authority to simplify certificate provisioning to all VMware products including vCenter, ESXi, vRealize Automation….etc. The idea is that you associate this with your existing enterprise root CA or a subordinate CA, such as a Microsoft CA, and point all VMware components at this.)
      • PSC can be deployed as an appliance or on a windows machine
    • vCenter Server
      • Appliance (vCSA) – Include the following services
        • vCenter Inventory server
        • PostgreSQL
        • vSphere Web Client
        • vSphere ESXi Dump collector
        • Syslog collector
        • Syslog Service
        • Auto Deploy
      • Windows version is also available.

Note: ESXi remains the same as before without any significant changes to its core architecture or the installation process.

Deployment Options

What’s in red below are the deployment options that I will be using in the subsequent sections to deploy vSphere 6 u1 as they represent the likely choices adopted during most of the enterprise deployments.

  • Platform Services Controller Deployment
    • Option 1 – Embedded with vCenter
      • Only suitable for small deployments
    • Option 2 – External – Dedicated separate deployment of PSC to which external vCenter(s) will connect to
      • Single PSC instance or a clustered PSC deployment consisting of multiple instances is supported
      • 2 options supported here.
        • Deploy an external PSC on Windows
        • Deploy an external PSC using the Linux based appliance (note that this option involves deploying the same vCSA appliance but during deployment, select the PSC mode rather than vCenter)
    • The PSC needs to be deployed first, followed by the vCenter deployment, as concurrent deployment of both is NOT supported!
  • vCenter Server Deployment – vCenter Deployment architecture consist of 2 choices
    • Windows deployment
      • Option 1: with a built-in PostgreSQL DB
        • Only supported for a small – medium sized environment (20 hosts or 200 VMs)
      • Option 2: with an external database system
        • Only external database system supported is Oracle (no more SQL databases for vCenter)
      • This effectively means that you are now advised (indirectly, in my view) to always deploy the vCSA version as opposed to the Windows version of vCenter, especially since the feature gap between the vCSA and Windows vCenter versions has now been bridged
    • vCSA (appliance) deployment
      • Option 1: with a built-in PostgreSQL DB
        • Supported for up to 1000 hosts and 10,000 VMs (This I reckon would be the most common deployment model now for vCSA due to the supported scalability and the simplicity)
      • Option 2: with an external database system
        • As with the Windows version, only Oracle is supported as an external DB system

PSC and vCenter deployment topologies

Certificate Concerns

  • VMCA is a complete Certificate Authority for all vSphere and related components where the vSphere related certificate issuing process is automated (happens automatically during adding vCenter servers to PSC & adding ESXi servers to vCenter).
  • For those who already have a Microsoft CA or a similar enterprise CA, the recommendation is to make the VMCA a subordinate CA so that all certificates allocated from VMCA to all vSphere components will have the full certificate chain, all the way from your Microsoft root CA(i.e. Microsoft Root CA cert->Subordinate CA cert->VMCA Root CA cert->Allocated cert, for the vSphere components).
  • In order to achieve this, the following steps need to be followed in the listed order.
    • Install the PSC / Deploy the PSC appliance first
    • Use an existing root / enterprise CA (i.e. Microsoft CA) to generate a subordinate CA certificate for the VMCA and replace the default VMCA root certificate on the PSC.
      • To achieve this, follow the VMware KB articles listed here.
      • Once the certificate replacement is complete on the PSC, do follow the “Task 0” outlined here to ensure that the vSphere service registrations with the VMware lookup service are also updated. If not, you’ll have to follow “Tasks 1 – 4” to manually update the sslTrust parameter value for the service registrations using the ls_update_certs.py script (available on the PSC appliance – a rough invocation is sketched after this list). Validating this here can save you lots of headache down the line.
    • Now Install vCenter & point at the PSC for SSO (VMCA will automatically allocate appropriate certificates)
    • Add ESXi hosts (VMCA will automatically allocate appropriate certificates)
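
As a rough illustration of that manual sslTrust update (only needed if the anchors don’t match), the invocation of ls_update_certs.py on the PSC appliance looks something like the below. The flag names are as I recall them from the relevant KB articles, and the fingerprint, paths and password are placeholders, so verify the exact syntax against KB 2109074 before running anything.

  # Run from the directory on the PSC appliance that contains ls_update_certs.py
  python ls_update_certs.py \
      --url https://psc01.lab.local/lookupservice/sdk \
      --fingerprint 13:1E:...:AF \
      --certfile /tmp/new_machine_ssl.crt \
      --user administrator@vsphere.local \
      --password 'VMware1!'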

Key System Requirements

  • ESXi system requirements
    • Physical components
      • Need a minimum of 2 CPU cores per host
      • HCL compatibility (CPUs released after Sept 2006 only)
      • NX/XD bit enabled in BIOS
      • Intel VT-x enabled
      • SATA disks will be considered remote (meaning, no scratch partition on SATA)
    • Booting
      • Booting from UEFI is supported
      • But no auto deploy or network booting with UEFI
    • Local Storage
      • Disks
        • Recommended for booting from local disk is 5.2GB (for VMFS and the 4GB scratch partition)
        • Supported minimum is 1GB
          • Scratch partition created on another local disk or RAMDISK (/tmp/ramdisk) – Not recommended to be left on ramdisk for performance & memory optimisation
      • USB / SD
        • Installer DOES NOT create scratch on these drives
        • Either creates the scratch partition on another local disk or ramdisk
        • 4GB or larger recommended (though min supported is 1GB)
          • Additional space used for the core dump
        • 16GB or larger is highly recommended
          • Prolongs the flash cell life
  • vCenter Server System Requirements
    • Windows version
      • Must be connected to a domain
      • Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
    • Appliance version
      • Virtual Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM

In the next post, we’ll look at the key deployment steps involved.

Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms


It looks like the guys at the Redmond campus have released a brand new licensing model for Windows Server 2016 (currently on Technical Preview 4, due to be released in 2016). I’ve had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately I cannot say I’m impressed with what I’ve seen (quite the opposite actually), and the new licensing is going to sting most customers, especially those that host private cloud or large VMware / Hyper-V clusters with high density servers.

What’s new (Licensing wise)?

Here are the 2 key licensing changes.

  1. From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will now be based on physical cores, per CPU
  2. A minimum of 16 core licenses (sold in packs of 2 cores, so a minimum of 8 packs to cover 16 cores) is required per physical server. This can cover either 2 processors with 8 cores each or a single processor with 16 cores in the server. Note that this is the minimum you can buy. If your server has additional cores, you need to buy additional licenses in packs of 2. So for a dual socket server with 12 cores in each socket (24 cores), you need 12 x 2-core Windows Server DC license packs + CALs.

The most obvious change is the announcement of core based Windows Server licensing. Yeah, you read it correctly…!! Microsoft is jumping on the increasing core count available in modern processors and trying to cash in on it by removing the socket based licensing approach that’s been in place for over a decade and introducing a core based license instead. And they don’t stop there…. One might expect that if they switch to a CPU core based licensing model, those with fewer cores per CPU socket (4 or 6) would benefit from it, right? Wrong….!!! By introducing a mandatory minimum number of cores you need to license per server (regardless of the actual physical core count available in each CPU of the server), they are also making you pay a guaranteed minimum licensing fee for every server (almost a guaranteed minimum income per server which, at worst, would be the same as the Windows Server 2012 licensing revenue based on CPU sockets).

Now, Microsoft has said that the cost of each license (covering 2 cores) would be priced at 1/8th the cost of a 2-processor license for the corresponding 2012 R2 edition. In my view, that’s just a deliberate smoke screen aimed at making it look like they are keeping the effective Windows Server 2016 licensing costs the same as they were on Windows Server 2012, but in reality that only holds for a small number of server configurations (servers with up to 8 cores per CPU, which hardly anyone uses anymore, as most new servers in the datacentre, especially those that would run some form of a hypervisor, typically use 10/12/16 core CPUs these days). See the below screenshot (taken from the Windows 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won’t.

Windows 2016 Server licensing cost comparison

 

The difference in cost to customers

Take the following scenario for example..

You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core CPUs (Intel E5-4667 or Intel E7-8860 range) per server. Let’s ignore the cost of CALs for the sake of simplicity (you need to buy CALs under the existing 2012 licensing too anyway) and take into account the list price of a Windows Server Datacenter license to compare the effect of the new 2016 licensing model on your cluster.

  • List price of Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
  • Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12 as above) = $6,155.00 / 8 = $769.37

The total cost to license 5 nodes in the hypervisor cluster for full VM migration (VMotion / Live migration) across all hosts would be as follows

  • Before (with Windows 2012 licensing) = $6,155.00 x 5 = $30,775.00
  • After (with Windows 2016 licensing) = $769.37 x 16 x 5 = $61,549.60

Now obviously these numbers are not important (they are just list prices; customers actually pay heavily discounted prices). But what is important is the scale of the price increase – the new cost is roughly 200% of (i.e. double) the current Microsoft licensing cost…. This is absurd in my view……!! The most absurd part of it is the fact that having to license every underlying CPU in every hypervisor host within the cluster with a Windows Server license (often a Datacenter license) under the current license model was already absurd enough anyway. Even though a VM will only ever run on a single host’s CPUs at any given time, Microsoft’s strict stance on the immobility of Windows licenses meant that any virtualisation / private cloud customer had to license all the CPUs in the underlying hypervisor cluster to run a single VM, which meant that allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was indirectly enforced by Microsoft, despite how absurd that was in this cloud day and age. And now they are effectively taxing you on the core count too?? That’s not far short of a daylight robbery scenario for those Microsoft customers.

FYI – given below is the approximate new Windows Server licensing cost, as a percentage of the current (2012 R2) cost, for any virtualisation / private cloud customer with more than 8 cores per CPU in a typical 5-server cluster where VM mobility through VMware vMotion or Hyper-V Live Migration across all the hosts is enabled as standard (the quick calculation sketched after this list shows where these ratios come from).

  • Dual CPU server with 10 cores per CPU = 125% of the current cost (a 25% increase)
  • Dual CPU server with 12 cores per CPU = 150% of the current cost (a 50% increase)
  • Dual CPU server with 14 cores per CPU = 175% of the current cost (a 75% increase)
  • Dual CPU server with 18 cores per CPU = 225% of the current cost (a 125% increase)
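
To show where those ratios come from, here is a scratch calculation in bash (using bc) based on the list prices above. The figures are list prices only and ignore CALs; tweak CORES_PER_CPU to reproduce the other rows.

  # Windows Server Datacenter list cost: 2012 R2 (per 2 sockets) vs 2016 (per 2-core pack) for a 5-node cluster
  NODES=5; CPUS=2; CORES_PER_CPU=16
  PRICE_2012_PER_2CPU=6155                                   # 2012 R2 DC list price, covers 2 sockets
  PACK_PRICE=$(echo "$PRICE_2012_PER_2CPU / 8" | bc -l)      # 2016 DC 2-core pack = 1/8th of the above

  PACKS_PER_NODE=$(( (CPUS * CORES_PER_CPU) / 2 ))           # one pack covers 2 cores
  COST_2012=$(echo "$PRICE_2012_PER_2CPU * $NODES" | bc -l)
  COST_2016=$(echo "$PACK_PRICE * $PACKS_PER_NODE * $NODES" | bc -l)

  echo "2012 R2 cluster cost: \$$COST_2012  /  2016 cluster cost: \$$COST_2016"
  echo "2016 cost as a percentage of 2012: $(echo "100 * $COST_2016 / $COST_2012" | bc -l)%"
  # With 16 cores per CPU this prints ~200%; 10 cores gives 125%, 12 gives 150%, 14 gives 175%, 18 gives 225%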

Now, this is based on today’s technology. No doubt the CPU core count is going to grow further and, with it, the price increase is only going to get more and more ridiculous.

My Take

It is pretty obvious what MS is attempting to achieve here. With the ever increasing core count in CPUs, 2-CPU server configurations are becoming (if not have already become) the norm for lots of datacentre deployments, and rather than be content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, they are now trying to benefit from every additional core that Moore’s law inevitably introduces on to the newer generations of CPUs. 12-core processors are already becoming the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming the standard (14, 16, 18 cores per socket are not rare anymore with the Intel Xeon E5 & E7 ranges, for example).

I think this is a shocking move from Microsoft and I cannot quite see any justifiable reason as to why they’ve done this, other than pure greed and complete and utter disregard for their customers… As much as I’ve loved Microsoft Windows as an easy to use platform of choice for application servers over the last 15 odd years, I, for one, will now be looking to advise my customers to strategically put in plans to move away from Windows, as it is going to be price prohibitive for most, especially if you are going to have an on-premise datacentre with some sort of virtualisation (which most do) going forward.

Many customers have successfully standardised their enterprise datacentre on the much cheaper LAMP stack (Linux platform) as the preferred guest OS of choice for their server & application stack already anyway. Typically, new start-ups (who don’t have the burden of legacy Windows apps) or large enterprises (with sufficient manpower with Linux skills) have managed to do this successfully so far, but I think if this expensive Windows Server licensing does stay on, lots of other folks who’ve traditionally been happy and comfortable with their legacy Windows knowledge (and therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to public cloud). If you retain your workload on-prem, Linux will naturally be the best choice available. For most enterprise customers, continuing to run their private cloud / own data centres using Windows servers / VMs on high capacity hypervisor nodes is going to be price prohibitive.

In my view, most current Microsoft Windows Server customers remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they've accumulated over the years; any attempt to move away would have been too complex, risky and time consuming. However, it has now come to a point where most customers are being forced to re-write their app stacks from the ground up anyway, due to the way public cloud systems work, and while they are at it, it makes sense to choose a less expensive OS stack for those apps and save a bucketload of unnecessary Windows Server licensing costs. So possibly the time is right to bite the bullet and get on with embracing Linux?

So, my advice for customers is as follows.

Tactical:

  1. Voice your displeasure at this new licensing model: Use all means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more collective noise we all make, the louder it will (hopefully) be heard by the powers that be at Microsoft.
  2. Get yourself into a Microsoft ELA for a reasonable length OR add Software Assurance (pronto): If you have an ELA, MS have said they will let people carry on buying per-processor licenses until the end of the ELA term. Essentially, that lets you lock yourself in under the current Server 2012 licensing terms for a reasonable length of time until you figure out what to do. Alternatively, if you have SA, at the end of the SA term MS will let you declare the total number of cores covered under the current per-CPU licensing and will grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to enquire about over-provisioning / over-buying your per-processor licenses along with SA now, for any known future requirements, in order to save costs.

Strategic:

  1. Put in a plan to move your entire workload onto public cloud: This is probably the easiest approach but not necessarily the smartest, especially if, given your requirements, you are better off hosting your own datacentre. Also, even if you plan to move to public cloud, there's no guarantee that any public cloud provider other than Microsoft Azure will remain commercially viable for running Windows workloads, should MS change the SPLA terms for 2016 too.
  2. Put in a plan to move away from Windows to a different, cheaper platform for your workload: This is probably the best and safest approach. Many customers will have evaluated this at some point in the past but shied away from it, as it is a big change and requires people with the right skills. Platforms like Linux have been enterprise ready for a long time now and there is a reasonable pool of skills in the market. And if your on-premise environment is standardised on Linux, you can easily port your applications over to many public cloud platforms too, which are typically much cheaper than running Windows VMs. You are then also able to deploy true cloud-native applications and benefit from the many open source tools and technologies that seem to be making a real difference to the efficiency of IT for businesses.

This article and the views expressed in it are mine alone.

Comments / Thoughts are welcome

Chan

P.S. This kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired and which VMware had to completely scrap. I hope enough customer pressure will cause Microsoft to back off too….

VMware VSAN Assessment Tool – VMware Infrastructure Planner (VIP)

VMware has released an assessment tool called VIP – VMware Infrastructure Planner – an appliance that a valid VMware partner can download and deploy in a customer environment in order to assess the suitability of VSAN based on actual data collected from the infrastructure. This post primarily looks at using the VIP appliance to assess the suitability of VSAN. The assessment is the precursor to a VSAN sizing exercise, during which the sizing data are automatically collected and analysed by VMware, and a final recommendation is made on the suitability of VSAN along with the recommended hardware configuration to be used for building it. Note that the same appliance can be used to assess the suitability of the vCloud Suite components in that environment, and I will publish a separate post on how to do that at a later date. The process of using the appliance to do a VSAN assessment involves the following high-level steps.

  1. A VMware employee or a valid channel partner will have access to the VIP portal (https://vip.vmware.com) – Note that the partner would need to sign up for an account, free of charge.
  2. Once logged in, the partner can create an assessment for a specific customer by providing some basic details (similar to the VMware capacity planner that was heavily used by VMware partners during early virtualisation days to assess virtualisation and consolidation use cases).
  3. Once the assessment is created, a unique ID for the assessment is generated on the portal.
  4. The VMware partner then adds the customer details, and the customer receives an email with a link to log in to the portal and download an .ova collector appliance (the partner can also download it).
  5. The customer or the partner then deploys the appliance into the customer's vSphere cluster (note that the appliance can be deployed on any vCenter Server / cluster, not necessarily the one being monitored, as long as the appliance has network access to the cluster being monitored, including the ESXi servers). An example command-line deployment is sketched after this list.
  6. Once the appliance is deployed, you can access it at https://<IP of the appliance> and complete a simple configuration.
    1. Enter the unique assessment key generated above. This ties the deployed appliance to the assessment ID online, so that the monitoring and analysis data are forwarded to the online portal under that assessment ID. You also get to determine how long the data collection should run for.
    2. It then prompts you to select either a VM migration to vCenter assessment or a full cluster migration assessment to vCenter (I’ve used the cluster migration for the below)
    3. Provide the address (FQDN) of the vCenter Server that the collector needs to be registered against in order to assess the VMs. This could be the same vCenter that manages the cluster where the appliance is deployed, or an external vCenter instance. A valid account needs to be provided to access the vCenter instance.
    4. During the vCenter registration process, a VIB is deployed to all attached ESXi hosts to enable the monitoring capability (no downtime required). Note the following:
      1. HTTP/S client ports (80,443) need to be open on the ESXi servers to be able to download the VIB.
      2. According to the deployment notes, “Histogram analysis and possibly tracefile analysis will be run on these VMs, which will degrade performance by about 5 to 10%, and the hosts will become momentarily unreachable, so be sure not to select VMs that are running very performance sensitive or real-time tasks.”
    5. Once complete, you'll be presented with a confirmation window similar to the below, which lists all the VMs in the cluster.
  7. Data collection from the VMs in the cluster, and forwarding to the online portal, will now begin. Once the data collection is complete, an email notification is sent. Note that all automated email notifications throughout the process go to both the customer's named contact and the VMware partner contact who set the assessment up within the portal. Given below is a screenshot of the portal once the data collection is completed.
    1. As you can see, it has automatically analysed the data and recommended a Hybrid VSAN with a 400MB SSD cache size (this is based on my lab, so the cache size is much smaller than what would be recommended in a production environment).
  8. Once the data collection is complete, the data can be fed directly to the VSAN sizer (https://vsantco.vmware.com/vsan/SI/SIEV) to size a potential VSAN solution, which is handy. All you need to do is click the button at the bottom that says “Go to VSAN TCO and Sizing Calculator”, which takes you to the sizing portal with the data automatically pre-filled for the sizer.
  9. If you then want to do a TCO comparison of VSAN vs a traditional hardware-based SAN, you can do so by clicking on the TCO inputs button and providing financial information.
  10. The sizing calculator then produces a simple TCO report outlining the cost of VSAN vs a traditional (hardware-based) SAN.
  11. I should mention that the above screenshots were based on the default TCO assumptions, which include default indicative pricing for various hardware SANs. I'd encourage you to talk to your reseller / storage vendor to have an independent assessment done using their tools, and then use the cost they provide for their SAN solution to update the VSAN OPEX assumptions (as shown below) to get an accurate comparison in these graphs.
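
As a reference point for step 5, the collector appliance can also be deployed from the command line with VMware's ovftool, in addition to the usual Web Client “Deploy OVF Template” wizard. The below is only a rough sketch: the .ova file name, appliance name, datastore, port group and the vi:// target path are all placeholders that you would replace with values from your own environment.

  # Deploy the VIP collector .ova to a cluster (all names below are examples only)
  ovftool --acceptAllEulas --name=vip-collector \
    --datastore=Datastore01 --diskMode=thin --network="VM Network" \
    ./vip-collector.ova \
    vi://administrator@vcenter01.lab.local/DC01/host/Cluster01

Once the appliance is deployed and powered on, you carry on with the configuration described in step 6.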

Pretty cool ain’t it?

Cheers

Chan

vSphere Troubleshooting Commands & Tools – Summary

I attended the vSphere Troubleshooting Workshop (5.5) this week at EMC Brentford (delivered by VMware Education) and found the course and the content covered to be a good refresher on some of the key vSphere troubleshooting commands and tools that I have often used when troubleshooting issues. And since they say sharing is caring (with the ulterior motive of documenting it all in one place for my own future reference too), I thought I would summarise the key commands and tools covered in the course, with some additional information, all in one place for ease of reference.

First of all, a brief intro to the course…

The vSphere Troubleshooting course is not really a course per se, but more a workshop consisting of two aspects.

  • 30% theory content – mostly quick reminders of each major component that makes up vSphere, their architecture, and what can possibly go wrong in their configuration and operational life.

 

  • 70% actual troubleshooting of vSphere issues – a large number of lab-based exercises where you have to troubleshoot a number of deliberately created issues (issues simulating real-life configuration issues; note that performance issues are NOT part of this course). Before each lab, there's a pre-configured PowerCLI script you need to run (provided by VMware) which deliberately breaks or misconfigures something in a functioning vSphere environment, and it is then your job to work out the root cause and fix it. Another PowerCLI script run at the end verifies that you've addressed the correct root cause and fixed it properly (as VMware intended).

A little more about the troubleshooting itself: during the lab exercises, you are encouraged to use any method necessary to fix the given issues, such as the command line, the GUI (Web Client) or VMware KB articles. I found the best approach was to stick to the command line where possible, which turned out to be a very good way of giving myself a refresher on all the various command-line tools and logs available within VMware vSphere that I don't get to use often in my normal day-to-day work. I attended this course primarily because it's supposed to aid preparation for the VMware VCAP-DCA certification, which I'm planning to take soon, and if you are planning the same, unless you are in a dedicated 2nd- or 3rd-line VMware support role where you are bound to know most of the commands by heart, I'd encourage you to attend this course too. It won't give you very many silver bullets when it comes to ordinary troubleshooting, but it makes you work over and over again with some of the command-line tools and logs you previously would have used only occasionally at best. (For example, I learnt a lot about the various uses of the esxcli command, which was really handy. Before the course, I was aware of esxcli and had used it a few times for a couple of tasks, but had never looked at the whole hierarchy and its application to troubleshooting and fixing various vSphere issues.)

It may also be important to mention that there’s a dedicated lab on setting up SSL Certificates for communication between all key vSphere components (a very tedious task by the way) which some may find quite useful.

So, the aim of this post is to summarise some key commands covered within the course, in an easy-to-read hierarchical format that you can use for troubleshooting VMware vSphere configuration issues, all in one place. (If you are an expert in vSphere troubleshooting, I'd advise taking a rain check on the rest of this post.)

The below commands can be run in the ESXi Shell, the vCLI, an SSH session or within the vMA (vSphere Management Assistant – I highly recommend that you deploy this and integrate it with your Active Directory).

  • Generic Commands Available (a few commonly used examples are shown at the end of this list)

    • vSphere Management Assistant appliance – Recommended, safest way to execute commands
      • vCLI commands
        • esxcli-* commands
          • Primary set of commands to be used for most ESXi host based operations
          • VMware online reference
            • esxcli device 
              • Lists descriptions of device commands.
            • esxcli esxcli
              • Lists descriptions of esxcli commands.
            • esxcli fcoe
              • FCOE (Fibre Channel over Ethernet) commands
            • esxcli graphics
              • Graphics commands
            • esxcli hardware
              • Hardware namespace. Used primarily for extracting information about the current system setup.
            • esxcli iscsi
              • iSCSI namespace for monitoring and managing hardware and software iSCSI.
            • esxcli network
              • Network namespace for managing virtual networking including virtual switches and VMkernel network interfaces.
            • esxcli sched
              • Manage the shared system-wide swap space.
            • esxcli software
              • Software namespace. Includes commands for managing and installing image profiles and VIBs.
            • esxcli storage
              • Includes core storage commands and other storage management commands.
            • esxcli system
              • System monitoring and management command.
            • esxcli vm
              • Namespace for listing virtual machines and shutting them down forcefully.
            • esxcli vsan
              • Namespace for VSAN management commands. See the vSphere Storage publication for details.
        • vicfg-* commands
          • Primarily used for managing Storage, Network and Host configuration
          • Can be run against ESXi systems or against a vCenter Server system.
          • If the ESXi system is in lockdown mode, run commands against the vCenter Server
          • Replaces most of the esxcfg-* commands. A direct comparison can be found here
          • VMware online reference
            • vicfg-advcfg
              • Performs advanced configuration including enabling and disabling CIM providers. Use this command as instructed by VMware.
            • vicfg-authconfig
              • Manages Active Directory authentication.
            • vicfg-cfgbackup
              • Backs up the configuration data of an ESXi system and restores previously saved configuration data.
            • vicfg-dns
              • Specifies an ESX/ESXi host’s DNS configuration.
            • vicfg-dumppart
              • Manages diagnostic partitions.
            • vicfg-hostops
              • Allows you to start, stop, and examine ESX/ESXi hosts and to instruct them to enter maintenance mode and exit from maintenance mode.
            • vicfg-ipsec
              • Supports setup of IPsec.
            • vicfg-iscsi
              • Manages iSCSI storage.
            • vicfg-module
              • Enables VMkernel options. Use this command with the options listed, or as instructed by VMware.
            • vicfg-mpath
              • Displays information about storage array paths and allows you to change a path’s state.
            • vicfg-mpath35
              • Configures multipath settings for Fibre Channel or iSCSI LUNs.
            • vicfg-nas
              • Manages NAS file systems.
            • vicfg-nics
              • Manages the ESX/ESXi host’s NICs (uplink adapters).
            • vicfg-ntp
              • Specifies the NTP (Network Time Protocol) server.
            • vicfg-rescan
              • Rescans the storage configuration.
            • vicfg-route
              • Lists or changes the ESX/ESXi host’s route entry (IP gateway).
            • vicfg-scsidevs
              • Finds available LUNs.
            • vicfg-snmp
              • Manages the Simple Network Management Protocol (SNMP) agent.
            • vicfg-syslog
              • Specifies the syslog server and the port to connect to that server for ESXi hosts.
            • vicfg-user
              • Creates, modifies, deletes, and lists local direct access users and groups of users.
            • vicfg-vmknic
              • Adds, deletes, and modifies virtual network adapters (VMkernel NICs).
            • vicfg-volume
              • Supports resignaturing a VMFS snapshot volume and mounting and unmounting the snapshot volume.
            • vicfg-vswitch
              • Adds or removes virtual switches or vNetwork Distributed Switches, or modifies switch settings.
        • vmware-cmd commands
          • Commands implemented in Perl that do not have a vicfg- prefix.
          • Performs virtual machine operations remotely including creating a snapshot, powering the virtual machine on or off, and getting information about the virtual machine.
          • VMware online reference
            • vmware-cmd <path to the .vmx file> <VM operations>
        • vmkfstools command
          • Creates and manipulates virtual disks, file systems, logical volumes, and physical storage devices on ESXi hosts.
          • VMware online reference
    • ESXi Shell / SSH
      • esxcli-* commandlets
        • Primary set of commands to be used for most ESXi host based operations
        • VMware online reference
          • (Same esxcli namespaces and descriptions as listed above under the vCLI commands section.)
      • esxcfg-* commands (deprecated but still work on ESXi 5.5)
        • VMware online reference
      • vmkfstools command
        • Creates and manipulates virtual disks, file systems, logical volumes, and physical storage devices on ESXi hosts.
        • VMware online reference
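
To make the esxcli hierarchy above a little more concrete, below are a few examples of the kind of commands I found myself using repeatedly during the labs. They are only a sketch – values such as the World ID, host names and VM paths are placeholders for your own environment – and they can be run from the ESXi Shell, over SSH, or via the vCLI / vMA with the appropriate connection options.

  # Host identity and build information
  esxcli system version get
  # Physical NICs and their link state
  esxcli network nic list
  # VMkernel interfaces and their IPv4 configuration
  esxcli network ip interface ipv4 get
  # Storage devices and the paths to them
  esxcli storage core device list
  esxcli storage core path list
  # Installed VIBs (useful after patching or driver installs)
  esxcli software vib list
  # List running VMs and, if needed, kill an unresponsive one by its World ID
  esxcli vm process list
  esxcli vm process kill --type=soft --world-id=<world id>
  # vmware-cmd (vCLI) example – query a VM's power state via vCenter
  vmware-cmd -H <vcenter> --vihost <esxi host> <path to .vmx> getstate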

 

  • Log File Locations (a few example commands for viewing these logs follow this list)

    • vCenter Log Files
      • Windows version
        • C:\Documents and settings\All users\Application Data\VMware\VMware VirtualCenter\Logs
        • C:\ProgramData\VMware\VMware VirtualCenter\Logs
      • Appliance version
        • /var/log
      • VMware KB for SSO log files
    • ESXi Server Logs
      • /var/log (Majority of ESXi log location)
      • /etc/vmware/vpxa/vpxa.cfg (vpxa/vCenter agent configuration file)
      • VMware KB for all ESXi log file locations
      • /etc/opt/vmware/fdm (FDM agent files for HA configuration)
    • Virtual Machine Logs
      • /vmfs/volumes/<directory name>/<VM name>/vmware.log (virtual machine log file)
      • /vmfs/volumes/<directory name>/<VM name>/<*.vmdk files> (Virtual machine descriptor files with references to CID numbers of itself and parent vmdk files if snapshots exists)
      • /vmfs/volumes/<directory name>/<VM name>/<*.vmx files> (Virtual machine configuration settings including pointers to vmdk files..etc>
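
In practice, the quickest way to consume these logs during troubleshooting is to tail or grep them directly from an ESXi Shell / SSH session. A few examples (the datastore and VM names below are placeholders for your own environment):

  # Follow the host management agent (hostd) and VMkernel logs live
  tail -f /var/log/hostd.log
  tail -f /var/log/vmkernel.log
  # vCenter agent (vpxa) log on the ESXi host
  tail -f /var/log/vpxa.log
  # Search a virtual machine's log for errors
  grep -i error /vmfs/volumes/datastore1/MyVM/vmware.log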

 

  • Networking commands (used to identify and fix network configuration issues – example usage follows this list)

    • Basic network troubleshooting commands
    • Physical Hardware Troubleshooting
      • lspci -p
    • Traffic capture commands
      • tcpdump-uw
        • Works with all versions of ESXi
        • Refer to VMware KB for additional information
      • pktcap-uw
        • Only works with ESXi 5.5
        • Refer to VMware KB for additional information
    • Telnet equivalent
      • nc command (netcat)
        • Used to verify that you can reach a certain port on a destination host (similar to telnet)
        • Run on the esxi shell or ssh
        • Example: nc -z <ip address of iSCSI server> 3260 – checks whether the iSCSI port can be reached from the ESXi host to the iSCSI server
        • VMware KB article
    • Network performance related commands
      • esxtop (ESXi Shell or SSH) & resxtop (vCLI) – press ‘n’ for the networking view
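
Pulling the above together, a typical network troubleshooting session on a host might look something like the below. The interface names, IP addresses and port are examples only:

  # Confirm VMkernel connectivity to a storage / vMotion target
  vmkping 192.168.10.20
  # Capture traffic on a VMkernel interface (works on all ESXi versions)
  tcpdump-uw -i vmk0 -n
  # Capture traffic on an uplink with pktcap-uw (ESXi 5.5 only)
  pktcap-uw --uplink vmnic0
  # Verify the iSCSI target port is reachable from the host
  nc -z 192.168.10.50 3260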

 

  • Storage Commands (used to identify & fix various storage issues – example usage follows this list)

    • Basic storage commands
    • VMFS metadata inconsistencies
      • voma command (VMware vSphere Ondisk Metadata Analyser)
        • Example: voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxx:y (where y is the partition number)
        • Refer to VMware KB article for additional information
    • disk space utilisation
      • df command
    • Storage performance related commands
      • esxtop (ESXi Shell or SSH) & resxtop (vCLI) – press ‘d’ for the disk adapter view, ‘u’ for disk devices and ‘v’ for per-VM disk statistics
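
As with networking, a few storage examples pulled from the commands above (the device and datastore names are placeholders, and voma should only be run against a quiesced/unmounted VMFS volume):

  # Check free space on mounted volumes / datastores
  df -h
  # List storage devices and the paths to them
  esxcli storage core device list
  esxcli storage core path list
  # Rescan all storage adapters after presenting new LUNs
  esxcli storage core adapter rescan --all
  # Check VMFS metadata consistency on a partition (y is the partition number)
  voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxx:y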

 

  • vCenter server commands (used to identify & fix vCenter, SSO, Inventory related issues)

    • Note that most of the commands available here are generic Windows commands that can be used to troubleshoot these issues, which I won't cover here. Only a few key VMware vSphere-specific commands are mentioned below instead.
    • SSO
      • ssocli command (C:\Program Files\VMware\Infrastructure\SSOServer\utils\ssocli)
    • vCenter
      • vpxd.exe command (C:\Program Files\VMware\Infrastructure\VirtualCenter Server\vpxd.exe)
      • vpxd

 

  • Virtual Machine related commands (used to identify & fix VM related issues)
    • Generic VM commands
      • vmware-cmd commands (vCLI only)
      • vmkfstools command
    • File locking issues (a worked sequence is shown after this list)
      • touch command
      • vmkfstools -D command
        • Example: vmkfstools -D /vmfs/volumes/<directory name>/<VM name>/<VM name.vmdk> (shows the MAC address of the ESXi server holding the file lock; if it's locked by the same ESXi server the command was run on, ‘000000000000’ is shown)
      • lsof command (identifies the process locking the file)
        • Example: lsof | grep <name of the locked file>
      • kill command (kills the process)
        • Example: kill <PID>
      • md5sum command (used to calculate file checksums)
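
Putting the file-locking commands together, a typical investigation on a host might run as follows. The datastore, VM name and PID are examples only, and kill should only be used once you are sure a graceful shutdown of the offending process isn't possible:

  # Does the file respond to a simple touch? (fails or hangs if locked elsewhere)
  touch /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk
  # Identify the MAC address of the host that owns the lock
  vmkfstools -D /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk
  # On the owning host, find the process holding the file open
  lsof | grep MyVM-flat.vmdk
  # Kill that process by its PID
  kill 123456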

 

Please note that neither this post (nor the vSphere Troubleshooting course) covers every single command available for troubleshooting the different vSphere components; it covers only a key subset of the commands that are usually needed 90% of the time. Hopefully, having them all in one place within this post will be handy when you need to look them up. I've provided direct links to the VMware online documentation for each command above so you can delve further into each one.

Good luck with your troubleshooting work..!!

Command line rules….!!

Cheers

Chan