New workstation build

Been a while since I’ve posted here, for various reasons. But it’s time to break that silence with a quick post on the latest workstation build I’ve put together.

Hopefully this will be of some help to anyone also looking to upgrade or build a new workstation / gaming rig. (I don’t do games myself as it’s a waste of valuable time IMHO, so it’s very much a workstation, but you could absolutely play games on it too.)

The idea was to replace my old custom-built workstation, basically an Intel Core i7 6700K 4 GHz beast with 24 GB of RAM, with something with a few more CPU threads, so I could run a full lab using type 2 virtualisation while also using the host PC with Windows 10 as a workstation. As good as the previous PC was, it just didn’t have the raw CPU power to run 10+ VMs and have a few hundred apps open on the host OS at the same time, along with 30-40 Chrome tabs. (I know, I know, a very first world problem 🤷‍♂️. A problem nevertheless.)

So, after some research, I decided to dump Intel and move to AMD for the first time due to the 7nm goodness available in the AMD Threadripper processor family. In order to fully utilise that horsepower, I’ve gone with a large enough pool of DDR4 memory (with plenty of room for upgrade thanks to the use of 32 GB DIMMs). I’ve also added one of the fastest, if not the fastest, SSD storage modules available in the Sabrent Rocket NVMe PCIe 4.0 M.2 drive.

Some Unboxing to do

Given below is the full spec:

  • AMD Ryzen Threadripper 3960X processor (24C/48T, 128 MB cache, 4.5 GHz boost)
  • 2 x Corsair Vengeance RGB Pro 64 GB (2 x 32 GB) DDR4 3200 (PC4-25600) C16 desktop memory, black – so 128 GB in total
  • Sabrent 1 TB Rocket NVMe PCIe 4.0 M.2 2280 internal SSD
  • MSI TRX40 PRO 10G motherboard (ATX, TRX40, sTRX4, DDR4, dual LAN, 10G LAN card, USB 3.2 Gen2, Type-C, M.2 XPANDER-Z Gen4 card, 3rd Gen AMD Ryzen Threadripper). This motherboard comes in different specs and I went with the 10GbE option as I intend to upgrade the network in the house to full 10GbE at some point.
  • Noctua NH-U14S TR4-SP3, premium-grade CPU cooler for AMD sTRX4/TR4/SP3 (140 mm, brown)
  • Corsair CC-9011098-WW 3 x 120 mm Crystal Series 570X RGB mid-tower chassis, black
  • Gigabyte Aorus GeForce GTX 1660 OC 6G
  • Samsung C49RG90SSU (CRG9 Series) curved 49″ Dual Quad HD QLED monitor
  • Plus 2 x 26″ HD monitors I re-used on top of the main curved monitor

There you have it. It is super quick and super powerful. Even with some serious load on it, I have not yet managed to push beyond 40% CPU or memory utilisation, so I’m pretty sure it has future-proofed me for a while. It wasn’t cheap, but as an investment in the primary workstation that I do all my work on, the TCO over 10 years, relative to the increased productivity I can achieve, is very good and manageable.

The full setup

Let me know if this is of any help and if you’ve managed to build anything similar or better.

Prevent & Recover from Ransomware attacks with Veeam

I’ve been watching the ongoing issue affecting Travelex and the associated ransomware infection, and thought of putting together a short post outlining some valuable collateral that shows how customers can use Veeam data protection and management tools to help detect, prevent and recover from such attacks.

The Backdrop

Travelex, an international foreign exchange company, is currently offline due to a ransomware infection and is being held to ransom to the tune of $6m. They are thought to have been infected by the “Sodinokibi” ransomware; more details can be found here. This downtime appears to be forcing Travelex offices to revert to paper-based transactions, but it is also affecting a large number of partners such as banks and retail stores, all of whom rely on Travelex systems for foreign currency transactions, according to this article from the Mirror.

The threat

The ransomware threat isn’t new, and it has had a notable impact on many organisations in the past. Anyone remember when the WannaCry ransomware infected a large number of computers in the UK’s National Health Service, resulting in some significant public health care issues?

Given the increasing digitisation of organisations, with more and more inter-connectivity and dependence on digital information for their vital business operations, this threat is significantly higher today than it was yesterday.

The impact of a ransomware attack can span multiple dimensions, from reputational damage to the cost of downtime (lost business) to the cost of recovery and clean-up. Depending on the size of the organisation, these can quite easily push the total cost of damage into millions of dollars or pounds.

Typical attack vectors include direct ransomware files, infected drives and infected URLs visited by unsuspecting users within the organisation.

Prevention

Prevention is the best form of remediation, they say. When it comes to threats like ransomware, prevention is always preferred given the complexity of remediation and recovery. Defence in depth, a layered defence system consisting of tools (such as monitoring & analytics tools and security solutions) and well-implemented organisational policies and processes (role-based access, two-factor authentication, periodic account access audits, etc.), is one of the best forms of prevention against ransomware infections.

Remediation

It is imperative for every organisation today to have a rock-solid data management & data protection solution in place to recover from incidents like these. Backup and data protection is often overlooked as an afterthought, but once confronted by a situation like the one facing Travelex today, the importance of these measures is realised quite quickly. A well-thought-out data protection strategy conforming to the 3-2-1 rule (3 copies of data, stored on 2 separate forms of media such as disk/object storage/tape, with 1 copy being offsite, such as another DC in a different geography, the cloud or a service provider) is absolutely mandatory for any organisation looking to protect its most important asset today: its data.
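As a toy illustration of the 3-2-1 rule (not a feature of any product; the data structure and function name here are entirely invented), checking whether a backup plan satisfies the rule is simple to express:

```python
# Check a backup plan against the 3-2-1 rule described above:
# at least 3 copies, on at least 2 distinct media types, with at
# least 1 copy held offsite. Purely illustrative structure.

def satisfies_3_2_1(copies):
    """copies: list of dicts like {"media": "disk", "offsite": False}."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    has_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and has_offsite

plan = [
    {"media": "disk", "offsite": False},   # primary backup repository
    {"media": "tape", "offsite": False},   # secondary copy on tape
    {"media": "object", "offsite": True},  # copy in a cloud object store
]
print(satisfies_3_2_1(plan))  # True: 3 copies, 3 media types, 1 offsite
```

A plan with only a single on-site disk copy would fail all three checks, which is exactly the situation that leaves an organisation exposed.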

Veeam & Ransomware

As a leader in data management, both in the cloud and in the data centre, Veeam has a number of tools and solutions that can help customers achieve both prevention and remediation of a ransomware attack. The Veeam ONE monitoring solution, for example, can track and alert on unusual activity in the data centre, such as high CPU usage and an increased data change rate (both common signs of active ransomware encryption in play), helping vigilant administrators notice in-flight infections and take preventive measures.
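The change-rate heuristic mentioned above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not Veeam ONE’s actual algorithm; the function name, sample data and threshold are invented:

```python
# Hypothetical sketch of a change-rate anomaly check: flag a VM whose
# incremental data change rate suddenly far exceeds its recent baseline,
# a common sign of in-flight mass encryption by ransomware.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` (GB changed today) if it is more than `threshold`
    standard deviations above the mean of the recent `history` samples."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    # Floor sigma so a near-constant history doesn't make the check over-sensitive.
    return latest > mu + threshold * max(sigma, 0.1)

daily_change_gb = [2.1, 1.8, 2.4, 2.0, 2.2]  # normal daily change rate
print(is_anomalous(daily_change_gb, 2.3))    # False: within normal range
print(is_anomalous(daily_change_gb, 45.0))   # True: likely mass encryption
```

A real monitoring product would of course track this per VM over rolling windows and correlate it with CPU spikes, but the principle is the same.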

A well-architected Veeam Backup & Replication solution that utilises disk-based snapshots from Veeam alliance partners such as NetApp, Pure Storage, HPE, etc. as part of its backup strategy can immediately revert to immutable snapshots during recovery, with near-zero RTO & RPO, to resume business operations from an uninfected set of data. Furthermore, Veeam Backup & Replication can integrate with immutable object storage providers, such as those leveraging the S3 Object Lock capability (e.g. the Cloudian HyperStore solution), to ensure that your backup data itself is not vulnerable to ransomware-induced malicious encryption. Some ransomware also targets specific files for encryption rather than the full disk, and Veeam file-level recovery can help customers quickly restore the relevant files and resume business operations. While these are a few key highlights, there are many more capabilities in the Veeam data protection suite that customers can harness to combat the threat of ransomware.
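For illustration, this is the shape of an S3 Object Lock configuration that makes every new object version immutable for a retention window; the retention period is an arbitrary example, and in practice the backup software or the object storage vendor manages this for the backup repository rather than you crafting it by hand. Such a document could be applied with, e.g., `aws s3api put-object-lock-configuration --bucket <bucket> --object-lock-configuration file://lock.json` on a bucket created with Object Lock enabled:

```python
# Build an S3 Object Lock configuration enforcing COMPLIANCE-mode
# retention (objects cannot be deleted or overwritten by anyone,
# including root, until the retention period expires).
import json

def object_lock_config(days):
    """Return an Object Lock configuration applying a default
    COMPLIANCE-mode retention of `days` days to new object versions."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {"Mode": "COMPLIANCE", "Days": days}
        },
    }

print(json.dumps(object_lock_config(30), indent=2))
```

COMPLIANCE mode (as opposed to GOVERNANCE mode) is what gives backups their ransomware resilience: even an attacker who steals administrator credentials cannot shorten the retention or delete the locked backup objects.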

Below is a handy list of collateral that can help customers understand how Veeam data management solutions can be leveraged to prevent and recover from ransomware infections:

And another handy post by one of the Veeam Vanguards here outlines how to benefit from Veeam’s Cloud Tier immutability (use the Chrome translator to translate from Italian to English).

Are you going to wait until something similar happens to your organisation, or would you rather proactively put the measures in place now to handle the very real threat of ransomware? Perhaps it’s time to include a ransomware recovery scenario in your annual Disaster Recovery and Business Continuity (DR & BC) test plan after all!

 

VMware vExpert 2019 & vSAN vExpert 2019

 

The latest batch of VMware vExperts for 2019 has just been announced and I’m glad to say I’ve made the cut for the 5th year running, which was fantastic news personally.

The vExpert programme is VMware’s global evangelism and advocacy programme and is held in high regard within the community due to the expertise of the selected vExperts and their contribution towards enabling and empowering customers around the world in their software-defined hybrid cloud technology adoption through knowledge sharing.

The candidates are judged on their contribution to the community through activities such as community blogs, personal blogs, participation in events, producing tools, etc., and in general on maintaining their expertise in related subject matters. vExperts typically get access to private betas, free licences, early access product briefings & roadmap sessions, exclusive events, free access to VMworld conference materials, and other opportunities to directly interact with VMware product teams, which is totally awesome and in return helps us feed the information back to our customers.

It’s been a great honour to have been recognised by VMware again with this prestigious title, as well as the vExpert vSAN title, in 2019. I’d like to thank VMware and congratulate the fellow vExperts who have also made it this year. Keep up the good work spreading the knowledge!

The full list of VMware vExperts 2019 & the specialist vExperts can be found below.

My vExpert profile link is below

Cheers

Chan

NetApp Data Fabric: A la Hybrid Cloud! – An update from NetApp Insight 2018


History

For those of you who have genuinely been following NetApp over the years, you may already know that NetApp, contrary to the popular perception of it as a storage company, has always been a software company at its core. Unlike most of its competitors back in the day, such as EMC or even HPE, who were focused primarily on raw hardware capabilities and purpose-built storage offerings specific to each use case, NetApp always had a single storage solution (the FAS platform) with fit-for-purpose hardware. Its real strength, however, was the piece of software developed on top (Data ONTAP), which offered so many different data services that the competition would often need two or three separate solutions to achieve the same. That software-driven innovation kept them punching well beyond their weight, in the same league as their much bigger competitors.

Over the last few years, however, NetApp did expand its storage offerings to include some additional purpose-built storage solutions, out of necessity, to address many niche customer use cases. They built the E-Series for raw performance use cases with minimal data services, the EF-Series for extreme all-flash performance, and acquired the SolidFire offering, which was also a very software-driven, scalable storage solution built on commodity hardware. The key to most of these offerings was still the software-defined storage & data management capabilities of each platform, and the integration of them all through software technologies such as SnapMirror and SnapVault to move data seamlessly between these various platforms.

In an increasingly software-defined world (public & private clouds all powered primarily through software), the model of leading with software-defined data storage and data management services enables many additional possibilities for NetApp to expand beyond just these data centre solutions, as it turned out.

NetApp Data Fabric

NetApp Data Fabric is the vision NetApp set out a while ago: an extension of ONTAP & various other software-centric storage capabilities beyond customer data centres into other compute platforms such as public clouds and 3rd-party colo facilities.

The idea was that customers can seamlessly move data across all these infrastructure platforms as and when needed, without having to modify (think “convert”) the data. NetApp’s Data Fabric, at its core, aims to address the data mobility problem caused by platform locking of data, by providing a common layer of core NetApp technologies to host data across all those tiers in a similar manner. In addition, it also aims to provide a common set of tools that can be used to manage that data, on any platform, throughout its lifetime: from the initial creation of data at the Edge location, to processing the data at the Core (DC) and/or on various cloud platforms, to long-term storage & archival on the Core and/or public cloud platforms. In a way, this provides customers with platform neutrality when it comes to their data, which, let’s admit it, is the life blood of most digital (that means all) businesses of today.

New NetApp Data Fabric

Insight 2018 showcased how NetApp managed to extend the initial scope of their Data Fabric vision beyond the Hybrid Cloud to new platforms such as Edge locations too, connecting customers’ data across Edge to Core (DC) to Cloud platforms and providing data portability. In addition, NetApp also launched a number of new data services to help manage and monitor this data as it moves from one pillar to another across the Data Fabric. NetApp CEO George Kurian described this new Data Fabric as a way of “Simplifying and integrating orchestration of data services across the Hybrid Cloud providing data visibility, protection and control amongst other features”. In a way, it’s very similar to VMware’s “Any App, Any Device, Any Cloud” vision, but in the case of NetApp the focus is all about the data & data services.

The new NetApp Data Fabric consists of the following key data storage components at each of its pillars.

NetApp Hybrid Cloud Data Storage
  • Private data center
    • NetApp FAS / SolidFire / E / EF / StorageGRID series storage platforms & the AltaVault backup appliance. Most of these components now integrate directly with public cloud platforms.
  • Public cloud
    • NetApp Cloud Volumes – SaaS solution that provides file services (NFS & SMB) in the cloud using a NetApp FAS xxxx SAN/NAS array running Data ONTAP that is tightly integrated with the native cloud platform.
    • Azure NetApp Files – PaaS solution running on physical NetApp FAS storage in Azure DCs. Directly integrated into Azure Resource Manager for native storage provisioning and management.
    • Cloud Volumes ONTAP – NetApp ONTAP virtual appliance that runs the same ONTAP code in the cloud. Can be used for production workloads, DR, file shares and DB storage, same as on-premises. Includes cloud tiering and Trident container support, as well as SnapLock.
  • Co-lo (adjacent to public clouds)
    • NetApp Private Storage – Dedicated, physical NetApp FAS (ONTAP) or FlexArray storage solution owned by the customer, physically adjacent to major cloud platform infrastructures. The storage unit is hosted in an Equinix data center with a direct, low-latency 10GbE link to Azure, AWS and GCP cloud back ends. Workloads such as VMs and applications deployed in the native cloud platform can consume data directly over this low-latency link.
  • Edge locations
    • NetApp HCI – Recently repositioned as a “Hybrid Cloud Infrastructure” rather than a “Hyper-Converged Infrastructure”, this solution provides a native NetApp compute + storage solution that is tightly integrated with some of the key data services & monitoring and management solutions from the Data Fabric (described below).

Data Fabric + NetApp Cloud Services

While the core storage infrastructure components of the Data Fabric enable data mobility without the need to transform data across each hop, customers still need tools to provision, manage and monitor this data on each pillar of the Data Fabric. Furthermore, customers would also need to use these tools to manage data across non-NetApp platforms that are also linked to the Data Fabric storage pillars described above (such as native cloud platforms).

Insight 2018 (US) revealed the launch of some of these brand new data services & tools from NetApp, most of which are actually SaaS solutions hosted and managed by NetApp themselves on a cloud platform. While some of these services are fully live and GA, not all of them are live just yet, but customers can trial them all for free today.

Given below is a full list of the announced NetApp Cloud services, which fall into two categories. By design, these are tightly integrated with all the data storage pillars of the NetApp Data Fabric, as well as with 3rd-party storage and compute platforms such as AWS, Azure and 3rd-party data centre components.

NetApp Hybrid Cloud Data Services (New)

  • NetApp OnCommand Cloud Manager – Deploy and manage Cloud Volumes ONTAP, as well as discover and provision on-premises ONTAP clusters. Available as SaaS or as on-premises software.
  • NetApp Cloud Sync – A NetApp SaaS offering that enables easier, automated data migration & synchronisation between NetApp and non-NetApp storage platforms across the hybrid cloud. Currently supports syncing data across AWS (S3, EFS), Azure (Blob), GCP (storage buckets), IBM (object storage) and NetApp StorageGRID.
  • NetApp Cloud Secure – A NetApp SaaS security tool that aims to identify malicious data access across all hybrid cloud storage solutions. Connects to various storage back ends via a data collector and supports NetApp Cloud Volumes, ONTAP, StorageGRID, Microsoft OneDrive, AWS, Google G Suite, HPE Command View, Dropbox, Box, Workplace and Office 365 as end points to be monitored. Not live yet; more details here.
  • NetApp Cloud Tiering – Based on ONTAP FabricPools, enables direct tiering of infrequently used data from an ONTAP solution (on-premises or in the cloud) seamlessly to Azure Blob, AWS S3 and IBM Cloud Object Storage. Not a live solution just yet, but a technical preview is available.
  • NetApp SaaS Backup – A NetApp SaaS backup solution for backing up Office 365 (Exchange Online, SharePoint Online, OneDrive for Business, MS Teams and O365 Groups) as well as Salesforce data. Formerly known as NetApp Cloud Control. Can back up data to native storage or to Azure Blob or AWS S3. Additional info here.
  • NetApp Cloud Backup – Another NetApp SaaS offering, purpose-built for backing up NetApp Cloud Volumes (described above).

NetApp Cloud Management & Monitoring (New)

  • NetApp Kubernetes Service – New NetApp SaaS offering providing enterprise Kubernetes as a service. Built around the NetApp acquisition of Stackpoint. Integrated with other NetApp Data Fabric components (NetApp’s own solutions) as well as public cloud platforms (Azure, AWS and GCP) to enable container orchestration across the board. Integrates with NetApp Trident for persistent storage volumes.
  • NetApp Cloud Insights – Another NetApp SaaS offering, built around ActiveIQ, that provides a single monitoring tool for visibility across the hybrid cloud and Data Fabric components. Uses AI & ML for predictive analytics, proactive failure prevention and dynamic topology mapping, and can also be used for resource rightsizing and troubleshooting with infrastructure correlation capabilities.

My thoughts

In the world of Hybrid Cloud, customer data, from VMs to file data, can now be stored in various different ways across various data centres, Edge locations and public cloud platforms, all underpinned by different sets of technologies. This presents an inevitable problem for customers, where their data requires transformation each time it gets moved or copied from one pillar to another (known as platform locking of data). This also means it is difficult to seamlessly move that data across those platforms during its lifetime, should you want to benefit from every pillar of the Hybrid Cloud and the different benefits inherent to each. NetApp’s new strategy, powered by a common software layer to store, move and manage customer data seamlessly across all these platforms, can resonate well with customers. By continuing to focus on the customer’s data, NetApp are focusing on the most important asset that organisations of today, and most definitely organisations of tomorrow, have. Enabling their customers to avoid unnecessary hurdles in moving this asset from one platform to another is only going to go down well with enterprise customers.

This strategy is very similar, for example, to VMware’s (Any App, Any Device, Any Cloud), which aims to address the same problem, albeit from a more application-centric perspective. To their credit, NetApp is the only “legacy storage vendor” with this all-encompassing strategy of a common data storage layer across the full hybrid cloud spectrum, whereas most of their competition are still focused on their data centre solutions, with limited or minor integration with the cloud, at best through extending backup and DR capabilities.

Only time will tell how successful this strategy will be for NetApp, and I suspect most of that success or failure will rely on its continued execution: building additional data and data management services and positioning them to address various Hybrid Cloud use cases. But the initial feedback from customers appears to be positive, which is good to see. Being focused on software innovation has always provided NetApp with an edge over their competitors, and continuing that strategy, especially in an increasingly software-defined world, is only bound to bring good things in my view.

Slide credit to NetApp & Tech Field Day!

Continuation of Any Cloud, Any Device & Any App strategy – An update from VMworld 2018 Europe

The beginning

As an avid technologist, I’ve always had a thing for disruptive technologies, especially those that are not just cool tech but also provide genuine business benefits. Some of these benefits are obvious at first, but some are often not even anticipated until after a technology innovation has been achieved.

VMware’s inception, through the emulation of x86 computing components within software, was one of those moments where the power of software-driven computing started a whole new shift in the IT industry. In an age of hardware-centric IT, this software-defined computing technology paved the way to genuine cost savings through the consolidation of multiple servers into a handful of servers instead. For me, back then a lowly server engineer, the introduction of this technology was one of those “goose bump” moments, especially when I thought about the possibilities of where this innovation could take us going forward, once extended beyond just computing.

Fast forward about 12 years, and the software-defined capabilities extended beyond compute into storage and networking too, paving the way for brand new possibilities such as cloud computing. Recognising the commoditisation of this software-defined approach by various other vendors, VMware strategically changed direction to focus on building tools and solutions that give customers the choice to run any application, on any cloud platform, accessible from any end user device (PC & mobile). This strategy was launched back in 2015 and I’ve blogged about it here.

Continuation of a solid strategy

Following on from vSphere, vSAN and NSX as pillars of the core software-defined data center (SDDC), the last couple of years showed how this vision from VMware was coming into reality through the launch of various new solutions as well as the modernisation of existing ones. IBM Cloud (based on SDDC) & VMware Cloud on AWS (based on SDDC) were launched to harness cloud computing capabilities for customers without having to re-platform their workloads, saving transformation costs. Along with over 2000 VMware Cloud Provider partner platforms (built on SDDC), all of whom run these very same technologies underneath their cloud platforms, this common architecture enabled customers to move their workloads from on-premises to any of these platforms relatively easily. The introduction of technologies such as VMware HCX last year made it even easier, through one-click migration of these workloads as well as the ability to move a running workload onto a cloud platform with zero downtime (Cloud Motion).

In addition to the core infrastructure components, the existing infrastructure management and monitoring toolset deployed on-premises (the vRealize suite) was also revamped over the last few years so that it can manage and monitor environments across all these cloud platforms. The vRealize suite is now one of the best cloud management platforms, able to provision workloads on-premises & on native cloud platforms such as AWS and Azure, providing a single pane of glass.

NSX capabilities were also extended to cloud platforms, effectively bringing them closer to on-premises data centers via network adjacency and providing customers with easy migration and fall-back choices while maintaining networking integrity across both platforms. With these updates, the vision of “Any Cloud” became more of a reality, though most of the use cases were limited to IaaS capabilities across the cloud platforms.

During the last year, VMware also launched a number of fully managed, born-in-the-cloud SaaS applications under the category of VMware Cloud Services (v1.0), aimed at extending these “Any Cloud” capabilities to cover non-IaaS platforms. These SaaS offerings enabled the ability to provision, manage and run cloud-native workloads on non-vSphere-based cloud platforms such as Azure and native AWS. They extended the “Any Cloud” capabilities right into various PaaS platforms too, enabling better value for customers. A list of these new solutions and updates is on my previous post here.

The last few years also showed us how VMware intends to achieve the “Any Device” vision through the Workspace ONE platform & AirWatch. Incremental feature upgrades ensured support for a wide array of end user computing and mobile devices to consume various enterprise IT services in a consistent, secure manner, regardless of where the applications & data are hosted (on-premises or cloud). These updates include support for key non-vSphere-based cloud platforms and even competing technologies such as Citrix, giving customers plenty of choice to use any device they like to access applications hosted via all major avenues such as Mobile / PC / VDI / Citrix / Microsoft RDS.

The “Any App” vision of enabling customers to deploy and run any application was all about providing support for traditional (VM-based) apps, micro-services-based apps (containers) and SaaS apps. A partnership with Google was formed for the implementation, and new products such as PKS were launched to provision, manage and run container workloads via an enterprise-grade Kubernetes platform, both on-premises and on cloud platforms, making the “Any App” strategy also a reality.

Update in 2018!

2018’s VMworld (Europe) messaging was very much an incremental continuation of this same multi-platform, multi app and multi device strategy, adding additional capabilities for core use cases. Some of the new updates also showed how VMware are also adding new use cases such as Edge computing and IoT solutions in to the mix.

Some of the key updates to note from VMworld 2018 include,

  • Heptio acquisition:    To strengthen VMware’s Kubernetes platform offerings (complements the on-premises-focused PKS as well as the SaaS offering for managed Kubernetes, VKE)
  • VMware Cloud PKS:    PKS as a Service (managed by VMware) on AWS with support coming for VMware Cloud on AWS, Azure, GCP and vSphere
  • Project Dimension:    Fully managed VMware Cloud Foundation solution for on-premises with Hybrid Cloud control plane. Beta announced!
  • Launch of VCF 3.5:    Latest version of Cloud Foundation with incremental updates and cloud integration via HCX.
  • CloudHealth in VCS:    Integration of the recently acquired CloudHealth into the VMware Cloud Services (SaaS) portfolio, which now extends cloud platform cost monitoring and resource management as a SaaS offering with better cloud scalability than vROps
  • Pulse IoT center aaS:    IoT Infrastructure management solution previously available as an on-premises solution now available as a service. Beta announced!
  • New SaaS solutions:    Additional solutions are announced such as Cloud Assembly (vRA aaS), Service broker & Code stream to enhance DevOps app delivery & management.
  • VMware Blockchain:    Enterprise blockchain service inherently more secure than public blockchain that is integrated to underlying VMware tools and technologies for enterprises to consume.

Amongst these, there were also other minor incremental updates to existing tools and solutions: vRealize suite 2018, Log Intelligence, Wavefront updates to provide application telemetry data (similar to AppDynamics) from container-based deployments, vSphere & vSAN incremental updates, the availability of the vSphere Platinum edition (with bundled-in AppDefense) that learns (good app behaviour), locks (the state in) and adapts security (based on changes to the application), adaptive micro-segmentation via integrating NSX & AppDefense, increased geo-availability of VMware Cloud on AWS (Ireland, Tokyo, N. California, Ohio, GovCloud West), and the availability of AWS RDS on vSphere on-premises, to name a few.

In addition to the above, building on the previously established Any Cloud, Any Device & Any App strategy, VMware are also embracing different target markets such as telco clouds, offering industry-specific solutions through the use of their VeloCloud technologies in preparation for the 5G revolution that is imminent in the industry. Large telcos such as Vodafone are helping VMware co-engineer and test these solutions to ensure their business relevance.

So all in all, there weren’t any attention-grabbing headline announcements at this year’s VMworld; the focus was rather on providing evidence of the execution of the strategy set back in 2015/2016. VMware’s increasing pivot to cloud-based solutions is becoming more and more obvious, as almost all the net new products and solutions announced at the 2017 and 2018 VMworlds are SaaS offerings managed by VMware. This is a powerful message, and customers seem to be taking note too, if the record-breaking 12,000 attendees of VMworld 2018 Europe are anything to go by.

As I mentioned at the beginning of this post, as these technology updates and new innovations continue, no doubt additional use cases will be realised, and associated business requirements previously not envisioned will be established. In an age where rapid advancements in technology often drive new business requirements retrospectively, I like how VMware are pushing ahead with a coherent technology strategy focused on providing customers the choice to benefit from innovations across these technology platforms.

Tech Field Day 17

This post was republished to ChansBlog at 19:48:55 12/10/2018

Having attended Storage Field Day 15 back in March, I’ve been lucky enough to be invited to attend not only Tech Field Day 17 but also Tech Field Day Extra at NetApp Insight 2018 (US) this month. This post is a quick intro to the event and the schedule ahead.

Watch LIVE!

Below is the live streaming link to the event on the day, if you’d like to join us live. While the time difference might make it a little tricky for some, it is well worth taking part, as all viewers will also have the chance to ask the vendors questions live, just like the delegates on site. Just do it, you won’t be disappointed!

TFD – Quick Introduction!

Tech Field Day is an invitation-only series of events organised and hosted by Gestalt IT (GestaltIT.com) to bring together innovative technology solutions from various vendors (the “sponsors”), who present their solutions to a room full of independent technology bloggers and thought leaders (the “delegates”), chosen from around the world based on their knowledge, community profile and thought leadership, in order to get their independent thoughts (good or bad) on the said solutions. The event is also streamed live worldwide for anyone to tune in to, and is often used by technology start-ups to announce their arrival to the mainstream market. It is run by chief organiser Stephen Foskett (@Sfoskett) and has always been extremely popular amongst vendors, as it provides an ideal opportunity for them to present their new products and solutions, and for start-ups coming out of stealth to announce their wares to the world. It is equally popular amongst the attending delegates, who get the opportunity not only to witness brand new technology at times, but also to critique it and express their valuable feedback in front of these vendors.

TFD17 – Schedule & Vendor line-up

TFD17 is due to take place in Silicon Valley between the 17th and 19th of October 2018. The planned vendor line-up and timings are as follows.

Wednesday the 17th of October

1pm-3pm (9-11pm UK time)

Thursday the 18th of October

8am-10am (4-6pm UK time) 11am-1pm (7-9pm UK time) 3-5pm (11pm-1am* UK time)

Friday 19th of October

11am-1pm (7-9pm UK time)

TFD Extra – Schedule TBC (NetApp Insight 2018 US)

  • Monday the 22nd of October:
    • NetApp Insight general events
  • Tuesday the 23rd of October:
    • 8:30-10am Vegas time / 4:30-6pm UK time : General session keynote
    • Morning: Analysts summit general session
    • Afternoon: TFD Extra session
  • Wednesday the 24th of October:
    • 8:30-10am Vegas time / 4:30-6pm UK time : General session
    • Morning: TFD Extra session
    • Afternoon: TFD Extra session

Previous Field Day event Posts

I learnt a lot during my SFD15 participation earlier this year, about the storage industry in general as well as about the direction of a number of storage vendors. If you are interested in finding out more, see my #SFD15 articles below.

VMware Cloud on Azure? Really?

I work for a global channel partner of Microsoft, VMware & AWS, and one of my teammates recently asked me whether VMware Cloud on Azure (a solution similar to VMware Cloud on AWS) would become a reality. It turned out that this was on the back of a statement from VMware CEO Pat Gelsinger, who supposedly mentioned “We have interest from our customers to expand our relationships with Google, Microsoft and others” and “We have announced some incremental expansions of those agreements”, which seems to have been represented in a CNBC article as VMware Cloud coming to Azure (insinuating the reality of vSphere on Azure bare-metal servers).

I sent my response back to the teammate outlining what I think of it and the reasoning behind my thought process, but I thought it would be good to get the thoughts of the wider community too, as it’s a very relevant question for many, especially if you work in the channel, work for the said vendors, or are a customer currently using the said technologies or planning to move to VMware Cloud on AWS.

Some context first,

I’ve been following the whole VMware Cloud on Azure discussion since it first broke out last year. Ever since VMware Cloud on AWS (VMWonAWS) was announced, there was some noise from Microsoft, specifically Corey Sanders (Corporate Vice President of Azure), about their own plans to build a VMWonAWS-like solution inside Azure data centers. Initially it looked like just a publicity stunt from MSFT to steal AWS’s thunder during the announcement of VMWonAWS, but details later emerged that, unlike VMWonAWS, this was not a jointly engineered solution between VMware & Microsoft, but a standalone vSphere solution running on FlexPod (NetApp storage and Cisco UCS servers), managed by a VMware vCAN partner who happened to host their solution in the same Azure DC, with L3 connectivity to Azure Resource Manager. Unlike VMWonAWS, there was no back-door connectivity to the core Azure services, only public API integration via the internet. Nor was it supposed to run vSphere on native Azure bare-metal servers, unlike VMWonAWS.

All the details were available in two main blog posts, one from Corey @ MSFT (here) and another from Ajay Patel (SVP, Cloud Products at VMware) here, but the contents of these two articles have since been changed to something completely different, or the original details removed entirely. Before Corey’s post was modified a number of times, he mentioned that they started working initially with the vCAN partner but later engaged VMware directly for discussions around potential tighter integration, and Ajay’s post (prior to being removed) corroborated the same. None of that info is there anymore, and while the two companies are no doubt talking behind the scenes about some collaboration, I am not sure it’s safe for anyone to assume they are working on a VMWonAWS-like solution for Azure. VMWonAWS is a genuinely integrated solution, the result of months and months of joint engineering, and while VMware may have incentives to do something similar with Azure, it’s difficult to see the commercial or PR benefit of such a joint solution to Microsoft, as it would ruin their existing messaging around Azure Stack, which is supposed to be their only & preferred hybrid cloud solution.

My thoughts!

In my view, what Pat Gelsinger was saying above (“we have interest from our customers to expand our relationship with Microsoft and others”) likely means something totally different from building a VMware Cloud on Azure that runs the vSphere stack on native Azure hardware. VMware’s vision has always been Any Cloud, Any App, Any Device, which they announced at VMworld 2016 (read the summary at http://chansblog.com/vmworld-2016-us-key-annoucements-day-1/), and the aspiration (based on my understanding at least) was to be the glue between all cloud platforms and on-premises, which is a great one. So when it comes to Azure, the only known plans (which are probably what Pat was alluding to) were the following two things,

  • To use NSX to bridge on-premises (& other cloud platforms) to Azure by extending network adjacency right into the Azure edge, in a similar way to how you can stretch networks to VMWonAWS. NSX-T version 2.2.0, which GA’d on Wednesday the 6th of June, can now create VMware virtual networks in Azure and manage those networks within your NSX data center inventory. All the details can be found here. What Pat was probably doing was setting the scene for this announcement, but it was not news, as it had been on the roadmap ever since VMworld 2016. It should probably not be taken to mean that VMware on Azure bare metal is a reality, at least at this stage.
  • In addition, VMware Cloud Services (VCS, a SaaS platform announced at VMworld 2017 – more details here) will have more integration with native AWS, native Azure and GCP, which is also what Pat is hinting at when he says more integration with Azure, but that too was always on the roadmap.

At least that’s my take on VMware’s plans and their future strategy. Things can change in a flash, as the IT market is full of changes these days, with so many competitors as well as co-opetitors. But I just can’t see, at least in the immediate future, there being a genuine VMware Cloud on Azure solution that runs vSphere on bare-metal Azure hardware, similar to VMWonAWS, despite what that CNBC article seems to insinuate.

What do you all think? Any insiders with additional knowledge or anyone with a different theory? Keen to get people’s thoughts!

Chan

VMware vSAN vExperts 2018

I’ve just found out that I’ve been selected as a vSAN vExpert again this year, which was great news indeed. The complete list of vSAN vExperts 2018 can be found at https://blogs.vmware.com/vmtn/2018/06/vexpert-vsan-2018-announcement.html

The vSAN vExpert programme is a sub-programme of the wider VMware vExpert programme in which, out of the already selected vExperts, people who have shown specific expertise and thought leadership around vSAN & related hyper-converged technologies are recognised for their efforts. The vSAN vExpert programme only started back in 2016, and while I missed out in the first year, I was a vSAN vExpert in 2017 too, so it’s quite nice to have been selected again for 2018.

As part of the vSAN vExpert program, selected members are typically entitled to a number of benefits such as NFR license keys for the full vSAN suite for lab and demo purposes, access to the vSAN product management team at VMware, exclusive webinars & NDA meetings, access to preview builds of new software, and the chance to provide feedback to the product management team on behalf of our clients, which is great for me as a technologist working in the channel.

I have been a big advocate of software-defined everything for about 15 years now as, the way I saw it, the power in most technologies is often derived from software; public cloud is the biggest testament to this we can see today. So when HCI became a “thing”, I was naturally a big promoter of the concept, and the Software Defined Storage (SDS) that made HCI what it is was something I’ve always seen the value in. While many other SDS technologies have appeared since then, vSAN was always unique in that it is tightly coupled to the underlying hypervisor like no other HCI / SDS solution, and this architectural difference is the main reason why I’ve always liked, and therefore promoted, the vSAN technology since its beta days. vSAN revenue has grown massively for VMware since its first launch with vSAN 5.5, and the vSAN business unit within VMware is now a self-sufficient business in its own right. Since I am fortunate to work for a VMware solutions provider partner here in the UK, I have seen first-hand how the number of vSAN solutions we’ve sold to our own customers grew over 900% year on year between 2016 and 2017, which fully aligns with the wider industry adoption of vSAN as a preferred storage option for most vSphere solutions.

This is only likely to increase, and some of the hardware innovation coming down the line, such as Storage Class Memory integration and NVMe over Fabrics technologies, will further enhance the performance and reliability of genuinely distributed software-defined storage technologies such as vSAN. So being recognised as a thought leader and a community evangelist for vSAN by VMware is a great honour, as I can continue to share my thoughts and updates on the product’s development with the wider community for other people to benefit from.

So thank you VMware for the honour again this year, and congratulations to all the others who have also been selected as vSAN vExperts 2018. Keep sharing your knowledge and thought leadership content…!

Chan

NetApp & Next Generation Storage Technologies

There are some exciting technology developments taking place in the storage industry, some behind closed doors but some publicly announced and already commercially available, which most of you may already have come across. Some of these are organic developments building on existing technologies, but some are inspired by megascalers like AWS, Azure, GCP and various other cloud platforms. I was lucky enough to be briefed on some of these when I was at SFD12 last year in Silicon Valley, by SNIA (the Storage Networking Industry Association), which I’ve previously blogged about here.

This time around, I was part of the Storage Field Day (SFD15) delegate panel that got a chance to visit NetApp at their Sunnyvale, CA HQ to find out more about some of the exciting new product offerings in NetApp’s roadmap, either in the works or just starting to come out, incorporating some of these new storage technologies. This post aims to provide a summary of what I learnt there and my thoughts.

Introduction

It is no secret that flash media has changed the dynamics of the storage market over the last decade due to its inherent performance characteristics. While the earliest incarnations of flash media were prohibitively expensive to use in mass quantities, the advent of SSDs commoditised the use of flash media across the entire storage industry. For example, most tier 1 workloads in the enterprise today are held on an SSD-backed storage system, where SSD drives form the whole or a key part of the storage media stack.

When you look at some of the key storage solutions in use today, two key existing high-speed media technologies stand out: DRAM & SSDs. DRAM is the fastest storage media most easily accessible by the data-processing compute subsystem, while SSDs fall into the next best place when it comes to speed of access and level of performance (IOPS & bandwidth). As such, most enterprise storage solutions in the world, be they aimed at customer data centers or at the megascalers’ cloud platforms, utilise one or both of these media types either to accelerate (cache) or simply to store tier 1 data sets.

It is important to note that, while SSDs benefitted from higher overall performance and lower latency compared to mechanical drives thanks to the internal architecture of the SSDs themselves (flash storage cells with no spinning magnetic media), both SSDs and classic mechanical (spinning) drives are typically attached to & accessed by the compute subsystem via the same SATA or SAS interface subsystem, with the same interface speed & latency. As a result, the internal performance of an SSD was often not realised to its full potential, especially in an aggregated scenario like an enterprise storage array, due to these interface controller speed and latency limitations, as illustrated in the diagram below.

One of the more recent technology developments in the storage and compute industry, “Non-Volatile Memory Express” (NVMe), aims to address these SAS- and SATA-driven performance and latency limitations through a new, high-performance host controller interface engineered from the ground up to fully utilise flash storage drives. This new NVMe storage architecture is designed to be future-proof and compatible with various future drive technologies, both NAND-based and non-NAND-based storage media.

NVMe SSDs connected via these NVMe interfaces not only outperform traditional SSDs attached via SAS or SATA, but, most importantly, enable future capabilities such as utilising Remote Direct Memory Access (RDMA) for very high storage performance, extending the storage subsystem over a fabric of interconnected storage and compute nodes. A good introduction to the NVMe technology and its benefits over SAS / SATA interfaces can be viewed here.
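As a back-of-the-envelope illustration of why the interface matters: the queue limits below come from the AHCI (SATA) and NVMe specifications, while the Python model itself is just a sketch I've put together, not anything from a vendor.

```python
# Command queueing capacity: one reason NVMe-attached SSDs can outperform
# the same NAND media sitting behind a legacy SATA/AHCI interface.
AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32             # AHCI (SATA): 1 queue of 32 commands
NVME_QUEUES, NVME_QUEUE_DEPTH = 65_535, 65_536    # NVMe spec maximums per device

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH

print(f"Max outstanding commands, AHCI/SATA: {ahci_outstanding}")
print(f"Max outstanding commands, NVMe:      {nvme_outstanding:,}")
```

In practice real drives expose far fewer queues than the spec maximum, but this parallelism gap (plus NVMe’s much shorter register-access path) is what lets aggregated flash, such as an all-flash array, get much closer to the media’s native performance.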

Another much-talked-about development on the same front is Storage Class Memory (SCM), also known as Persistent Memory (PMEM). SCM is an organic successor to the NAND-based SSDs in mainstream use in flash-accelerated as well as all-flash storage arrays today.

At a theoretical level, SCM can come in 2 main types as shown in the above diagram (from a really great IBM research paper published in 2013).

  • M-Type SCM (Synchronous) = Incorporates non-volatile memory based storage into the memory access subsystem (DDR) rather than the SCSI block-based storage subsystem through PCIe, achieving DRAM-like throughput and latency for persistent storage. It typically takes the form of an NVDIMM (attached to the memory bus, just like traditional DRAM), which is the fastest, best-performing option next to DRAM itself. It uses memory DIMM slots and appears to the system either as a caching layer or as pooled memory (extended DRAM space), depending on the NVDIMM type (NVDIMMs come in 3 types: NVDIMM-N, NVDIMM-F and NVDIMM-P; a good explanation is available here).
  • S-Type SCM (Asynchronous) = Incorporates non-volatile memory based storage, but attached via a PCIe connector to the storage subsystem. While theoretically slower than the above, it is still significantly faster than the NAND-based SSDs in common use today, including those attached via the NVMe host controller interface. Intel and Samsung have both already launched S-type SCM drives, Intel with their 3D XPoint architecture and Samsung with Z-SSD respectively, but the drive models currently available are aimed more at consumer / workstation than server workloads. Server-based implementations of similar SCM drives will likely arrive around 2019 (along with supporting server-side software included within operating systems such as hypervisors – vSphere 7, anyone?).

The idea of SCM is to address the latency and performance gap that has existed between memory and storage in every computer system since the advent of x86 computing. Typical access latency for DRAM is around 60 ns, while the next best option today, an NVMe SSD, has a typical latency of around 20-200 µs; SCM fits in between, at a typical latency of 60 ns-20 µs depending on the type of SCM, with significantly higher bandwidth than any SSD. It is important to note, however, that while most ordinary workloads do not need this super latency-sensitive, extremely high-bandwidth storage performance, the next-generation data technologies involving Artificial Intelligence techniques such as machine learning and real-time analytics, which rely on processing extremely large swathes of data in super quick time, would absolutely benefit from, and in most instances necessitate, these next-gen storage technologies to be fully effective.
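Plugging the latency figures quoted above into a quick sketch shows the size of the gap SCM is meant to fill (the numbers are the rough, order-of-magnitude values from this post, not vendor measurements):

```python
# Rough latency tiers (in nanoseconds) from the figures quoted in this post.
DRAM_NS = 60
SCM_NS = (60, 20_000)            # ~60 ns to ~20 µs, depending on SCM type
NVME_SSD_NS = (20_000, 200_000)  # ~20 µs to ~200 µs

# Even a best-case NVMe SSD access costs hundreds of DRAM accesses.
best_case_gap = NVME_SSD_NS[0] / DRAM_NS
print(f"Best-case NVMe SSD latency is ~{best_case_gap:.0f}x DRAM latency")
```

That multiple-orders-of-magnitude cliff between DRAM and block storage is exactly where M-type and S-type SCM sit.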

NetApp’s NVMe & SCM vision

NetApp was one of the first classic storage vendors to incorporate flash into their storage systems in an efficient manner, to accelerate workloads typically stored on spinning disks. This started with the concept of NVRAM, included in their flagship FAS storage solutions as an acceleration layer. Then came Flash Cache (PAM cards), flash media attached via the PCIe subsystem to act as a caching layer for reads, which was also popular. Since the advent of all-flash storage arrays, NetApp went another step by introducing all-flash storage into their portfolio, through the likes of the All Flash FAS platform, engineered and tuned for all-flash media, as well as the EF series.

NetApp’s innovation and constant improvement hasn’t stopped there. During the SFD15 event, we were treated to the next step of this technology evolution when NetApp discussed how they plan to incorporate the above-mentioned NVMe and SCM storage technologies into their storage portfolio, in order to provide next-gen storage capabilities serving next-gen use cases such as AI, big data and real-time analytics. Given below is a holistic roadmap of where NetApp see NVMe and SCM technologies fitting in, based on the characteristics, benefits and costs of each technology.

The planned use of NVMe falls at 2 different points on the host-to-storage-array communication path.

  • NVMe SSD drives : NVMe SSDs in a storage array, attached via the NVMe host controller interface, so that the storage processor (in the controllers) can fully utilise the latency and throughput potential of the SSDs themselves. This will add performance to the existing arrays.
  • NVMe-oF : NVMe over Fabrics, which attaches the storage consumer nodes (servers) via an ultra-low-latency NVMe fabric. NVMe-oF enables the use of RDMA capabilities to reduce the distance between the IO generator and the IO processor, thereby significantly reducing latency. NVMe-oF is therefore widely touted to be the next big thing in the storage industry, and a number of specialist start-ups like Excelero have already come to market with solutions; you can find out more in my blog here. An example of an NVMe-oF storage solution available from NetApp is the new NetApp EF570 all-flash array. This product is already shipping and more details can be found here or here. The platform offers some phenomenal performance numbers at ultra-low latency, built around their trusted, mature, feature-rich yet simple EF storage platform, which is also a bonus.

The planned (or experimental) use of SCM is in 2 specific areas of the storage stack, driven primarily by the cost of the media versus the need for acceleration.

  • Storage controller side caching: NetApp mentioned that some of their experiments, with prototype solutions already built, are looking at using SCM media in the storage controllers as another tier to accelerate performance, in the same way PAM cards / Flash Cache were used in older FAS systems. This is a relatively straightforward upgrade and would be especially effective in an all-flash FAS solution with SSDs in the back end, where a traditional NAND-based flash cache card would be less effective.
  • Server (IO generator) side caching: This use case looks at using SCM media in the host compute systems that generate the IO, to act as a local cache, but, most importantly, used in conjunction with the storage controllers rather than in isolation, performing tiering and snapshots from the host cache to a back-end storage system like an All Flash FAS. NetApp are experimenting on this front primarily using their recent acquisition of Plexistor, whose proprietary software combines DRAM and SCM into a single, byte-addressable address space (using memory semantics, which is much faster than SCSI- or NVMe-addressed storage) and presents that to applications as a cache, while also presenting a back-end NetApp storage array such as an All Flash FAS as a persistent storage tier. Applications achieve significantly lower latency and ultra-high throughput this way, by caching the hot data in the Plexistor file system, which incidentally bypasses the complex Linux IO stack (comparison below). The Plexistor tech is supposed to provide enterprise-grade features as part of the same software stack, though specifics of what those enterprise-grade features are were lacking (guessing the typical availability and management capabilities natively available within ONTAP?).
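The “memory semantics” point is worth a small illustration. Persistent-memory stacks like Plexistor’s let applications load and store bytes directly instead of issuing block IO through the kernel. The sketch below fakes that idea with an ordinary mmap-ed temporary file, purely to show the programming model; it is not the Plexistor implementation, which on real SCM would map the media itself rather than a file.

```python
# Byte-addressable ("memory semantics") access to storage via mmap:
# the store below is a plain slice assignment, not a read()/write() block IO call.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)              # back the mapping with one 4 KiB page
    with mmap.mmap(fd, 4096) as buf:
        buf[0:5] = b"hello"             # byte-granular store, no syscall per access
        assert bytes(buf[0:5]) == b"hello"
finally:
    os.close(fd)
    os.remove(path)
```

Combining DRAM with SCM behind one address space like this is what lets a Plexistor-style stack serve hot data at memory speeds while tiering colder data to a back-end array.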

Based on some of the initial performance benchmarks, the effect of this is significant, as can be seen in the comparison below.

My thoughts

As an IT strategist and an architect at heart with a specific interest in storage, who can see super-scale data processing (read “extremely large quantities of data”) becoming a common use case across most industries soon due to the introduction of big data, real-time analytics and the accompanying machine learning tech, I can see value in this strategy from NetApp. Most importantly, the fact that they are looking at using these advanced technologies in harmony with some of the proven, tried and tested data management platforms they already have, in the likes of the ONTAP software, could be a big bonus. The acquisition of Plexistor was a good move for NetApp, and integrating their tech into a shipping product would be super awesome if and when it happens, though I would dare say the use cases would be somewhat limited initially, given the Linux dependency. Others are taking note too: HCI vendor Nutanix’s acquisition of PernixData hints at Nutanix having a similar strategy to that of Plexistor and NetApp.

While the organic growth of the current product portfolio by incorporating new tech such as NVMe is fairly straightforward and helps NetApp stay relevant, it remains to be seen how well acquisition-driven integrations, such as that of Plexistor and its SCM technologies into the NetApp platform, will pan out into a shipping product. NetApp has historically had issues with the efficiency of this integration process, which in the past has been known to be slow, but under the new CEO George Kurian, who brought in a more agile software development methodology and therefore a more frequent feature & update release cycle, things may well be different this time around. The evidence seen during SFD15 pretty much suggests the same to me, which is great.

Slide credit to NetApp!

Thanks

Chan

NetApp United 2018 – No it’s not another football team!

I was glad to see an email from the NetApp United team this afternoon confirming that I’ve been selected as a member of the prestigious NetApp United (#NetAppUnited) team for 2018, which is a great honour indeed. Thanks NetApp!

Contrary to popular belief, NetApp United is NOT a football team but a global community of individuals united by a passion for great technology. Similar to the VMware vExpert and Dell EMC Elect programmes, NetApp United is a community programme run by NetApp (@PeytStefanova is the organiser-in-chief) to recognise global NetApp technology experts and community influencers, with a view to giving them a platform to share more of their thoughts, content, influence and, ultimately, expertise publicly through various community channels. Like the other vendors’ community programs, NetApp United is all about giving back to the community, which is a good cause I was happy to support.

Being recognised as a member of the NetApp United program entitles you to a number of exclusive benefits, such as dedicated NetApp technology update sessions with product engineers, exclusive briefings about upcoming NetApp solutions and products, access to a private Slack channel for community members to discuss all things technical and NetApp-related, and other exclusive events at NetApp Insight in the US and EMEA. All of these perks are nice to have indeed, as they enable us to share some of that information with others out there, as well as provide our own thoughts, which would be beneficial for current or future NetApp customers.

As I work for a global NetApp partner, I am looking forward to using the access to information this program provides to better leverage our partnership with NetApp, as well as to educate our joint customers on future NetApp strategy. As I am also an independent contributor (outside of work), I intend to share some of this information (outside of NDA stuff) with my general audience, to help you understand various NetApp solutions, their strategy and my independent thoughts on them, which I think is important. I have been working with NetApp for a long time, initially as a customer and then as a partner, and I’ve always been a great fan of their core strategy, which was always about software, despite them being a hardware product manufacturer. They have some extremely awesome innovation already in their portfolio, and even better innovation in the making for the future (have a look at the recently concluded #SFD15 presentation from them about the Data Pipeline vision here), and I am looking forward to sharing some of it, along with my thoughts, with everyone.

The full list of NetApp United 2018 members can be found here. Congratulations to all those who got selected, and thank you NetApp & @PeytStefanova for the invitation and the recognition!

Cheers

Chan