Category Archives: NetApp

All NetApp Contents

NetApp & Next Generation Storage Technologies

There are some exciting technology developments taking place in the storage industry, some behind closed doors but some publicly announced and already commercially available, which many of you may already have come across. Some of these are organic developments that build on existing technologies, while others are inspired by megascalers such as AWS, Azure, GCP and various other cloud platforms. I was lucky enough to be briefed on some of these at SFD12 last year in Silicon Valley by SNIA, the Storage Networking Industry Association, which I've previously blogged about here.

This time around, I was part of the Storage Field Day (SFD15) delegate panel that got the chance to visit NetApp at their HQ in Sunnyvale, CA to find out more about some of the exciting new product offerings in NetApp's roadmap, either in the works or just starting to come out, that incorporate some of these new storage technologies. This post aims to provide a summary of what I learnt there and my thoughts on it.

Introduction

It is no secret that flash media has changed the dynamics of the storage market over the last decade due to its inherent performance characteristics. While the earliest incarnations of flash media were prohibitively expensive to use in mass quantities, the advent of SSDs commoditised flash media across the entire storage industry. For example, most tier 1 workloads in the enterprise today sit on an SSD-backed storage system, where SSD drives form the whole, or a key part, of the storage media stack.

When you look at some of the key storage solutions in use today, there are 2 key existing technologies that stand out: DRAM and (NAND flash based) SSD. DRAM is the fastest storage media, most easily accessible to the data processing compute subsystem, while SSDs fall into the next best place when it comes to speed of access and the level of performance (IOPS & bandwidth). As such, most enterprise storage solutions in the world, be they the ones aimed at customer data centres or at the megascalers' cloud platforms, utilise one or both of these media types to either accelerate (caching) or simply store tier 1 data sets.

It is important to note that, while the SSD’s benefitted from the overall higher performance and lower latency compared to mechanical drives due to the internal architecture of the SSD disks themselves (flash storage cells that don’t require spinning magnetic media), both the SSD drives and classic mechanical (spinning) drives are typically attached & accessed by the compute subsystem via the same SATA or the SaS interface subsystem with the same interface speed & latency. Often the internal performance of an SSD was not fully realised to its maximum potential, especially in an aggregated scenario like that of an enterprise storage array, due to these interface controller access speed and latency limitations, as illustrated in the diagram below.

One of the more recent technology developments in the storage and compute industry, Non-Volatile Memory Express (NVMe), aims to address these SAS and SATA driven performance and latency limitations through a new, high-performance host controller interface that has been engineered from the ground up to fully utilise flash storage drives. The NVMe storage architecture is designed to be future proof and will be compatible with various future drive technologies, both NAND based and non-NAND based storage media.

NVMe SSD drives connected via these NVMe interfaces will not only outperform traditional SSD drives attached via SAS or SATA but, most importantly, will enable future capabilities such as utilising Remote Direct Memory Access (RDMA) for very high storage performance, extending the storage subsystem over a fabric of interconnected storage and compute nodes. A good introduction to the NVMe technology and its benefits over SAS/SATA interfaces can be viewed here.

Another much talked about development on the same front is Storage Class Memory (SCM), also known as Persistent Memory (PMEM). SCM is an organic successor to the NAND-based SSD drives that we see in mainstream use in flash-accelerated as well as all-flash storage arrays today.

At a theoretical level, SCM can come in 2 main types as shown in the above diagram (from a really great IBM research paper published in 2013).

  • M-Type SCM (Synchronous) = Incorporates non-volatile memory based storage into the memory access subsystem (DDR) rather than the SCSI block storage subsystem via PCIe, achieving DRAM-like throughput and latency for persistent storage. It typically takes the form of an NVDIMM (attached to the memory bus, like traditional DRAM), which is the fastest, best performing option next to DRAM itself. It uses memory card slots and appears to the system either as a caching layer or as pooled memory (extended DRAM space), depending on the NVDIMM type (NVDIMMs come in 3 types, NVDIMM-N, NVDIMM-F and NVDIMM-P; a good explanation is available here).
  • S-Type SCM (Asynchronous) = Incorporates non-volatile memory based storage but attached via PCIe to the storage subsystem. While this is theoretically slower than the above, it is still significantly faster than the NAND-based SSD drives in common use today, including those attached via the NVMe host controller interface. Intel and Samsung have both already launched S-type SCM drives, Intel with its 3D XPoint architecture and Samsung with Z-SSD respectively, but the drive models currently available are aimed more at consumer/workstation than server workloads. Server-grade implementations of similar SCM drives will likely arrive around 2019 (along with supporting server software within operating systems such as hypervisors; vSphere 7 anyone?).

The idea of SCM is to address the latency and performance gap between memory and storage that has existed in every computer system since the advent of x86 computing. Typical access latency for DRAM is around 60 ns, while the next best option today, an NVMe SSD drive, has a typical latency of around 20-200 µs. SCM fits between the two, at a typical latency of 60 ns-20 µs depending on the type of SCM, with bandwidth far beyond what SSD drives can offer. It is important to note that while most ordinary workloads do not need this kind of super latency sensitive, extremely high bandwidth storage performance, next generation data technologies involving Artificial Intelligence techniques such as machine learning and real-time analytics, which rely on processing extremely large swathes of data very quickly, would absolutely benefit from, and in most instances necessitate, these next gen storage technologies to be fully effective.
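To put those numbers side by side, here's a tiny sketch using representative midpoints from the ranges quoted above (rough orders of magnitude, not benchmark results):

```python
# Representative access latencies, taken as midpoints of the ranges quoted
# in the post. Illustrative orders of magnitude only, not measured figures.
latency_ns = {
    "DRAM":     60,        # ~60 ns
    "SCM":      5_000,     # somewhere in the 60 ns - 20 us band
    "NVMe SSD": 100_000,   # in the 20 - 200 us band
}

for medium, ns in latency_ns.items():
    ratio = ns // latency_ns["DRAM"]
    print(f"{medium:8s} ~{ns:>7,} ns  (about {ratio:,}x DRAM latency)")
```

Even with generous midpoints, the jump from DRAM to NVMe SSD is three to four orders of magnitude, which is exactly the gap SCM is meant to bridge.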

NetApp’s NVMe & SCM vision

NetApp was one of the first classic storage vendors to incorporate flash into their storage systems in an efficient manner, to accelerate workloads that were typically stored on spinning disks. This started with the concept of NVRAM, included in their flagship FAS storage solutions as an acceleration layer. Then came Flash Cache (PAM cards), flash media attached via the PCIe subsystem to act as a caching layer for reads, which was also popular. Since the advent of all-flash storage arrays, NetApp went another step by introducing all-flash storage into their portfolio, with the likes of the All Flash FAS platform, engineered and tuned for all-flash media, as well as the EF series.

NetApp's innovation and constant improvement hasn't stopped there. During the SFD15 event, we were treated to the next step of this technology evolution, when NetApp discussed how they plan to incorporate the above mentioned NVMe and SCM storage technologies into their storage portfolio, in order to provide next gen storage capabilities serving next gen use cases such as AI, big data and real-time analytics. Given below is a holistic roadmap of where NetApp sees NVMe and SCM fitting into their portfolio, based on the characteristics, benefits and costs of each technology.

The planned use of NVMe clearly falls at 2 different points in the host -> storage array communication path.

  • NVMe SSD drives : NVMe SSD drives in a storage array, attached via the NVMe host controller interface so that the storage processors (in the controllers) can fully utilise the latency and throughput potential of the SSD drives themselves. This will add further performance to the existing arrays.
  • NVMe-oF : NVMe over Fabrics, which attaches the storage to the consumer nodes (servers) via an ultra-low latency NVMe fabric. NVMe-oF enables the use of RDMA capabilities to shorten the path between the IO generator and the IO processor, significantly reducing latency. NVMe-oF is therefore widely touted as the next big thing in the storage industry, and a number of specialist start-ups like Excelero have already come to market with solutions; you can find out more in my blog here. An example of an NVMe-oF storage solution available from NetApp is the new NetApp EF570 all-flash array. This product is already shipping and more details can be found here or here. The platform offers some phenomenal performance numbers at ultra-low latency, built on their trusted, mature, feature-rich yet simple EF storage platform, which is also a bonus.

The planned (or experimental) use of SCM is in 2 specific areas of the storage stack, driven primarily by the cost of the media versus the need for acceleration.

  • Storage controller side caching: NetApp mentioned that some of the experiments they are working on, with prototype solutions already built, look at using SCM media in the storage controllers as another tier to accelerate performance, in the same way PAM cards / Flash Cache were used on the older FAS systems. This is a relatively straightforward upgrade and would be especially effective in an All Flash FAS solution with SSD drives in the back end, where a traditional NAND-based Flash Cache card would be less effective.
  • Server (IO generator) side caching: This use case looks at using SCM media in the host compute systems that generate the IO, to act as a local cache; most importantly, it is used in conjunction with the storage controllers rather than in isolation, performing tiering and snapshots from the host cache to a backend storage system such as an All Flash FAS.
  • NetApp are experimenting on this front primarily using their recent acquisition of Plexistor, whose proprietary software combines DRAM and SCM into a single, byte-addressable address space (via memory semantics, which is much faster than SCSI or NVMe addressable storage) and presents it to applications as a cache, while also presenting a backend NetApp storage array such as an All Flash FAS as a persistent storage tier. Applications achieve significantly lower latency and ultra-high throughput this way by caching the hot data using the Plexistor file system, which incidentally bypasses the complex Linux IO stack (comparison below). The Plexistor tech is supposed to provide enterprise grade features as part of the same software stack, though specifics of what those enterprise grade features are were lacking (guessing the typical availability and management capabilities natively available within ONTAP?).
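To make the "memory semantics" point concrete, here's a small Python sketch of byte-addressable persistence. Since real NVDIMM/DAX hardware can't be assumed, it simulates the idea with an ordinary mmap'ed file: data is written with plain byte stores rather than read()/write() block IO calls, which is the access pattern a stack like Plexistor's exploits.

```python
import mmap
import os
import tempfile

# Byte-addressable persistence, simulated with an ordinary mmap'ed file.
# On a real SCM/NVDIMM setup the same load/store pattern would hit the
# persistent media directly (e.g. via a DAX-mounted file system),
# bypassing the block IO path entirely.
path = os.path.join(tempfile.mkdtemp(), "pmem_sim")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # reserve one page of "persistent memory"

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"                # a plain byte store, no block IO calls
    mem.flush()                        # analogous to flushing CPU caches to media
    mem.close()

with open(path, "rb") as f:
    print(f.read(5))                   # the store survived the unmap
```

This is only an illustration of the programming model, of course; the whole point of SCM is that the "file" behind the mapping is itself the persistent media, so there is no slower backing store to flush to.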

Based on some of the initial performance benchmarks, the effect of this is significant, as can be seen below when compared to the normal Linux IO stack.

My thoughts

As an IT strategist and an architect at heart, with a specific interest in storage, who can see super data (read: extremely large quantities of data) processing becoming a common use case across most industries soon, due to the introduction of big data, real-time analytics and the accompanying machine learning tech, I can see value in this strategy from NetApp. Most importantly, they are looking at using these advanced technologies in harmony with some of the proven, tried and tested data management platforms they already have, in the likes of the ONTAP software, which could be a big bonus. The acquisition of Plexistor was a good move for NetApp; integrating their tech into a shipping product would be super awesome if and when that happens, but I would dare say the use cases will be somewhat limited initially, given the Linux dependency. Others are taking note too: HCI vendor Nutanix's acquisition of PernixData hints at Nutanix having a similar strategy to that of Plexistor and NetApp.

While the organic growth of the current product portfolio through incorporating new tech such as NVMe is fairly straightforward and helps NetApp stay relevant, it remains to be seen how well acquisition driven integration, such as that of Plexistor and its SCM technologies into the NetApp platform, will pan out into a shipping product. NetApp has historically had issues with the efficiency of this integration process, which in the past has been known to be slow. But this time around, under the new CEO George Kurian, who brought in a more agile software development methodology and therefore a more frequent feature and update release cycle, things may well be different. The evidence seen during SFD15 pretty much suggests the same to me, which is great.

Slide credit to NetApp!

Thanks

Chan

NetApp United 2018 – No it’s not another football team!

I was glad to see an email from the NetApp United team this afternoon confirming that I've been selected as a member of the prestigious NetApp United (#NetAppUnited) team for 2018, which is a great honour indeed. Thanks NetApp!

Contrary to popular belief, NetApp United is NOT a football team but a global community of individuals united by a passion for great technology. Similar to the VMware vExpert and Dell EMC Elect programmes, NetApp United is a community programme run by NetApp (@PeytStefanova is the organiser in chief) to recognise global NetApp technology experts and community influencers, with a view to giving them a platform to share more of their thoughts, content, influence and, ultimately, more of their expertise publicly through various community channels. Similar to the other vendors' community programmes, NetApp United is all about giving back to the community, which is a good cause and one I was happy to support.

Being recognised as a member of the NetApp United programme entitles you to a number of exclusive benefits, such as dedicated NetApp technology update sessions with product engineers, exclusive briefings about upcoming NetApp solutions and products, access to a private Slack channel for community members to discuss all things technical and NetApp related, and other exclusive events at NetApp Insight in the US and EMEA. All of these perks are nice to have indeed, as they enable us to share some of this information with others out there, as well as provide our own thoughts, which would be beneficial for current or future NetApp customers.

As I work for a global NetApp partner, I am looking forward to using the access to information I have as part of this programme to better leverage our partnership with NetApp, as well as to educate our joint customers on future NetApp strategy. As I am also an independent contributor (outside of work), I intend to share some of this information (outside of NDA material) with my general audience, to help you understand various NetApp solutions, their strategy and my independent thoughts on them, which I think is important. I have been working with NetApp for a long time, initially as a customer and then as a partner, and I've always been a great fan of their core strategy, which has always been about software despite their being a hardware product manufacturer. They have some extremely awesome innovation already available in their portfolio and even better innovation in the making for the future (have a look at the recently concluded #SFD15 presentation from them about the Data Pipeline vision here), and I am looking forward to sharing some of it, along with my thoughts, with everyone.

The full list of all the NetApp United 2018 members can be found here. Congratulations to all those who were selected, and thank you NetApp & @PeytStefanova for the invitation and the recognition!

Cheers

Chan

Storage Field Day 15 – Watch Live Here

Following on from my previous post about the vendor line-up and my plans during the event, this post is to share the exact vendor presentation schedule and some additional details.

Watch LIVE!

Below is the live streaming link to the event if you'd like to join us LIVE on the day. While the time difference might make it a little tricky for some, it is well worth taking part, as all viewers will also have the chance to ask questions of the vendors live, just like the delegates on site. Just do it, you won't be disappointed!

Session Schedule

Given below is the session schedule throughout the event, starting from Wednesday the 7th. All times are in Pacific Time (8 hours behind UK time).

Wednesday the 7th of March

    • 09:30 – 11:30 (5:30-7:30pm UK time) – WekaIO presents
    • 13:00 – 15:00 (9-11pm UK time) – IBM presents
    • 16:00 – 18:00 (12-2am 8th of March, UK time) Dropbox presents

Thursday the 8th of March

  • 08:00-10:00 (4-6pm UK time) – Hedvig presents from their Santa Clara offices
  • 10:30-12:30 (6:30-8:30pm UK time) NetApp presents from their Santa Clara offices
  • 13:30-15:30 (9:30-11:30pm UK time) – Western Digital/Tegile presents from Levi’s Stadium
  • 16:00-18:00 (12-2am 9th of March, UK time) – Datrium presents from Levi’s Stadium

Friday the 9th of March

  • 08:00-10:00 (4-6pm UK time) – StarWind presents in the Seattle Room
  • 11:00-13:00 (7-9pm UK time) – Cohesity presents at their San Jose offices
  • 14:00-16:00 (10pm-12am UK time) – Huawei presents at their Santa Clara offices

FlexPod: The Joint Wonder From NetApp & Cisco (often with VMware vSphere on Top)


While attending NetApp Insight 2015 in Berlin this week, I was reminded of the monumental growth in the number of customers who have been deploying FlexPod as their preferred converged solutions platform, which now celebrates its 5th year in operation. So I thought I'd do a very short post to give you my personal take on it and highlight some key materials.

FlexPod has been gaining lots of market traction as the converged solution platform of choice for many customers over the last 4 years. This has been due to the solid hardware technologies that underpin the solution (Cisco UCS compute + Cisco Nexus unified networking + the NetApp FAS range of clustered Data ONTAP SAN). Often, customers deploy FlexPod solutions together with VMware vSphere or MS Hyper-V on top (other hypervisors are also supported), which together provide a complete, ready to go live, private and hybrid cloud platform that has been pre-validated to run most, if not all, typical enterprise data centre workloads. I have been a strong advocate of FlexPod (simply due to its technical superiority as a converged platform) for many of my customers since its inception.

Given below are some of the interesting FlexPod validated designs from Cisco & NetApp for Application performance, Cloud and automation, all in one place.

There are over 100 FlexPod validated designs available in addition to the above, and they can all be found below.

There is a certified, pre-validated, detailed FlexPod design and deployment guide for almost every datacentre workload, and based on my first-hand experience, FlexPod with VMware vSphere has always been a very popular choice amongst customers, as things just work together beautifully. Given the joint vendor support available, sourcing support from a single vendor for all the tech in the solution is easy too. I also think customers prefer FlexPod over other similar converged solutions, say VBLOCK for example, due to the non-prescriptive nature of FlexPod, whereby you can tailor-make a FlexPod solution to meet your needs (a FlexPod partner can do this for a customer), which keeps the costs down too.

There are many FlexPod certified partners available who can size, design, sell and implement a FlexPod solution for a customer, and my employer Insight is one of them (in fact we were amongst the first few partners to gain FlexPod partnership in the UK). So if you have any questions around the potential use of a FlexPod system, feel free to get in touch directly with me (contact details in the About Me section of this site) or through the FlexPod section of the Insight Direct UK web site.

Cheers

Chan

NetApp Integrated EVO:RAIL

NetApp has announced their version of the VMware EVO:RAIL offering – NetApp Integrated EVO:RAIL solution. So I thought I’d share with you some details if you are keen to find out a bit more.

First of all, VMware EVO:RAIL is one of the true hyper-converged infrastructure solutions available in the market today, and I'd encourage you to read up a little more about it here first if you are new to such hyper-converged solutions. A key element of the traditional VMware EVO:RAIL offering is that the underpinning storage is normally provided by VMware VSAN. While there's a lot of goodwill and a great vibe in the industry about VSAN as a disruptive software defined storage technology with lots of potential, if you come from a traditional storage background, where you understand the importance of specialist storage solutions (SAN) that have built up their storage capabilities over years of work in the field (think EMC, NetApp, 3PAR, HDS), you may feel a little nervy about putting your key application data on a relatively new storage technology like VSAN.

Some of these storage vendors recognised this and added their storage tech to the VMware EVO:RAIL offering, with a view to complementing the basic offering. A list of those available can be found here (but please note that not all the vendors that appear there offer their own storage with EVO:RAIL; some simply offer the server hardware with VMware VSAN as the only storage option, and this is not made very clear). NetApp Integrated EVO:RAIL is NetApp's version of this solution where, alongside VMware VSAN to store temporary and less important data, a dedicated NetApp enterprise SAN cluster, with all the NetApp innovation found within its Data ONTAP operating system, is automatically made available to customers within the EVO:RAIL solution. (EMC also announced something a little similar recently, offering a VSPEX BLUE hyper-converged appliance with VMware EVO:RAIL, which you can read about here. Until then, they only sold EVO:RAIL with just VMware VSAN rather than with a bundled EMC storage offering behind it, so be careful if you are considering an EVO:RAIL offering from EMC.)

A couple of background points on the concept of hyper-converged infrastructure first:

  • The integrated/converged infrastructure market has been growing across many use cases of late. For example, FlexPod & VBLOCK have been massive successes, and the estimation is that 14.6% of the hardware market (server, storage & networking) is to be part of an integrated infrastructure.
  • Hyper-converged infrastructure such as VMware EVO:RAIL is naturally the next evolution of this. EVO:RAIL can be classed as a true hyper-converged solution compared to some other popular integrated solutions that use a 3rd party hypervisor, such as Nutanix and SimpliVity, which are also often referred to as hyper-converged platforms.
  • It was estimated that the hyper-converged market was worth around $400-500 million for 2014
  • Amongst many use cases, hyper-converged solutions are touted as a good solution for the likes of branch offices, where, due to limited staff and infrastructure isolation requirements, the simplicity of the setup and the modular, self-sufficient nature of the solution make them a good fit.
  • NetApp's view seems to be that VMware EVO:RAIL is very much a prescriptive solution that is not as scalable as a traditional infrastructure consisting of separate compute, storage & network nodes (i.e. FlexPod, VBLOCK), and it's probably a view shared by the majority of storage vendors.

Let's take a closer look at what the NetApp Integrated EVO:RAIL solution is and what it's going to give you.

  • NetApp and VMware have had a long-standing history of joint innovation, with more than 40,000 joint customers to date

1. History

  • NetApp Integrated EVO:RAIL brings a trusted storage platform vendor into the existing VMware EVO:RAIL architecture and is naturally only targeted at VMware customers.
  • Given below is the technical summary of the NetApp Integrated Evo:RAIL solution.
    • NetApp branded compute nodes (Co-branded with VMware)
      • Fixed server configuration similar to other competitive EVO:RAIL solutions.
      • 4 independent server nodes per NetApp server chassis
      • Dual Intel E5-2620v2 CPUs per server with 48 cores total per chassis
      • 192GB of RAM per server with 768GB of RAM total per chassis
      • Dual 10GbE NIC (optical or copper) SFP+ per server
      • NetApp fully provide all the server hardware support (the actual OEM name is a secret). This should not be too much of a concern to customers, as a compute node is not massively different from the SAN controllers (both x86 systems) that they've been supporting for years.
    • NetApp Storage nodes
      • Comes with a NetApp FAS2552 highly available SAN with Flash Pool (Flash Pool is NetApp's way of using SSD disks in the shelves as a caching layer to optimise random read and random overwrite workloads, typically seen in VDI, OLTP databases and virtualisation. More info here.)
      • Includes the Premium software bundle, which includes:
        • NetApp® Virtual Storage Console
        • NetApp NFS Plug-in for VMware VAAI
        • NetApp clustered Data ONTAP
        • NetApp Integration Software for VMware EVO:RAIL
        • NetApp FlexClone, SnapRestore, SnapMirror, SnapVault, Single Mailbox Recovery, SnapManager Suite
      • Approximately 12.6TB of NetApp usable capacity for enterprise data, with SSDs included for Flash Pool (+6.5TB of VSAN usable capacity)
      • Based on FAS2552 in a switchless cDOT cluster
      • Virtual SAN for the vSphere infrastructure (as a base component to bring the solution up and running initially)
    • VMware Software Included
      • VMware EVO:RAIL software
      • VMware vCenter Server
      • VMware vSphere Enterprise Plus
      • VMware vRealize Log Insight
      • VMware Virtual SAN

Given below is the physical connectivity architecture of the NetApp integrated Evo:RAIL

2. Connectivity

  • The current offering has 2 types of storage:
    • VMware VSAN storage: Basic local server storage which is controlled by VSAN. Base application, SWAP space and temporary data can be placed here.
    • NetApp storage: Used for application deployments that require DR (NetApp SnapMirror etc.), granular performance control (VST), security and all traditional SAN requirements. For example, database servers like SQL Server and Oracle, and other applications like SAP, SharePoint and Exchange, as well as VDI deployments that require application integration for backup and recovery, can have their data placed on the NetApp for the SnapManager application integration.
  • NetApp Integrated EVO:RAIL also comes with the following benefits
    • NetApp Global Support providing,
      • Single contact for solution support
      • 3 years of NetApp SupportEdge Premium Services for compute, storage, and NetApp and VMware software (note that NetApp already specialise in this joint support model through the FlexPod support arrangement between NetApp, Cisco and VMware, which they are presumably leveraging here)
      • 3 year hardware warranty (NetApp storage and server hardware)
      • Onsite Next Business Day and Same Day 4 hour parts replacement
  • Simple Deployment
    • Additional EVO:RAIL configuration engine integration software from NetApp (click and launch from the EVO:RAIL home page) aims to simplify the deployment of the NetApp storage as part of the EVO:RAIL deployment.
    • Key points to note here are,
      • Simple setup and configuration & NetApp best practices automatically applied
      • Unified management across virtual and storage environment using vCenter Web Client with integrated NetApp Virtual Storage Console
      • Deep application integration: Exchange, SQL Server, SharePoint, Oracle and SAP
    • Overall deployment takes approximately 11 minutes for the EVO:RAIL, plus about 5 minutes for the NetApp SAN
    • A NetApp automation VM (called NTP-QEP) is deployed automatically as part of the initial deployment configuration, and acts as the glue between the EVO:RAIL management software and the NetApp hardware (I wonder if we can get this appliance with API access so we can point it at a standalone NetApp?? That would be pretty awesome now, wouldn't it??)

4. Demo 1

    • The current prototype version of the integration software can be accessed, once you log in to the EVO:RAIL management console, via the NetApp icon on the left. Once launched, it takes you to a simple data collection screen that asks for vCenter credentials, the storage system password, management & data network details and the license details for the NetApp. Once these are provided and submitted, the automation engine configures the whole NetApp cDOT cluster automatically, based on NetApp best practice: the VSC VM is deployed, the cluster instantiated, node management LIFs created, the SP and Flash Pool configured, and the SVM and FlexVols created, with datastores mounted to VMware for use. Things like deduplication are also automatically enabled.
    • Since the NetApp Virtual Storage Console plugin is automatically installed, you can easily make any additional NetApp configuration changes through that afterwards if you really want.
  • Current planned use cases
    • Mainly aimed at branch offices as a solution
    • Also recommended as a point solution aimed at achieving compliance and application integration, such as database system deployments with built-in backup and DR
    • Also positioned for VDI deployments (due to the built in flash option and the ease of deployment) with integrated backup and DR
  • Ordering & Availability
    • All components are available as a single product with 2 SKUs, a product SKU and a support SKU. That's it, and they include all NetApp and VMware software components.
    • Targeted availability for ordering is somewhere around Q1/Q2 this year (2015)

Sounds like an interesting proposition from NetApp, and I can see the value. Especially if you are an existing NetApp customer who knows, and is used to, all the handy tools available from the storage layer, and who is looking at VMware EVO:RAIL for a point solution or a branch office solution, this would be a simple no-brainer.

Cheers

Slide credit goes to NetApp..!

Chan

 

NetApp Lanamark (HPAS)

If you are a NetApp presales SE working for NetApp or a reseller, or simply an IT consultant working onsite (at a customer) trying to procure and deploy a NetApp enterprise SAN storage system as part of an IT solution, one of the hardest things you'd have to do is figure out how big to make the SAN in order for it to last a decent while, without having to upgrade it to something bigger and better a few months down the line. Accurate sizing is extremely important, and you must pay enough attention to it upfront to scientifically finalise the size and specification of your new SAN based on the workload you're going to put on it. This is simply so that you procure SAN storage that's fit for purpose from a capacity and throughput perspective, so you won't have to add nodes to your storage cluster prematurely, too early in its life cycle.

Being involved as a channel SE, I go through this process day in and day out with my customers, and the hardest part that usually gets in the way of accurate sizing (apart from the impatient customer and the even more impatient sales guy who prefers to use the art of guessing to quickly come up with a “SAN config that he can quote quickly for the customer”) is the lack of readily available storage statistics for the existing environment, or the inability to effectively gather those stats from a distributed IT environment without laborious & time-consuming tasks involving a lot of spreadsheet work (I can imagine all NetApp SEs nodding their heads in agreement right now 🙂 ). So far, a typical NetApp / channel partner SE proposing a NetApp SAN solution for a customer, who’s supposed to be doing it properly, would have had to ask the customer to provide storage stats for their entire estate (which almost always do not exist), or deploy a rudimentary monitoring and statistics-gathering tool like the virtualisation data collector (you need to be a NetApp partner with appropriate access to click on the link) and do a lot of manual spreadsheet work to manipulate the raw data gathered before it can be fed into SPM (NetApp’s official sizing tool) to produce a valid SAN storage config for the given requirement. Having done this repeatedly myself for every single storage solution for my customers, I can say it has been painful and often very time consuming.

Like me, if you are involved in a lot of NetApp SAN storage sizing for various customers, you’d be really glad to know that a new data collection tool has been made available, called NetApp Lanamark (HPAS). NetApp Lanamark is a lightweight, agentless data collector that works in a fundamentally similar way to VMware Capacity Planner (and its data collector). NetApp Lanamark lets you deploy a single data collector on to a Windows server (it could be a VM) to monitor and continuously collect resource utilisation statistics (mainly storage), which are uploaded to a central online repository. Once sufficient data has been collected, you can run an assessment (unlike a VMware Capacity Planner assessment, which is a little complicated to set up and configure initially, all of that is done automatically for you, so all you really need to do is create groups of servers if required) and export the results as a JSON file that can be imported directly into SPM (NetApp’s formal sizing tool), and voila… you have an accurately sized SAN configuration that can be sent to the customer. It’s that simple.

Given below are some key points to note

  • It supports collecting performance stats from a variety of host operating systems (collecting data from non-host systems such as storage arrays is not supported):
    • Most Windows versions
    • The following Linux versions (CentOS 3.x, 4.x, 5.x, 6.x, Debian 3.1, 4.0, 5.0, Fedora 4 – 10, Novell SUSE Linux Enterprise 9.x, 10.x, openSUSE 10.x, 11.x, Oracle Linux 4, 5, Red Hat Enterprise Linux 3.x, 4.x, 5.x, 6.x, Ubuntu 6 and up)
    • Though it’s not officially listed, when I trialled this in my lab it successfully connected to my ESXi (5.5) servers and collected data from them too. Note, however, that when you later run the assessment, all the stats for the ESXi hosts appear to be omitted
  • Each collector can gather data from up to 2,500 servers / systems
  • Stats collected are uploaded to a data warehouse server in the US
  • Only available to NetApp employees or Star and Platinum partner SEs (everyone else will have to ask their NetApp SE to do this on their behalf)
  • If you are a NetApp SE or a partner SE, more details can be found in the recorded presentation here

I’ve attempted a simple assessment using my home lab, and given below are the typical setup steps involved.

  • Go to https://lanamark.netapp.com and register a new opportunity (you need a NetApp NOW account to log in. NetApp SEs, distribution SEs, and Platinum or Star partner SEs have rights to use this; if you are at a different partner level, you need to ask the aligned NetApp SE to do this on your behalf. The NetApp CDM or the sales rep involved in the opportunity from NetApp also needs to authorise the assessment; anyone with a @netapp.com email address can usually do this)

Login screen

  • Once this is done, the designated customer contact receives an email with a link to download the data collector installer, with all the customer-specific information hard-coded (no need to register collector IDs with the online repository, unlike VMware Capacity Planner for example). If you prefer, you can also do this yourself as the NetApp / partner SE.
  • Once the collector is installed, you configure it with the list of servers and the appropriate credentials for each server being monitored, as illustrated in the screenshot below.

Collector Screen

  • Once the systems have been fed into the collector (in the form of an IP range, a CSV file, etc.) and the relevant credentials have been associated, it will automatically inventory each server and start collecting statistics, which are uploaded once a day by default to the NetApp Lanamark Central online repository which you, as the SE, can view via https://lanamark.netapp.com

Landing Page
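Feeding systems in as an IP range essentially means expanding the range into individual host entries, each paired with the credentials the collector will use to inventory it. As a rough, hypothetical illustration of that idea (the collector’s actual input format isn’t publicly documented, so the structure below is my own invention), here is a minimal Python sketch:

```python
import ipaddress

def expand_range(start_ip, end_ip):
    """Expand an inclusive IPv4 range into a list of individual host addresses."""
    start = int(ipaddress.IPv4Address(start_ip))
    end = int(ipaddress.IPv4Address(end_ip))
    return [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]

def build_targets(start_ip, end_ip, username, password):
    """Pair each host in the range with the credentials used to monitor it.
    The dict layout is purely illustrative, not Lanamark's real format."""
    return [{"host": h, "username": username, "password": password}
            for h in expand_range(start_ip, end_ip)]

# Example: a three-host range with one shared monitoring credential
targets = build_targets("192.168.1.10", "192.168.1.12", "monitor", "secret")
for t in targets:
    print(t["host"])  # prints each host in the range, one per line
```

A CSV import would follow the same pattern, just with the hosts (and possibly per-host credentials) read from the file instead of generated from a range.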

 

  • Double click on the assessment / engagement to view the details.

Screen 1

  • The Data Feeds tab shows all the hosts being monitored and their monitoring status

Data feeds tool

  • The Assessment tab shows a summary of all the data collected, ready to be imported into SPM (NetApp System Performance Modeller, the official NetApp sizing tool for all FAS and E-Series arrays).

Assessment screen

Assessment screen 2

 

  • In the top right-hand corner of the assessment window, there is an option to generate a summary report, which produces a docx (Microsoft Word) document with all the statistics pre-populated. This is quite handy if you are a vendor / partner SE who wants to present the findings formally in a proposal, etc.

Summary Report

  • The SPM export button creates a .JSON file (sample shown below) which you can import directly into SPM during sizing (no more laborious spreadsheet jobs 🙂 )

JSON
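NetApp hasn’t published the schema of the SPM export, but conceptually it is a per-host workload profile that the sizing tool aggregates. As a hedged sketch of what consuming such a file might look like (the field names below are invented for illustration and are not NetApp’s actual schema), you could load it and total up the figures that drive a sizing exercise:

```python
import json

# Hypothetical per-host workload profile; the real SPM export schema is
# not public, so all field names here are invented for illustration only.
export = json.loads("""
{
  "hosts": [
    {"name": "sql01",  "capacity_gb": 800,  "peak_iops": 3500, "read_pct": 70},
    {"name": "file01", "capacity_gb": 2000, "peak_iops": 900,  "read_pct": 60}
  ]
}
""")

# Aggregate the raw per-host figures into the totals a sizing tool needs
total_gb = sum(h["capacity_gb"] for h in export["hosts"])
total_iops = sum(h["peak_iops"] for h in export["hosts"])
print(f"Aggregate: {total_gb} GB, {total_iops} peak IOPS")
# Aggregate: 2800 GB, 4400 peak IOPS
```

In practice SPM does all of this for you on import; the point is simply that the export replaces the old hand-built spreadsheet with a structured, machine-readable file.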

 

SPM 1

SPM 2

That’s it…. It’s that simple…..!!

It would be good to hear from others who have already been using it out in the field to size new NetApp systems (comments are welcome..!)

Thanks to Bobby Hyam (NetApp SE) & Craig Menzies (EMEA Partner Manager) from NetApp for providing the links to the presentation & info…!!

Cheers

Chan