Introduction To VMware App Defense – Application Security as a Service

Yesterday at VMworld 2017 US, VMware announced the launch of AppDefense. This post is a quick introduction, looking a little more closely at what it is, along with my initial thoughts on it.

AppDefense – What is it?

AppDefense is a solution that uses the hypervisor to introspect the behaviour of applications inside guest VMs. It works by analysing the application (within the guest VM) to establish its normal operational behaviour (intended state) and, once that is verified to be accurate, constantly measuring the future state of those applications against the intended state and least-privilege posture, controlling / remediating behaviour should non-conformance be detected. The aim is to increase application security by detecting infiltrations at the application layer and automatically preventing propagation of those infiltrations until remediation.

AppDefense is a cloud-hosted, managed solution (SaaS) from VMware, hosted on AWS (https://appdefense.vmware.com) and managed by VMware, rather than an on-premises monitoring & management solution. It is a key part of the SaaS solution stack VMware also announced yesterday, VMware Cloud Services. (A separate detailed post on VMware Cloud Services will follow.)

If you know VMware NSX, you know that NSX provides a least-privilege execution environment to prevent attacks, or the propagation of security attacks, by enforcing least privilege at the network level (Micro-Segmentation). AppDefense adds an additional layer by enforcing the same least-privilege model at the actual application layer within the VM’s guest OS.

AppDefense – How does it work?

The high-level stages employed by AppDefense in identifying and providing application security are as follows (based on what I understand as of now).

  1. Application baselining (Intended State):  Automatically identifying the normal behaviour of an application and producing a baseline for the application based on its “normal” behavioural patterns (intended state). This intended state can come from analysing normal, un-infected application behaviour within the guest, or even from external application state definition platforms such as Puppet, etc. Pretty cool, I think!
  2. Detection:  It then constantly monitors the application behaviour against this baseline to spot any deviations that could amount to potentially malicious behaviour. If any are detected, AppDefense will either block those alien application activities or automatically isolate the application using hypervisor constructs, in a similar manner to how NSX & 3rd party AV tools auto-isolate using guest introspection and heuristic analysis. AppDefense uses an in-memory process anomaly detector rather than taking a hash of the VM file set (which is often how 3rd party security vendors work), which is going to be a unique selling point in comparison to typical AV tools. An example demo shown by VMware involved an application server that ordinarily talks to a DB server using SQL Server ODBC connectivity: once protected by AppDefense, any other form of direct connectivity from that app server to the DB server (say, a PowerShell query or a script running on the app server) is automatically blocked, even if it happens to be on the same port that is already permitted. That was pretty cool if you ask me.
  3. Automated remediation:  Similar to the above, it can then take remediation action to automatically prevent propagation.
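The detect-and-block flow above can be sketched as a simple whitelist check. This is a minimal, hypothetical illustration in Python (not VMware’s actual implementation): connections are compared against a learned baseline of (process, destination, port) tuples, so even a permitted port is blocked when an unexpected process uses it, just like the PowerShell example in the demo.

```python
# Hypothetical sketch of baseline-based detection (not VMware's actual code).
# The "intended state" is a set of allowed (process, destination, port) tuples.

baseline = {
    ("app_server.exe", "db01", 1433),   # ODBC connection learned during baselining
}

def check_connection(process, destination, port):
    """Return 'allow' if the connection matches the intended state, else 'block'."""
    if (process, destination, port) in baseline:
        return "allow"
    # Same port, different process: still a deviation from the intended state.
    return "block"

print(check_connection("app_server.exe", "db01", 1433))  # allow
print(check_connection("powershell.exe", "db01", 1433))  # block: unexpected process
```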

 

AppDefense Architecture

AppDefense, despite being a SaaS application, will work with cloud (VMware Cloud on AWS) as well as on-premises environments. An on-premises proxy appliance will act as the broker. Future roadmap items include extending capabilities to non-vSphere as well as bare-metal workloads on-premises. There will also be an agent deployed into the VMs (guest agent) that runs inside a secure memory space to ensure its authenticity.

For the on-premises version, vCenter is the only mandatory pre-requisite, whereas NSX Manager and vRA are optional and only required for remediation and provisioning. (There are no current plans for the Security Manager to be available on-site, yet.)

AppDefense Integration with 3rd parties*

  • IBM Security:
    • AppDefense plans to integrate with IBM’s QRadar security analytics platform, enabling security teams to understand and respond to advanced and insider threats that cut across both on-premises and cloud environments like IBM Cloud. IBM Security and VMware will collaborate to build this integrated offering as an app delivered via the IBM Security App Exchange, providing mutual customers with greater visibility and control across virtualized workloads without having to switch between disparate security tools, helping organizations secure their critical data and remain compliant.
  • RSA:
    • RSA NetWitness Suite will be interoperable with AppDefense, leveraging it for deeper application context within an enterprise’s virtual datacenter, response automation/orchestration, and visibility into application attacks. RSA NetWitness Endpoint will be interoperable with AppDefense to inspect unique processes for suspicious behaviors and enable either a Security Analyst or AppDefense Administrators to block malicious behaviors before they can impact the broader datacenter.
  • Carbon Black:
    • AppDefense will leverage Carbon Black reputation feeds to help secure virtual environments. Using Carbon Black’s reputation classification, security teams can triage alerts faster by automatically determining which behaviors require additional verification and which behaviors can be pre-approved. Reputation data will also allow for auto-updates to the manifest when upgrading software to drastically reduce the number of false positives that can be common in whitelisting.
  • SecureWorks:
    • SecureWorks is developing a new solution that leverages AppDefense. The new solution will be part of the SecureWorks Cloud Guardian™ portfolio and will deliver security detection, validation, and response capabilities across a client’s virtual environment. This solution will leverage SecureWorks’ global Threat Intelligence, and will enable organizations to hand off the challenge of developing, tuning and enforcing the security policies that protect their virtual environments to a team of experts with nearly two decades of experience in managed services.
  • Puppet:
    • Puppet Enterprise is integrated with AppDefense, providing visibility and insight into the desired configuration of VMs, assisting in distinguishing between authorized changes and malicious behavior

*Credit: VMware AppDefense release news

Having spoken to the product managers, my guess is that these partnerships will grow as the product evolves, to include many more security vendors.

 

Comparison to competition

In comparison to other 3rd party AV tools whose heuristic analysis does similar anomaly detection within the guest, VMware AppDefense is supposed to have a number of unique selling points: the ability to understand distributed application behaviours better than the competition to reduce false positives, the ability to not just detect but also orchestrate remediation (through the use of vRA and NSX), as well as a near-future roadmap to use machine learning to enhance anomaly detection within the guest, which is pretty cool.

Understanding the “Intended state”

Intended state can come from information collected from various data center state definition tools such as vCenter, Puppet, vRealize Automation & other configuration management solutions, as well as from developer workflows such as Ansible, Jenkins, etc.
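As a rough illustration of the idea, declared state from several of these sources could be merged into one intended-state manifest and diffed against what is actually observed in the guest. The source names and fields below are entirely hypothetical, just to show the principle:

```python
# Hypothetical sketch: merge intended state from several definition sources
# and diff it against observed guest behaviour. Names and fields are illustrative.

puppet_manifest = {"services": {"httpd", "sshd"}}
vra_blueprint = {"services": {"httpd", "collectd"}}

def merge_intended_state(*sources):
    """Union the declared services from each source into one intended state."""
    intended = set()
    for source in sources:
        intended |= source.get("services", set())
    return intended

def find_deviations(intended, observed):
    """Anything observed but not declared anywhere is a deviation."""
    return observed - intended

intended = merge_intended_state(puppet_manifest, vra_blueprint)
print(find_deviations(intended, {"httpd", "sshd", "cryptominer"}))  # {'cryptominer'}
```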

The AppDefense agent (which runs in the guest OS) runs in a protected memory space within the guest (provided via the hypervisor) to store the security controls in a tamper-proof manner (secure runtime). Any attempts to intrude into this space are detected and actioned upon automatically. While this is secure, it’s not guaranteed at the HW layer (think HyTrust, which uses Intel CPU capabilities such as TXT to achieve a HW root of trust), though I suspect this will inevitably come down the line.

 

AppDefense – My (initial) Thoughts

I like the sound of it and its capabilities based on what I’ve seen today. Obviously it is a SaaS-based application, and some people may not like that for monitoring and enforcing their security, especially in an on-premises environment they’d like to monitor and manage security on; but if you can get over that mindset, this could potentially be quite good. And if you use VMware Cloud Services, especially VMware Cloud on AWS for example, this would have direct integration with that platform to enforce application-level security, which could be quite handy. As with all products, however, the devil is normally in the detail, and this version has only just been released, so the details available are quite scarce for forming a detailed and accurate opinion. I will be aiming to test this out in detail in the near future, both with VMware Cloud on AWS as well as the on-premises VMware SDDC stack, and provide some detailed insights. Furthermore, it is a version 1.0 product and, realistically, most production customers will likely wait until it is battle-hardened and becomes richer in capabilities (such as hardware root of trust) before using it for key production workloads.

Until then, however, it is great to see VMware focusing more on security in general and building in native, differentiated security capabilities at the application layer, which is just as important as security at the infrastructure layer. I’m sure the product will evolve to incorporate things such as AI & machine learning to provide more sophisticated preventive measures in the future. The ability to take static application / VM state definitions from external platforms like Puppet is really useful, and I suspect that is probably where this will be popular with customers, at least initially.

Slide credits go to VMware!

Cheers

Chan

VMware vSAN 6.6 Release – Whats New

VMware has just announced the general availability of the latest version of vSAN, the backbone of their native Hyper-Converged Infrastructure offering with vSphere. vSAN has had a number of significant upgrades since its very first launch back in 2014 as version 5.5 (with vSphere 5.5), and each upgrade has added some very cool, innovative features which have driven customer adoption of vSAN significantly. The latest version, vSAN 6.6, is no different, and by far it appears to have the highest number of new features announced in an upgrade release.

Given below is a simple list of some of the key features of vSAN 6.6, the 6th generation of the product.

Additional native security features

  • HW independent data at rest encryption (Software Defined Encryption)
    • Software Defined AES 256 encryption
    • Supported on all flash and hybrid
    • Data written already encrypted
    • Works with 3rd party KMS systems
  • Built-in compliance with dual factor authentication (RSA secure ID and Smart-card authentication)

Stretched clusters with local failure protection

With vSAN 6.6, if a site fails, the surviving site still has local host and disk group protection (not the case with previous versions).

  • RAID 1 over RAID 1/5/6 is supported on All Flash vSAN only.
  • RAID 1 over RAID 1 is supported on Hybrid vSAN only

Proactive cloud analytics

This sounds kind of similar to Nimble’s cloud analytics platform, which is popular with customers. Proactive cloud analytics uses vSAN support data collected globally to provide analytics through the vSAN health UI, along with performance optimisation advice for resolving performance issues.

Intelligent & Simpler operations

Simpler setup and post set up operations are achieved through a number of new features and capabilities. Some of the key features include,

  • Automated setup with 1 click installer & lifecycle management
  • Automated configuration & compliance checks for vSAN cluster (this was somewhat already available through vSAN health UI). Additions include,
    • Networking & cluster configurations assistance
    • New health checks for encryption, networking, iSCSI, re-sync operations
  • Automated controller firmware & driver upgrades
    • This automates the download and install of VMware supported drivers for various hard drives and RAID controllers (for the entire cluster) which is significantly important.
    • I think this is pretty key, as vSAN performance issues due to firmware mismatches (especially on Dell server HW) have been an issue for a while now.
  • Proactive data evacuation from failing drives
  • Rapid recovery with smart, efficient rebuild
  • Expanded Automation through vSAN SDK and PowerCLI

High availability

vSAN 6.6 now includes a highly available control plane, which means resilient management is now possible independent of vCenter.

Other key features

  • Increased performance
    • Optimized for latest flash technologies involving 1.6TB flash (Intel Optane drives anyone??)
    • Optimize performance with actionable insights
    • 30% faster sequential write performance
    • Optimized checksum and dedupe for flash
  • Certified file service and data protection (through 3rd party partners)
  • Native vRealize Operations integrations
  • Simple networking with Unicast
  • Real time support notification and recommendations
  • Simple vCenter install and upgrade
  • Support for Photon 1.1
  • Expanded caching tier choices

There you go. Another key set of features added to vSAN with the 6.6 upgrade, which is great to see. If you are a VMware vSphere customer looking at a storage refresh for your vSphere cluster, or with a new vSphere / Photon / VIC requirement, it would be silly not to look into vSAN as opposed to legacy hardware SAN technologies from a legacy vendor (unless you have non-VMware requirements in the data center).

If you have any questions or thoughts, please feel free to comment / reach out

Additional details of what’s new with VMware vSAN 6.6 are available at https://blogs.vmware.com/virtualblocks/2017/04/11/whats-new-vmware-vsan-6-6/

Cheers

Chan

 

Impact from Public Cloud on the storage industry – An insight from SNIA at #SFD12

As part of the recently concluded Storage Field Day 12 (#SFD12), we traveled to one of the Intel campuses in San Jose to listen to the Intel storage software team on the future of storage from an Intel perspective (you can read all about it here). While this was great, just before that session we were treated to another similarly interesting session by SNIA, the Storage Networking Industry Association, and I wanted to brief everyone on what I learnt during that session, which I thought was very relevant to everyone with a vested interest in the field of IT today.

The presenters were Michael Oros, Executive Director at SNIA along with Mark Carlson who co-chairs the SNIA technical council.

Introduction to SNIA

SNIA is a non-profit organisation that was formed 20 years ago to deal with the inter-operability challenges of network storage from various tech vendors. Today there are over 160 active member organisations (tech vendors) who work together behind closed doors to set standards and improve inter-operability of their often competing tech solutions out in the real world. The alphabetical list of all SNIA members is available here, and it includes key network and storage vendors such as Cisco, Broadcom, Brocade, Dell, Hitachi, HPe, IBM, Intel, Microsoft, NetApp, Samsung & VMware. Effectively, anyone using most of today’s enterprise datacenter technologies has likely benefited from SNIA-defined industry standards and inter-operability.

Some of the existing storage-related initiatives SNIA is working on include the following.

 

 

Hyperscaler (Public Cloud) Storage Platforms

According to SNIA, public cloud platforms, AKA hyperscalers, such as AWS, Azure, Facebook, Google, Alibaba, etc. are now starting to make an impact on how disk drives are designed and manufactured, given their large consumption of storage drives and therefore vast buying power. In order to understand the impact of this on the rest of the storage industry, let’s first clarify a few basic points about these hyperscaler cloud platforms (for those who didn’t know).

  • Public Cloud providers DO NOT buy enterprise hardware components like the average enterprise customer
    • They DO NOT buy enterprise storage systems (Sales people please read “no EMC, no NetApp, No HPe 3par…etc.”)
    • They DO NOT buy enterprise networking gear (Sales people please read “no Cisco switches, no Brocade switches, no HPe switches…etc.”)
    • They DO NOT buy enterprise servers from server manufacturers (Sales people please read “no HPe/Dell/Cisco UCS servers…etc.”)
  • They build most things in-house
    • Often this include servers, network switches…etc
  • They DO bulk-buy disk drives direct from the disk manufacturers & use home-grown Software Defined Storage techniques to provision that storage.

Now if you think about it, large enterprise storage vendors like Dell and NetApp, who normally bulk-buy disk drives from manufacturers such as Samsung, Hitachi, Seagate, etc., would have had a certain level of influence over how those drives are made, given the economies of scale (bulk purchasing power) they had. Now, however, public cloud providers, who also bulk-buy, often in quantities far bigger than those storage vendors’, have become hugely influential over how these drives are made, to the level that their influence exceeds that of the legacy storage vendors. This influence is growing such that public cloud providers now have direct input into the initial design of said components (i.e. disk drives, etc.) and how they are manufactured, simply due to their enormous bulk purchasing power as well as their ability to test drive performance at a scale that was not previously possible even for the drive manufacturers, given their global data center footprint.

 

The focus these providers have on Software Defined Storage technologies, to aggregate all the disparate disk drives found in their data center servers, is inevitably leading to various architectural changes in how disk drives need to be made going forward. For example, most legacy enterprise storage arrays rely on old RAID technology to rebuild data during drive failures, and there are various background tasks implemented in disk drive firmware, such as ECC & IO re-try operations during failures, which add to the overall latency of the drive. With modern SDS technologies (in use within public cloud platforms as well as some newer enterprise SDS vendors’ tech), however, multiple copies of data are held on multiple drives automatically as part of the Software Defined architecture (i.e. erasure coding), which means those specific background tasks on disk drives, such as ECC and re-try mechanisms, are no longer required.
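To see why the software layer can take over redundancy from the drive firmware, here is a minimal, illustrative sketch of the erasure-coding idea using simple XOR parity (real SDS platforms use schemes such as Reed-Solomon; this only shows the principle):

```python
# Minimal illustration of software-level redundancy via XOR parity.
# Real SDS platforms use schemes like Reed-Solomon; XOR shows the principle.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(data)            # parity block on a fourth drive

# Drive holding data[1] fails: rebuild its block from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

Because the software layer can always rebuild a lost block this way, the individual drive no longer needs its own in-firmware retry and recovery machinery on the latency-critical path.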

For example, SNIA highlighted Eric Brewer, VP of Infrastructure at Google, who talked about the key metrics for a modern disk drive being:

  • IOPS
  • Capacity
  • Lower tail latency (the long tail of latencies on a drive, arguably caused by various background tasks, typically produces a 2-10x slower response time from a disk in a RAID group, which causes disk & SSD based RAID stripes to experience at least one slow drive 1.5%-2.2% of the time)
  • Security
  • Lower TCO
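The stripe-level tail-latency numbers above follow directly from the per-drive tail probability: if each drive is independently “slow” for some small fraction of requests, a stripe that has to wait for all of its drives hits at least one slow drive with probability 1 - (1 - p)^n. A quick check with illustrative values (my own, chosen to land near the quoted figures, not SNIA’s):

```python
# Probability a RAID stripe of n drives sees at least one slow drive,
# given each drive is independently "slow" with per-request probability p.
# The p values below are illustrative, picked to land near the quoted
# 1.5%-2.2% stripe-level figures for a 10-drive stripe.

def stripe_slow_probability(p, n):
    return 1 - (1 - p) ** n

for p in (0.0015, 0.0022):
    print(f"p={p}: stripe of 10 hits a slow drive "
          f"{stripe_slow_probability(p, 10):.2%} of the time")
```

This is also why the wider the stripe, the more the tail of a single drive dominates overall latency, which motivates the host-controlled background operations discussed further down.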

So, in a nutshell, public cloud platform providers are now mandating various storage standards that disk drive manufacturers have to factor into their drive designs, such that drives are engineered from the ground up to work with the Software Defined architecture in use on these cloud provider platforms. What this means is that most native disk firmware operations are made redundant; instead, the drive manufacturer provides APIs through which the cloud platform provider’s own software logic controls those background operations, based on their software defined storage architecture.

Some of the key results of this approach includes following architectural designs for Public Cloud storage drives,

  • Higher layer software handles data availability and is resilient to component failure so the drive firmware itself doesn’t have to.
    • Reduces latency
  • Custom Data Center monitoring (telemetry), and management (configuration) software monitors the hardware and software health of the storage infrastructure so the drive firmware doesn’t have to
    • The Data Center monitoring software may detect these slow drives and mark them as failed (ref Microsoft Azure) to eliminate the latency issue.
    • The Software Defined Storage then automatically finds new places to replicate the data and protection information that was on that drive
  • Primary model has been Direct Attached Storage (DAS) with CPU (memory, I/O) sized to the servicing needs of however many drives of what type can fit in a rack’s tray (or two) – See the OCP Honey Badger
  • With the advent of higher speed interfaces (PCI NVMe) SSDs are moving off of the motherboard onto an extended PCIe bus shared with multiple hosts and JBOF enclosure trays – See the OCP Lightning proposal
  • Remove the drive’s ability to schedule advanced background operations such as garbage collection, scrubbing, remapping, cache flushes, continuous self-tests, etc. on its own, and allow the host to affect the scheduling of these latency-increasing drive maintenance operations when it sees fit – effectively remove the drive control plane and move it up to the control of the Public Cloud platform (SAS = SBC-4 Background Operation Control, SATA = ACS-4 Advanced Background Operations feature set, NVMe = available through NVMe Sets)
    • Reduces unpredictable latency fluctuations & tail latency

The result of all this is that public cloud platform providers such as Microsoft, Google and Amazon are now also involved in setting industry standards through organisations such as SNIA, a task previously done only by hardware manufacturers. An example is the DePop standard, now approved at T13, which essentially defines a standard where the storage host, rather than the disk firmware, shrinks the usable size of the drive by removing the poor-performing (slow) physical elements, such as drive sectors, from the LBA address space. The most interesting part is that drive manufacturers are now required to replace drives once enough usable space has shrunk to match the capacity of a full drive, without necessarily getting the old drive back (i.e. public cloud providers only pay for usable capacity, and any unusable capacity is replenished with new drives), which is a totally different operational and commercial model to that of legacy storage vendors who consume drives from drive manufacturers.
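A rough sketch of the commercial mechanics described above (my own illustration of the model, not the T13 wording): the host depopulates slow regions, the lost capacity accumulates across the fleet, and once the total reaches a full drive’s worth, the manufacturer owes a replacement drive.

```python
# Illustrative accounting for the DePop-style model described above:
# the host removes slow sectors from the LBA space; once the cumulative
# shrinkage equals a full drive, the vendor owes a replacement drive.
# Capacity figure is an assumed example value.

DRIVE_CAPACITY_GB = 4000

class FleetCapacityLedger:
    def __init__(self):
        self.shrunk_gb = 0
        self.replacement_drives_due = 0

    def depopulate(self, lost_gb):
        """Record capacity removed from a drive's LBA address space."""
        self.shrunk_gb += lost_gb
        while self.shrunk_gb >= DRIVE_CAPACITY_GB:
            self.shrunk_gb -= DRIVE_CAPACITY_GB
            self.replacement_drives_due += 1

ledger = FleetCapacityLedger()
for _ in range(5):
    ledger.depopulate(900)        # 5 drives each lose 900 GB of slow sectors
print(ledger.replacement_drives_due)  # 1: 4500 GB shrunk covers one 4 TB drive
```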

Another concept pioneered by the public cloud providers is called Streams, which maps lower-level drive blocks to an upper-level object, such as a file residing on them, in a way that stores all the blocks making up the file object contiguously. This simplifies the effect of a TRIM or SCSI UNMAP command (executed when the file is deleted from the file system), which reduces the delete penalty and causes the least damage to SSD drives, extending their durability.
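The benefit is easy to see with a toy allocator. In this hypothetical sketch, each object is written into its own stream so its blocks stay contiguous, and deleting it produces a single TRIM over one contiguous range rather than many scattered ones:

```python
# Toy illustration of stream-style placement: each object's blocks are
# kept contiguous, so deleting it yields one contiguous TRIM/UNMAP range.

class StreamAllocator:
    def __init__(self):
        self.next_block = 0
        self.objects = {}  # name -> (start_block, length)

    def write(self, name, num_blocks):
        """Place all of an object's blocks contiguously (one stream per object)."""
        self.objects[name] = (self.next_block, num_blocks)
        self.next_block += num_blocks

    def delete(self, name):
        """Return the single contiguous (start, length) range to TRIM."""
        return self.objects.pop(name)

alloc = StreamAllocator()
alloc.write("log.txt", 8)
alloc.write("photo.jpg", 32)
print(alloc.delete("photo.jpg"))  # (8, 32): one contiguous range to TRIM
```

With interleaved placement, the same delete would instead touch many small ranges scattered across the flash, which is exactly the delete penalty Streams is designed to avoid.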

 

Future proposals from Public Cloud platforms

SNIA also mentioned future focus areas from these public cloud providers, such as:

  • Hyperscalers (Google, Microsoft Azure, Facebook, Amazon) are trying to get SSD vendors to expose more information about internal organization of the drives
    • Goal to have 200 µs Read response and 99.9999% guarantee for NAND devices
  • I/O Determinism means the host can control order of writes and reads to the device to get predictable response times –
    • Window of reading – deterministic responses
    • Window of writing and background – non-deterministic responses
  • The birth of ODM – Original Design Manufacturers
    • There is a new category of storage vendors, called Original Design Manufacturers (ODMs), who package up best-in-class commodity storage devices into racks according to customer specifications, and who operate at much lower margins.
    • They may leverage hardware/software designs from the Open Compute Project (OCP) or a similar effort in China called Scorpio, now under an organization called the Open Data Center Committee (ODCC), as well as from other available hardware/software designs.
    • SNIA also mentioned a few examples of large global enterprise organisations, such as a large bank, taking the approach of using ODMs to build a custom storage platform, achieving over 50% cost savings compared to traditional enterprise storage.

 

My Thoughts

All of these changes introduced by the public cloud platforms are set to collectively change how the rest of the storage industry fundamentally operates, which I believe will be good for end customers. Public cloud providers are often software vendors who approach everything with a software-centric solution, typically with a highly cost-efficient architecture of the cheapest commodity hardware underpinned by intelligent software. This will likely re-shape the legacy storage industry too, and we are already starting to see the early signs of this today in the sudden growth of enterprise-focused Software Defined Storage vendors and in legacy storage vendors struggling with their storage revenue. All public cloud computing and storage platforms are in continuous evolution for cost efficiency, and each of their innovations in how storage is designed, built & consumed will trickle down to enterprise data centers in some shape or form to increase overall efficiencies, which surely is only a good thing, at least in my view. Smart enterprise storage vendors that are software-focused will take note of such trends and adapt accordingly (i.e. SNIA mentioned that NetApp, for example, implemented the Stream commands on the front end of their arrays to increase the life of SSD media), whereas legacy storage / hardware vendors who are effectively still hugging their tin will likely find future survival more than challenging.

Also, the concept of ODMs really interests me, and I can see the use of ODMs increasing further as more and more customers wake up to the fact that they have been overpaying for their data center storage for the past 10-20 years, due to the historically exclusive capabilities within the storage industry. With more of a focus on a Software Defined approach, there are potentially large cost savings to be had by following the ODM approach, especially if you are an organisation of a size that would benefit from the substantial savings.

I would be glad to hear your thoughts in the comments below.

 

If you are interested in the full SNIA session, a recording of the video stream is available here and I’d recommend you watch it, especially if you are in the storage industry.

 

Thanks

Chan

P.S. Slide credit goes to SNIA and TFD