Discover Storage News Wrap Up

Last week at HPE Discover there was a series of storage announcements for the 3PAR and Nimble platforms, plus more. In this post we will run through them.

Storage Class Memory

3PAR storage class memory Intel Optane

Storage class memory has been announced for both 3PAR and Nimble. Storage class memory (SCM) is intended to bridge the gap between DRAM and the NAND flash found in current-generation SSDs. DRAM is fast but expensive and does not scale; NAND flash does scale but does not deliver the same performance. Intel have developed their Optane product, which sits in the middle ground of performance and cost between DRAM and NAND flash.

The Intel Optane device will be NVMe connected and act as an extension to the onboard controller cache. Other vendors such as Pure have added NVMe drives into the system, but HPE have focused on adding the NVMe storage within the controller, as they believe the key bottleneck in flash systems is at the controller level and not the disk. The additional cache delivers quicker response times and also reduces the load on back-end disks, since the amount of IO served from them is reduced. Phillip Sellers has written an in-depth piece looking at the addition of SCM to Nimble and 3PAR.
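HPE haven't published the exact caching behaviour, but the general idea of an SCM layer sitting between controller DRAM and the back-end disks can be sketched as a two-level read cache. This is a minimal toy model; the `ReadCacheTier` class, the LRU policy and the demotion-into-SCM behaviour are illustrative assumptions, not HPE's implementation:

```python
from collections import OrderedDict

class ReadCacheTier:
    """A simple LRU read cache, used here to model both the DRAM cache
    and an SCM cache extension."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, block):
        if block in self.entries:
            self.entries.move_to_end(block)  # refresh LRU position on a hit
            return True
        return False

    def insert(self, block):
        self.entries[block] = True
        self.entries.move_to_end(block)
        if len(self.entries) > self.capacity:
            evicted, _ = self.entries.popitem(last=False)
            return evicted  # demotion candidate for the next tier down
        return None

def read(block, dram, scm, backend_reads):
    """Serve a read: DRAM first, then SCM, then the back-end disks."""
    if dram.lookup(block):
        return "dram"
    if scm.lookup(block):
        demoted = dram.insert(block)   # promote the hot block into DRAM
        if demoted is not None:
            scm.insert(demoted)        # blocks evicted from DRAM land in SCM
        return "scm"
    backend_reads.append(block)        # full miss: IO hits the back-end disks
    demoted = dram.insert(block)
    if demoted is not None:
        scm.insert(demoted)
    return "backend"
```

The point the model illustrates is the one in the paragraph above: blocks that fall out of the (small) DRAM cache are still served from SCM rather than disk, so the back-end IO count drops.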

InfoSight

There were a number of announcements around the InfoSight platform. First up was 3PAR performance insights, which provides a performance view of not only your storage but also the VMware layer. We've discussed this in detail previously, including how to implement it, but in short it is free for customers and allows deep insights into your VMware environment, such as being able to find the VMs with the highest resource utilisation.

The Nimble storage platform also benefited from a couple of InfoSight enhancements. Nimble Resource Planner allows load modelling. As a storage admin I really like the sound of this, as so often we don't know what the impact of an additional workload will be until it's deployed. The modelling software predicts the impact in terms of both performance and capacity; below is an example of running through this process.

Choosing workload to model

Nimble resource planner 1

Impact on capacity is modelled 

Nimble resource planner 2

Also modelled is the impact on CPU and cache

Nimble resource planner 4

Nimble cross stack analytics brings enhancements to the performance recommendations already delivered by InfoSight. The focus of this new feature is to bring specific AI-driven recommendations to the environment. An example of the type of recommendation is shown in the screenshot below.

Nimble cross stack analytics

The InfoSight product is still owned by the storage segment of HPE, but its expansion into other products continues apace. One of HPE's key objectives is to deliver a cloud-like experience and make things consumable as services. InfoSight is a key part of this strategy, ensuring uptime and delivering proactive recommendations to eliminate issues before they occur.

You can see a summary of the InfoSight announcements in this chalk talk.

Nimble Peer Persistence

Synchronous replication has been added as a feature to the Nimble storage platform. The feature is being called Peer Persistence, since HPE are also introducing support for metro clusters at the same time. I have covered Peer Persistence in depth previously for 3PAR; it essentially allows the creation of a stretched cluster across geographically separate data centres. The Nimble implementation of Peer Persistence will initially support metro clusters for SQL and VMware.

Cloud Volumes Enhancements

Nimble Cloud Volumes were announced a couple of years ago. These cloud volumes offer block storage in the cloud with the maturity and depth of services you would expect to see on-premises. Announced this time round was support for containerised workloads using both Docker and Kubernetes. Also announced was extended regional support, so cloud volumes will be available in the UK and Ireland in 2019.

Cohesity

HPE have also enhanced their relationship with secondary data platform Cohesity. Cohesity is available bundled with ProLiant Gen10 servers through HPE partners. These validated Cohesity and HPE bundles have added support for HPE Apollo and HPE DL380 servers.

You can see a further summary of announcements in this blog post from Calvin Zito.

 

Tiering Isn’t Dead

With the advent of flash came a whole host of new storage vendors, many claiming that a new, from-the-ground-up architecture was required to get the full benefits of flash. For many of the new players this meant going down the all-flash route, which allowed a strong "disk is dead" marketing message. Going all-flash also allowed a system to be recognised in the Gartner all-flash magic quadrant, which has as one of its qualifying criteria that no spinning disk can be added to the system. The net effect was that the vendors who went down a pure flash route had no need for any kind of tiering technology, because there was only flash, and that's all you need, right?

 

But in technology what goes around comes around, and during the course of Storage Field Day 9 we saw several vendors talking about tiering. This was not the traditional tiering we are used to, however, but tiering using new methods and storage mediums.

 

Tiering for performance

 

Potentially the most unique product we heard about during SFD9 offering tiering was Plexistor. Plexistor are a start-up in the relatively early stages of business, currently with 18 employees. The company produces what it describes as a software-defined memory solution capable of providing high-capacity storage at near-memory speed. Plexistor's solution is a POSIX file system that works with most of the major Linux distributions. The advantage of a file-system-based approach is that it hides the complexities from the end user and removes the need for additional drivers. This differs from other solutions you may have heard of that use memory as a caching tier, since Plexistor uses memory as part of the storage space.

 

The following diagram depicts the Plexistor architecture; the key point of interest for this discussion is the auto-tiering between memory and a persistent storage medium. A simple example would be DRAM as a fast memory tier coupled with SSDs. However, the system also supports other technologies such as NVDIMMs, and Plexistor are working closely with Intel to be able to support 3D XPoint. The tiering technology aims to keep as much of the hot data in memory as possible.
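Plexistor haven't detailed their placement algorithm, but "keep as much of the hot data in memory as possible" can be illustrated with a toy frequency-based placement model. The `AutoTier` class, the heat-counting approach and the capacity parameter are my own illustrative assumptions, not Plexistor's design:

```python
import heapq
from collections import Counter

class AutoTier:
    """Toy model of hot/cold data placement: the most frequently accessed
    blocks live in the memory tier, everything else sits on the
    persistent tier (SSD, NVDIMM, etc.)."""
    def __init__(self, memory_capacity):
        self.memory_capacity = memory_capacity
        self.heat = Counter()  # per-block access counts

    def access(self, block):
        self.heat[block] += 1

    def placement(self):
        """Return (memory_blocks, persistent_blocks) by access frequency."""
        hot = heapq.nlargest(self.memory_capacity, self.heat,
                             key=self.heat.get)
        cold = [b for b in self.heat if b not in hot]
        return set(hot), set(cold)
```

A real system would track heat continuously and migrate data in the background rather than recomputing placement on demand, but the sketch captures the basic idea of the memory tier holding the hottest working set.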

 

Plexistor architecture

The target market for Plexistor is in-memory applications such as MongoDB, Cassandra, HBase, RocksDB and Spark.

 

Tiering for long term storage

 

At the opposite end of the spectrum, Cohesity are looking to store not your primary data but your secondary data. Secondary data means things like your backups and archive data. Cohesity use the analogy of a data iceberg: in most companies a small amount of primary data is visible, but the majority of an organisation's data is hidden away and requires less performance. Cohesity offer a scale-out appliance to deal with this secondary data, which they claim can be scaled infinitely.

 

data iceberg

 

Before Storage Field Day started, Cohesity was one of the vendors I was most looking forward to seeing, since secondary data seems like an unloved sector of the industry and not many vendors are talking about it. But it makes total sense: why put all your data on expensive storage when you are only actively using a small proportion of it?

 

Even a secondary storage system could quickly become full, and a proportion of this data may never be touched again. Cohesity have dealt with this by introducing a cloud storage tier with integration into all the major public cloud service providers. Cloud can be utilised by Cohesity in two main ways: Cloud Archival and Cloud Tiering. Cloud Archival moves older data to the cloud for longer-term storage, while Cloud Tiering takes a more dynamic approach, using an algorithm to automatically tier data to the cloud and back as necessary.
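Cohesity haven't published their tiering algorithm, but an age-based policy of the kind described, where cold chunks move out to the cloud and recently accessed chunks stay (or come back) local, might be sketched like this. The `Chunk` structure, the field names and the 60-day default are illustrative assumptions:

```python
from dataclasses import dataclass

SECONDS_PER_DAY = 86_400

@dataclass
class Chunk:
    name: str
    last_access: float  # epoch seconds of the most recent access
    tier: str = "local"

def apply_cloud_tiering(chunks, now, cold_after_days=60):
    """Mark chunks untouched for `cold_after_days` as cloud-resident;
    anything accessed more recently is kept (or recalled) locally."""
    threshold = cold_after_days * SECONDS_PER_DAY
    for chunk in chunks:
        if now - chunk.last_access > threshold:
            chunk.tier = "cloud"   # cold data is tiered out
        else:
            chunk.tier = "local"   # warm data stays, or comes back
    return chunks
```

The "and back" part of the description is what distinguishes tiering from archival in the sketch: a chunk's tier is recomputed from its access time, so touching cold data pulls it back to the local tier on the next pass.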

 

Tiering polar opposites

 

This post described two examples at opposite ends of the spectrum where tiering is likely to become more prevalent again: the cloud as a tiering layer for cold, long-term storage, and memory used as a tier for hot data. It makes sense that tiering is very much alive and well, since all storage mediums are a compromise between opposing characteristics such as cost, speed and capacity. Most systems will need a mix of characteristics, such as capacity and speed, and hence a mix of storage mediums with tiering to allow data movement between them.

 

Liked this one? Connect with me on LinkedIn and Twitter to hear more

Disclosure to the disclosure: this part is boring, you probably don't want to read it. Oh really, you're still here? Go on then: my flights, accommodation, food etc. were paid for by Tech Field Day, but I was under no obligation to write about the events. My time at the event was not paid for.

 

Further reading

Some of my SFD colleagues covered the official announcement from Cohesity on their cloud strategy today (12/4/16)

 

http://tekhead.it/blog/2016/04/cohesity-announces-cloud-integration-services/

 

http://geekfluent.com/2016/04/12/cohesity-announces-cloud-integration-for-their-storage-platform/

 

SFD9 Video – What is Plexistor?

 

SFD9 Video – Cohesity cloud integration

 

Matt Lieb – Plexistor – More hope on the horizon for Large In-Memory databases

 

Alex Galbraith – You had me at Tiered Non-Volatile Memory!

 

Justin Warren – SFD9 Prep: Plexistor

 

Justin Warren – SFD9 Prep: Cohesity