Tiering Isn’t Dead

With the advent of flash came a whole host of new storage vendors, many claiming that a from-the-ground-up architecture was required to get the full benefit of flash. For many of the new players this meant going all-flash, which allowed a strong "disk is dead" marketing message. Going all-flash also allowed a system to be recognised in the Gartner all-flash magic quadrant, one of whose qualifying criteria is that no spinning disk can be added to the system. The net effect was that the vendors who went down the pure-flash route had no need for any kind of tiering technology, because there was only flash and that's all you need, right?


But in technology what goes around comes around, and during the course of Storage Field Day 9 we saw several vendors talking about tiering. This was not the traditional tiering we are used to, however, but tiering using new methods and storage media.



Tiering for performance


Potentially the most unique product offering tiering that we heard about during SFD9 was from Plexistor. Plexistor is a start-up in the relatively early stages of business, currently with 18 employees. The company produces what it describes as a software-defined memory solution, capable of providing high-capacity storage at near-memory speed. Plexistor's solution is a POSIX file system that works with most of the major Linux varieties. The advantage of a file-system-based approach is that it hides the complexity from the end user and removes the need for additional drivers. This varies from other solutions you may have heard of that use memory as a caching tier: Plexistor uses memory as part of the storage space.


The following diagram depicts the Plexistor architecture; the key point of interest for this discussion is the auto-tiering between memory and a persistent storage medium. A simple example would be DRAM as a fast memory tier coupled with SSDs. However, the system also supports other technologies such as NVDIMMs, and Plexistor is working closely with Intel to support 3D XPoint. The tiering technology aims to keep as much of the hot data in memory as possible.


Plexistor architecture

The target market for Plexistor is in-memory applications such as MongoDB, Cassandra, HBase, RocksDB and Spark.
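Plexistor has not published the details of its tiering algorithm, but the general idea of keeping hot data in a fast memory tier and demoting cold data to persistent storage can be sketched with a toy LRU-style policy. This is purely illustrative: the class and tier names below are my own invention, not Plexistor's API.

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy model of memory/SSD tiering: hot pages stay in a
    fixed-size DRAM tier, cold pages are demoted to the SSD tier.
    Hypothetical sketch, not Plexistor's actual implementation."""

    def __init__(self, dram_capacity):
        self.dram_capacity = dram_capacity
        self.dram = OrderedDict()   # insertion order tracks recency
        self.ssd = {}

    def write(self, key, value):
        self._promote(key, value)

    def read(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)      # refresh recency on a hot hit
            return self.dram[key]
        value = self.ssd.pop(key)           # cold hit: pull back from SSD
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        while len(self.dram) > self.dram_capacity:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.ssd[cold_key] = cold_val   # demote least-recently-used page
```

The point of the sketch is the asymmetry: every access lands in the fast tier, and only the least-recently-used data spills down to the slower, larger medium.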



Tiering for long term storage


At the opposite end of the spectrum, Cohesity is looking to store not your primary data but your secondary data: things like backups and archives. Cohesity uses the analogy of a data iceberg to describe how, in most companies, a small amount of primary data is visible but the majority of an organisation's data is hidden away and requires less performance. Cohesity offers a scale-out appliance to deal with this secondary data, which it claims can be scaled infinitely.


data iceberg


Before Storage Field Day started, Cohesity was one of the vendors I was most looking forward to seeing, since secondary data seems like an unloved sector of the industry and not many vendors are talking about it. But it makes total sense: why put all your data on expensive storage when you are only actively using a small proportion of it?


Even a secondary storage system can quickly become full, and a proportion of this data may never be touched again. Cohesity deals with this through a cloud storage tier that integrates with all the major public cloud service providers. Cohesity can utilise the cloud in two main ways: Cloud Archival and Cloud Tiering. Cloud Archival moves older data to the cloud for longer-term storage, while Cloud Tiering takes a more dynamic approach, using an algorithm to automatically tier data to the cloud and back as necessary.
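Cohesity has not disclosed the specifics of its tiering algorithm, but the kind of age-based policy Cloud Tiering implies might look like this minimal sketch. The threshold value and all names here are assumptions for illustration, not Cohesity's actual implementation.

```python
import time

# Hypothetical policy: blocks untouched for longer than the threshold
# are tiered out to cloud object storage; the 30-day value is an
# assumed example, not a real Cohesity setting.
CLOUD_AGE_THRESHOLD = 60 * 60 * 24 * 30

def tier_blocks(blocks, now=None):
    """Split blocks into those kept locally and those sent to the
    cloud tier. `blocks` maps block-id -> last-access timestamp."""
    now = time.time() if now is None else now
    local, cloud = [], []
    for block_id, last_access in blocks.items():
        if now - last_access > CLOUD_AGE_THRESHOLD:
            cloud.append(block_id)   # cold: move to the cloud tier
        else:
            local.append(block_id)   # hot: keep on the appliance
    return local, cloud
```

A real system would also have to handle the return journey — promoting a cloud-resident block back on-premises when it is read again — which is what distinguishes dynamic tiering from one-way archival.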



Tiering polar opposites


This post described two examples, at opposite ends of the spectrum, of where tiering is likely to become prevalent again: the cloud as a tier for cold, long-term storage, and memory as a tier for hot data. Tiering is very much alive and well because every storage medium is a compromise between opposing characteristics such as cost, speed and capacity. Most systems will need a mix of characteristics such as capacity and speed, and hence a mix of storage media with tiering to allow data movement between them.


Liked this one? Connect with me on LinkedIn and Twitter to hear more

Disclosure to the disclosure: this part is boring, you probably don't want to read it. Oh, really, you're still here? Go on then: my flights, accommodation, food etc. were paid for by Tech Field Day, but I was under no obligation to write about the events. My time at the event was not paid for.


Further reading

Some of my SFD colleagues covered the official announcement from Cohesity on their cloud strategy today (12/4/16)







SFD9 Video – What is Plexistor?


SFD9 Video – Cohesity cloud integration


Matt Lieb – Plexistor – More hope on the horizon for Large In-Memory databases


Alex Galbraith – You had me at Tiered Non-Volatile Memory!


Justin Warren – SFD9 Prep: Plexistor


Justin Warren – SFD9 Prep: Cohesity


