Tiering Isn’t Dead

With the advent of flash came a whole host of new storage vendors, many claiming that a new, ground-up architecture was required to get the full benefit of flash. Many of the new players therefore chose to go down the all-flash route, which allowed a strong "disk is dead" marketing message. Going all-flash also allowed a system to be recognised in the Gartner all-flash Magic Quadrant, one of whose qualifying criteria is that no spinning disk can be added to the system. The net effect was that the vendors who went down a pure flash route had no need for any kind of tiering technology, because there was only flash, and that's all you need, right?

But in technology what goes around comes around, and during the course of Storage Field Day 9 we saw several vendors talking about tiering. This was not the traditional tiering we are used to, however, but tiering using new methods and new storage media.

Tiering for performance

Potentially the most unique product offering tiering that we heard about during SFD9 was Plexistor, a start-up in the relatively early stages of business, currently with 18 employees. The company produces what it describes as a software-defined memory solution, capable of providing high-capacity storage at near-memory speed. Plexistor's solution is a POSIX file system that works with most of the major Linux distributions. The advantage of a file-system-based approach is that it hides the complexity from the end user and removes the need for additional drivers. This approach differs from other solutions you may have heard of that use memory as a caching tier, in that it uses memory as part of the storage space itself.

The following diagram depicts the Plexistor architecture; the key point of interest for this discussion is the auto-tiering between memory and a persistent storage medium. A simple example would be DRAM as a fast memory tier coupled with SSDs. However, the system also supports other technologies such as NVDIMMs, and Plexistor are working closely with Intel to be able to support 3D XPoint. The tiering technology aims to keep as much of the hot data in memory as possible.

Plexistor architecture
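
To make this concrete, here is a minimal sketch in Python of the general hot/cold tiering idea: the fast tier is part of the storage space rather than a cache, and the coldest data is demoted to the persistent tier. This is purely illustrative, assuming an LRU-style policy of my own choosing, and is not Plexistor's actual implementation.

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy auto-tiering store: hot data lives in a small fast tier
    (standing in for DRAM/NVDIMM), cold data is demoted to a larger
    slow tier (standing in for SSD). Illustrative only."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()  # recency-ordered, acts as the hot tier
        self.slow = {}             # persistent tier
        self.fast_capacity = fast_capacity

    def write(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)              # mark as most recently used
        while len(self.fast) > self.fast_capacity:
            cold_key, cold_val = self.fast.popitem(last=False)
            self.slow[cold_key] = cold_val      # demote the coldest block

    def read(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)          # keep hot data hot
            return self.fast[key]
        value = self.slow.pop(key)              # promote on access
        self.write(key, value)
        return value

store = TwoTierStore(fast_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
assert store.read("a") == 1  # "a" had gone cold; reading promotes it back
```

Unlike a cache, each block lives in exactly one tier at a time, which is the distinction Plexistor draw between their approach and memory-caching solutions.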

The target market for Plexistor is in-memory applications such as MongoDB, Cassandra, HBase, RocksDB and Spark.

Tiering for long-term storage

At the opposite end of the spectrum, Cohesity are looking to store not your primary data but your secondary data: things like your backups and archives. Cohesity use the analogy of a data iceberg to describe how, in most companies, a small amount of primary data is visible while the majority of an organisation's data is hidden away and requires less performance. Cohesity offer a scale-out appliance to deal with this secondary data, which they claim can be scaled infinitely.

 

data iceberg

Before Storage Field Day started, Cohesity was one of the vendors I was most looking forward to seeing, since secondary data seems like an unloved sector of the industry that not many vendors are talking about. But it makes total sense: why put all your data on expensive storage when you are only actively using a small proportion of it?

Even a secondary storage system could quickly become full, and a proportion of this data may never be touched again. Cohesity have dealt with this by introducing a cloud storage tier which integrates with all the major public cloud service providers. Cohesity can utilise the cloud in two main ways: Cloud Archival and Cloud Tiering. Cloud Archival moves older data to the cloud for longer-term storage, while Cloud Tiering takes a more dynamic approach, using an algorithm to automatically tier data to the cloud and back as necessary.
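
As a rough sketch of the difference between the two modes, consider the following Python policy pass. This is my own simplification, not Cohesity's algorithm: the age thresholds and the `cloud` client object are hypothetical.

```python
import os
import time

# Hypothetical thresholds: archive after ~6 months idle, tier after ~30 days
ARCHIVE_AGE = 180 * 24 * 3600  # seconds
TIER_AGE = 30 * 24 * 3600      # seconds

def seconds_idle(path):
    """Time since the file was last accessed."""
    return time.time() - os.stat(path).st_atime

def policy_pass(directory, cloud):
    """One pass of a toy secondary-storage tiering policy."""
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        idle = seconds_idle(path)
        if idle > ARCHIVE_AGE:
            cloud.archive(path)   # one-way move for long-term retention
        elif idle > TIER_AGE:
            cloud.tier_out(path)  # two-way: recalled if accessed again
```

The essential distinction is directionality: archival is a one-way move of data that has aged out, while tiering keeps data mobile in both directions based on access.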

Tiering polar opposites

This post described two examples, at opposite ends of the spectrum, where tiering is likely to become more prevalent again: the cloud as a tiering layer for cold long-term storage, and memory as a tier for hot data. It makes sense that tiering is very much alive and well, since every storage medium is a compromise between opposing characteristics such as cost, speed and capacity. Most systems will need a mix of these characteristics, and hence a mix of storage media, with tiering to allow data movement between them.

Liked this one? Connect with me on LinkedIn and Twitter to hear more.

Disclosure to the disclosure: This part is boring, you probably don't want to read it. Oh really, you're still here? Go on then: my flights, accommodation, food etc. were paid for by Tech Field Day, but I was under no obligation to write about the events. My time at the event was not paid for.

Further reading

Some of my SFD colleagues covered the official announcement from Cohesity on their cloud strategy today (12/4/16):

http://tekhead.it/blog/2016/04/cohesity-announces-cloud-integration-services/

http://geekfluent.com/2016/04/12/cohesity-announces-cloud-integration-for-their-storage-platform/

SFD9 Video – What is Plexistor?

SFD9 Video – Cohesity cloud integration

Matt Lieb – Plexistor – More hope on the horizon for Large In-Memory databases

Alex Galbraith – You had me at Tiered Non-Volatile Memory!

Justin Warren – SFD9 Prep: Plexistor

Justin Warren – SFD9 Prep: Cohesity

What is the best storage architecture?

I recently attended Storage Field Day 9, viewing presentations from a range of storage vendors. Many of the vendors focused their presentations on their system architecture and the perceived advantages, or USPs, of their approach. Each vendor clearly felt passionate and believed that their way was the best. What was also clear was that whilst common themes ran through the designs, there were often aspects of a design at odds with what other vendors were proposing. An example of one of these design debates is commodity v custom hardware: the pro-commodity advocates argue it gives the best price point and flexibility, whereas those pushing custom hardware argue it is built for a purpose and therefore gives the greatest performance and resilience.

Another theme that became clear was that any product is a compromise; there is no such thing as the perfect product. Every architectural decision will be at odds with its opposing factor, e.g. cost v quality, resilience v speed.

With vendors all lining up to tell us why their architecture is correct, whilst also at times being willing to tell us through FUD why another vendor's design decisions are incorrect, how can we find what truly is the best system for our business? We will start the discussion by looking at what may seem like an unrelated discipline: sport.

Styles make fights

Before many boxing matches both competitors enter into 'trash-talk', not only ridiculing the other boxer but also focusing on why their style or method of doing things is better: "My jab is too strong and my movement will be too quick", or "I am too strong, I will be too much for him". But as any bookie will tell you, a perceived advantage on paper does not always add up; upsets happen. What may appear a superior method on paper is, in practical terms, not always up to the job. Only when the boxers are put to the test is it possible to see whose method is superior.

Mayweather

In my unsuccessful pursuit of the perfect golf swing I have been to the driving range and witnessed guys with the most awful technique I have ever seen, swinging wildly, off balance, with no alignment, yet consistently hitting perfect shots. The take-home lesson is again that what appears a poor method can produce great results when put to practical use.

bad golf swing

What’s this got to do with storage?

Does this mean that architecture is unimportant and can be ignored? Short answer: no. Clearly you need to find an architecture that can deliver on your demands in terms of performance, availability and features. But the take-home message is that you cannot look at architecture alone; what looks like a stunning concept may have little practical benefit to performance.

So how can customers cut through all this 'my system's better than your system'? Only with the transparency and clarity that come with stats. When you buy a car you can easily see all the key stats relating to performance, economy, size and weight. This gives customers clarity and allows them to make informed buying decisions. If car A does 10 MPG more than car B and your key buying criterion is efficiency, then you have a clear decision and choose car A. To an extent, who cares how the engine was designed? Even if car B's manufacturer still insists their architecture is better, the figures have given the transparency to show that car A is more efficient, and allowed an informed buying decision.

The closest we have to an agreed standard at the moment is the SPC-1 and SPC-2 tests, which measure maximum transactions and throughput respectively. The tests have been criticised for focusing on flat-out maximum performance, with vendors submitting unrealistic configurations to maximise results. Another criticism is that the current SPC-1 tests do not allow dedupe, which effectively stops vendors with an always-on dedupe design from taking part. However, today they provide the closest thing to a standard benchmark that we have.

The way forward to establish an industry-standard benchmark is for the industry to regulate itself and come up with a standard testing methodology. Getting the vendors to agree to a set of tests that is realistic and suitable to all parties may be more difficult than searching for the pot of gold at the end of the rainbow. However, it would encourage competition: those with strong products would have nothing to fear, and vendors would know during the design phase the kind of test their systems would be put through.

Conclusion

Any design is a compromise; the best product is the one that suits your organisation in terms of the required benefits you have defined. To aid the search for that perfect fit, clear industry-standard performance statistics are required. Until this happens, the only alternative is to shortlist systems and run a proof of concept to find the system that is truly best for your environment.

Let me know if you think vendors will ever agree on a standard testing method, and what stats you would like to see.

Liked this one? Connect with me on LinkedIn and Twitter to hear more.

Disclosure to the disclosure: This part is boring, you probably don't want to read it. Oh really, you're still here? Go on then: my flights, accommodation, food etc. were paid for by Tech Field Day, but I was under no obligation to write about the events. My time at the event was not paid for.

#SFD9 Preview – NetApp: Too Many Shoes?

Too many shoes

My wife has too many bags and far too many shoes. She assures me that she needs this many, and that she "needs different shoes for different occasions". I have two pairs of shoes, a posh pair and a scruffy pair, and believe she has too many.

shoes

NetApp, who will attend Storage Field Day 9, have at the last count three different flash SANs. Like my wife being adamant that she needs a shoe for every occasion, I will be interested to see if NetApp can convince me they need a different SAN for every occasion. This must be confusing for customers and the sales team, and marketing and development effort is being split three ways. Also, is it desirable to have three different system types to manage from a single vendor? A key benefit of buying from a single vendor is having a common management tool and a narrower skill-set requirement.

Other questions

Many other questions have been asked about NetApp and their strategic position in the market, not least since they will soon be the last large standalone storage company. As discussed in a previous SFD9 preview post, many vendors are aiming to control the datacentre with their hyper-converged offerings. NetApp currently lacks a scalable software-defined offering, plus the compute hardware to couple it with to create an appliance.

Rich History

However, NetApp has a rich history of innovation and didn't become what will be the world's largest standalone storage company post Dell/EMC, and a Fortune 500 member, by accident. They also haven't been afraid to go their own way in the past, shunning first tiering and then flash as a storage medium, at least for a while.

This is possibly the presentation I am looking forward to most, since the company is at a crossroads but has some exciting new tech on the table. The presenters lined up are also true NetApp royalty, so make sure you tune into this one.

The future?

The latest addition to NetApp's portfolio, and the potential future jewel in its crown, is SolidFire. I have lots to learn about the technology, but almost everything I have ever heard about this system is positive. Purchased at the end of 2015 for $870m, it was generally agreed in the community to be a great price. SolidFire is an all-flash scale-out storage system that starts at 20TB and can grow to 3.5PB and 7.5M IOPS. Each 1U node has 10 SSDs, and data is spread across all nodes. New nodes can be non-disruptively added to and removed from the system, and SolidFire automatically balances the blocks across all nodes.
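
As a toy illustration of spreading and rebalancing blocks across nodes, here is a short Python sketch. It uses naive hash placement, my own simplification and not SolidFire's actual algorithm.

```python
import hashlib

def node_for_block(block_id, nodes):
    """Deterministically map a block to one node by hashing its ID."""
    digest = hashlib.sha256(block_id.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

def placement(block_ids, nodes):
    """Compute the home node for every block in the cluster."""
    return {b: node_for_block(b, nodes) for b in block_ids}

blocks = [f"block-{i}" for i in range(1000)]
before = placement(blocks, ["node1", "node2", "node3"])
after = placement(blocks, ["node1", "node2", "node3", "node4"])  # node added
moved = sum(1 for b in blocks if before[b] != after[b])
print(f"{moved} of {len(blocks)} blocks move to rebalance")
```

Note that naive modulo placement moves most blocks whenever the node count changes; real scale-out systems use schemes such as consistent hashing so that only a small fraction of blocks migrate when a node joins or leaves.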

I will be really interested to hear about how the portfolio fits together and how they will maximise their return on the SolidFire acquisition. Will it remain a webscale proposition for the largest customers, or could the technology be applied to smaller businesses, and how would this affect the existing portfolio? Of the many shoes that NetApp has, will this be the shoe that fits, Cinderella style?

Cinderella

Socks

Another key issue I will need to get to the bottom of is the SolidFire socks. I will be asking deep and searching questions like:

  • What is the whole sock thing about?
  • Will they be bringing NetApp socks out now?
  • Can the NetApp and SolidFire socks be combined on the same day? What are the best practices around this?

For the low-down on the socks, and a full day with NetApp which promises to be unmissable, tune into Storage Field Day 9.

If you are a NetApp fan, check out this nice Build Your Own NetApp ONTAP 9 Lab eBook.

socks