What is the best storage architecture?

I recently attended Storage Field Day 9, viewing presentations from a range of storage vendors. Many of the vendors focused their presentations on their system architecture and the perceived advantages, or USPs, of their approach. Each vendor was clearly passionate and believed that their way was the best. What was also clear was that whilst common themes ran through the designs, there were often aspects of each design at odds with what other vendors were proposing. An example of one of these design debates is commodity vs custom hardware. The pro-commodity advocates argue it gives the best price point and flexibility, whereas those pushing custom hardware argue it is built for a purpose and therefore gives the greatest performance and resilience.

Another theme that became clear was that any product is a compromise; there is no such thing as the perfect product. Every architectural decision will be at odds with an opposing factor, e.g. cost vs quality, resilience vs speed and so on.

With vendors all lining up to tell us why their architecture is correct, whilst also at times being willing to spread FUD about why another vendor's design decisions are incorrect, how can we find what truly is the best system for our business? We will start the discussion by looking at what may seem like an unrelated discipline: sport.

Styles make fights

Before many boxing matches both competitors engage in ‘trash-talk’, not only ridiculing the other boxer but also focusing on why their style or method of doing things is better: “My jab is too strong and my movement will be too quick” or “I am too strong, I will be too much for him”. But as any bookie will tell you, a perceived advantage on paper does not always add up; upsets happen. What appears a superior method on paper may in practical terms not be up to the job. Only by putting the boxers to the test is it possible to see whose method is superior.

[Image: Floyd Mayweather]

In my unsuccessful pursuit of the perfect golf swing I have spent time at the driving range and witnessed guys with the most awful technique I have ever seen, swinging wildly, off balance and with no alignment, yet consistently hitting perfect shots. The take-home lesson is again that what appears a poor method can, when put to practical use, produce great results.

[Image: a bad golf swing]

What’s this got to do with storage?

Does this mean that architecture is unimportant and can be ignored? Short answer: no. Clearly you need to find an architecture that can meet your demands in terms of performance, availability and features. But the take-home message is that you cannot look at architecture alone; what looks like a stunning concept may deliver little practical benefit to performance.

So how can customers cut through all this ‘my system’s better than your system’? Only with the transparency and clarity that come with stats. When you buy a car you can easily see all the key stats relating to performance, economy, size and weight. This gives customers clarity and allows them to make informed buying decisions. If car A does 10 MPG more than car B and your key buying criterion is efficiency, then you have a clear decision and choose car A. To an extent, who cares how the engine was designed? Even if car B’s manufacturer still says their architecture is better, the figures have given the transparency to show that car A is more efficient and allowed an informed buying decision.

The closest we have to an agreed standard at the moment is the SPC-1 and SPC-2 tests, which measure maximum transactions and throughput respectively. The tests have been criticised for focusing on flat-out maximum performance, with vendors submitting unrealistic configurations to maximise results. Another criticism is that the current SPC-1 rules do not allow dedupe, which effectively stops vendors with an always-on dedupe design from taking part. However, today they are the closest thing to a standard benchmark we have.

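To make the idea of a flat-out performance test concrete, below is a minimal sketch of the kind of load sweep these benchmarks perform: issue random reads at increasing concurrency and record IOPS and mean latency at each step. This is purely illustrative and not how SPC-1 itself is implemented; the file path, block size and runtime are assumptions, and Python threads will not stress a real array the way purpose-built tools do.

```python
# Minimal load-sweep sketch: random reads at increasing concurrency,
# recording IOPS and mean latency at each step. Illustrative only.
import os
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 8 * 1024             # 8 KiB reads, a common OLTP-style size (assumption)
TEST_FILE = "/path/to/test/file"  # hypothetical pre-created test file
RUNTIME_SECS = 10                 # per-step duration; real benchmarks run far longer

def worker(fd, file_size, deadline):
    """Issue random aligned reads until the deadline; return per-I/O latencies."""
    latencies = []
    max_block = (file_size - BLOCK_SIZE) // BLOCK_SIZE
    while time.monotonic() < deadline:
        offset = random.randint(0, max_block) * BLOCK_SIZE
        start = time.monotonic()
        os.pread(fd, BLOCK_SIZE, offset)   # pread is safe on a shared fd
        latencies.append(time.monotonic() - start)
    return latencies

def sweep(queue_depths=(1, 2, 4, 8, 16, 32)):
    fd = os.open(TEST_FILE, os.O_RDONLY)
    file_size = os.fstat(fd).st_size
    assert file_size >= BLOCK_SIZE, "test file must be at least one block"
    try:
        for depth in queue_depths:
            deadline = time.monotonic() + RUNTIME_SECS
            with ThreadPoolExecutor(max_workers=depth) as pool:
                futures = [pool.submit(worker, fd, file_size, deadline)
                           for _ in range(depth)]
                latencies = [l for f in futures for l in f.result()]
            iops = len(latencies) / RUNTIME_SECS
            mean_ms = statistics.mean(latencies) * 1000
            print(f"depth={depth:3d}  IOPS={iops:10.0f}  mean latency={mean_ms:.2f} ms")
    finally:
        os.close(fd)

if __name__ == "__main__":
    sweep()
```

The reason for sweeping concurrency rather than reporting a single number is that a headline max IOPS figure hides the latency curve; two systems with the same peak can behave very differently on the way there, which is exactly the criticism levelled at flat-out benchmark results.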

The way forward to establish an industry-standard benchmark is for the industry to regulate itself and come up with a standard testing methodology. Getting the vendors to agree on a set of tests that is realistic and acceptable to all parties may be more difficult than searching for the pot of gold at the end of the rainbow. However, it would encourage competition: those with strong products would have nothing to fear, and vendors would know during the design phase the kind of test their systems would be put through.

Conclusion

Any design is a compromise, and the best product is the one that suits your organisation in terms of the required benefits you have defined. To aid that decision and find the perfect fit, clear industry-standard performance statistics are required. Until this happens, the only alternative is to shortlist systems and run a proof of concept to find the system that is truly best for your environment.

Let me know if you think vendors will ever agree on a standard testing method, and what stats you would like to see.

Liked this one? Connect with me on LinkedIn and Twitter to hear more.

Disclosure to the disclosure: this part is boring, you probably don’t want to read it. Oh really, you’re still here? Go on then: my flights, accommodation, food etc. were paid for by Tech Field Day, but I was under no obligation to write about the events. My time at the event was not paid for.

New 3PAR model + SPC-1 result

This week saw two significant pieces of 3PAR news released. The first is that a new model, the 20840, has been added to the 20000 range. I have to admit when I first read this I had to check the date on the article, as I was convinced there already was a 20840. This now keeps the 20000 range in step with the 8000 series in terms of the models available. The 20000 series now lines up like this:

20450 – 4-node all-flash

20800 – 8-node hybrid

20850 – 8-node all-flash

20840 – 8-node converged flash

Those that remember the launch of the 7440 will also remember this was when HPE first started to use the term converged flash. The 20840 is also being called converged flash, which basically means you get all the power of the all-flash model but are also able to add spinning disk if you wish. There is no real need for any model to be all-flash and denied the ability to use spinning disk, but unfortunately hybrid players are effectively forced into that restriction for a system to be considered all-flash by Gartner.

In terms of specs, looking at the QuickSpecs the 20840 appears identical to the 20850 in all aspects apart from its ability to support spinning disk. The 20840 supports up to 3584 GiB of cache vs 1792 GiB on the standard hybrid 20800. The new model runs the standard 3PAR OS, so management and the feature set will be as seen on the other models.

At the same time HPE announced a new 8 TB NL (nearline) disk, which will be available from the end of March. The new 20840 model is available for order and shipping immediately.

SPC-1 result

Earlier in the week HPE announced they had put the all-flash 8450 through its paces in the SPC-1 benchmark. SPC-1 is an independent industry test which measures a system’s max IOPS and the associated latency. The 8450 finished 10th overall with 545,164 max IOPS. The results are further broken down taking into account the cost of the system; the 8450 was classified as a mid-range system and achieved the 2nd highest max IOPS in the mid-range category. The graph below plots how latency increases as each system gets closer to its max IOPS; the 3PAR 8450 is shown in green.

[Graph 1: latency vs IOPS as each system approaches its maximum, with the 3PAR 8450 in green]

In terms of performance vs price the 3PAR 8450 scored the best out of all systems tested. This metric is often summarised as IOPS per dollar, but it is calculated as total system price divided by the max IOPS achieved in the SPC-1 test, i.e. dollars per IOPS, so a lower figure is better. The results are shown in the chart below, with a toy calculation after it.

[Graph 2: SPC-1 price-performance, dollars per IOPS, for each system tested]

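As a quick illustration of that price-performance arithmetic, here is a toy sketch in Python. The 545,164 IOPS figure is the published result quoted above, but every price (and the other two systems) is a made-up placeholder rather than a real list price, so take the method from it, not the numbers.

```python
# Toy SPC-1 price-performance calculation: total tested system price divided
# by max SPC-1 IOPS gives dollars per IOPS, so lower is better.
results = {
    "3PAR 8450": {"price_usd": 500_000, "max_iops": 545_164},  # price is hypothetical
    "Vendor B":  {"price_usd": 900_000, "max_iops": 700_000},  # entirely hypothetical
    "Vendor C":  {"price_usd": 400_000, "max_iops": 300_000},  # entirely hypothetical
}

# Rank systems from best (cheapest per IOPS) to worst.
for name, r in sorted(results.items(),
                      key=lambda kv: kv[1]["price_usd"] / kv[1]["max_iops"]):
    print(f"{name:10s}  ${r['price_usd'] / r['max_iops']:.2f} per IOPS")
```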

To stay in touch with more 3PAR news and tips connect with me on LinkedIn and Twitter.