Adaptive Flash Cache – Deep Dive

Last week I posted a quick overview of the latest feature announced for 3PAR – Adaptive Flash Cache. HP have since provided me with some more detailed documents on HP Adaptive Flash Cache technology, so today I wanted to take a more in-depth look at it.

 

Caching 101

Let’s start at the beginning. Cache is traditionally memory that acts as a buffer between IO requests and disk, temporarily storing data to reduce the service time of requests. The cache will contain a mixture of write requests waiting to be destaged to disk and data relating to reads that have recently been requested or prefetched by a read-ahead algorithm. Each read or write request that arrives at the SAN is first checked against the cache; if the data is found there it is called a cache hit. The response time to the host is then significantly quicker than if the data had to be retrieved from disk, as shown in the diagram below.
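To picture the basic idea, here is a tiny Python sketch of a read cache sitting in front of disk. It is purely illustrative and has nothing to do with 3PAR internals.

```python
# Minimal illustration of a read cache in front of disk (not 3PAR code).
# A hit is served from fast memory; a miss goes to disk and the data is
# then kept in cache so a repeat request becomes a hit.

cache = {}                    # block address -> data held in memory

def read(block, disk):
    if block in cache:        # cache hit: fast path
        return cache[block]
    data = disk[block]        # cache miss: slow path to spinning disk
    cache[block] = data       # keep it for future requests
    return data

disk = {100: "some data"}
read(100, disk)   # miss, served from disk
read(100, disk)   # hit, served from cache
```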

Why Flash Cache?

Cache has traditionally been provided by DRAM, which, whilst providing the quickest response times, is expensive and so is limited in size in most controllers. OK, so we want a bigger cache to maximise cache hits and minimise response time, but DRAM is expensive, so enter flash, the saviour of every one of us!

OK, not Flash Gordon, but flash cache technologies, which allow the caching area to be extended by utilising SSDs. SSD cache will not provide the same performance as DRAM cache, but it is much cheaper and can therefore be scaled up economically. The aim of flash cache is simple: expand the space available for caching, increasing the volume of data held in cache and therefore the chances of a cache hit, which in turn reduces response times.

 

HP’s Answer

3PAR had a hole in its armour, given that the competition has long had flash cache available as part of their storage systems. HP has now plugged this gap with a technology it is calling HP Adaptive Flash Cache. A standard 3PAR system provides DRAM within the controllers for caching; as the DRAM starts to become full, data is flushed to disk and is no longer available in cache. In a system enabled with Adaptive Flash Cache the DRAM continues to be the primary cache for the system, however when the DRAM becomes 90% full, instead of the data being flushed to disk it is destaged to the SSDs in the system, and future host I/O will be redirected and served from flash cache. Data is selectively destaged from DRAM to Adaptive Flash Cache in 16KB pages. The pages rejected from admission to Adaptive Flash Cache are those least likely to produce a hit, and include I/O larger than 64KB, sequential reads/writes, plus data that is already stored on SSD.
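As a rough way of visualising those admission rules, here is a small Python sketch of the logic described above. The thresholds come straight from the description (90% DRAM full, 16KB pages, 64KB I/O limit), but the function itself is my own illustration, not HP’s implementation.

```python
# Illustrative sketch of the AFC admission rules described above
# (not HP's actual implementation).

DRAM_FULL_THRESHOLD = 0.90   # destage to flash cache once DRAM is 90% full
PAGE_SIZE_KB = 16            # data moves to flash cache in 16KB pages
MAX_IO_SIZE_KB = 64          # I/O larger than this is never admitted

def admit_to_flash_cache(dram_utilisation, io_size_kb, is_sequential, lives_on_ssd):
    """Return True if a page being aged out of DRAM should go to flash cache."""
    if dram_utilisation < DRAM_FULL_THRESHOLD:
        return False          # DRAM still has room, nothing is destaged yet
    if io_size_kb > MAX_IO_SIZE_KB:
        return False          # large I/O is unlikely to produce a repeat hit
    if is_sequential:
        return False          # sequential workloads are poor cache candidates
    if lives_on_ssd:
        return False          # data already on SSD gains nothing from flash cache
    return True               # random, small-block data from spinning disk

print(admit_to_flash_cache(0.95, io_size_kb=16, is_sequential=False, lives_on_ssd=False))  # True
```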

 

A write will continue to be serviced in exactly the same way as above even with an AFC (Adaptive Flash Cache) implementation, as it is only read data that can be served from AFC. Writes use the AFC only to invalidate data, not to retrieve it.

 

It is with reads that the process gets interesting. When a read request is received, DRAM is still used as the primary cache and is checked first; next the AFC is checked, and if the data is present on the SSDs a cache hit is registered and the request does not need to be serviced from spinning disk.
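Putting the write behaviour above together with this read order, the flow can be sketched like this in Python. Again, this is just my own simplified illustration of the described behaviour, not 3PAR code.

```python
# Simplified illustration of the tiered lookup order (my own sketch, not HP code):
# reads check DRAM cache, then flash cache, then spinning disk;
# writes go through DRAM as normal and only *invalidate* any copy in flash cache.

dram_cache = {}
flash_cache = {}
disk = {7: "cold data"}

def read(block):
    if block in dram_cache:
        return dram_cache[block]        # DRAM hit: fastest response
    if block in flash_cache:
        return flash_cache[block]       # AFC hit: still far quicker than disk
    data = disk[block]                  # miss everywhere: service from spinning disk
    dram_cache[block] = data
    return data

def write(block, data):
    dram_cache[block] = data            # serviced by DRAM exactly as before
    flash_cache.pop(block, None)        # AFC copy is now stale, so invalidate it

write(7, "new data")
print(read(7))                          # served from DRAM: "new data"
```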

Flushing data that AFC has placed on the SSDs occurs through an LRU (least recently used) algorithm. When data arrives in the AFC it is admitted at a normal temperature; it is promoted to hot when it is accessed frequently, and marked cold as it eventually ages, at which point it becomes subject to eviction from flash cache. So, to summarise, what we are seeing here is essentially a tiered cache system: DRAM is used as the primary cache, which destages to AFC, which in turn destages to spinning disk as data becomes cold. The take-home benefit of all this is a larger cache, providing improved response times for random read workloads.
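A rough Python sketch of that temperature and LRU behaviour is below. The hot/normal threshold is made up for illustration; only the overall promote/age/evict pattern reflects the description above.

```python
# Rough sketch of the temperature/LRU behaviour described above
# (threshold and capacity values are invented for illustration).

from collections import OrderedDict

HOT_HITS = 3          # promote a page to "hot" after this many accesses (made-up value)

class FlashCacheSketch:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()           # page -> hit count, ordered oldest-first

    def admit(self, page):
        self.pages[page] = 0                 # new pages arrive at "normal" temperature
        self.pages.move_to_end(page)
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict the coldest (least recently used) page

    def access(self, page):
        if page not in self.pages:
            return None                      # not in flash cache
        self.pages[page] += 1
        self.pages.move_to_end(page)         # recent use keeps the page away from eviction
        return "hot" if self.pages[page] >= HOT_HITS else "normal"
```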

 

A good thing compared to other offerings, such as EMC’s FAST Cache, is that the SSDs used by AFC don’t need to be dedicated to cache; they can be used in the standard manner for storing data as well.

Managing Flash Cache

If you’re thinking this all sounds great, but is it any good for me, the handy thing is that HP have built in a simulation mode which doesn’t even require any SSDs to be present in the system. Simulation mode allows you to look at your cache stats and see whether AFC would be beneficial to your system. The output below is from one of the new statcache commands; the FMP (flash cache memory page) column represents AFC, and a hit rate of zero here would suggest that all cache requirements are already being covered by the internal DRAM cache. A good candidate would have a hit rate in AFC equal to or greater than that of the on-board cache.
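In other words, the rule of thumb boils down to comparing the simulated FMP hit rate with the hit rate of the on-board DRAM cache. A trivial sketch of that check, using hypothetical numbers rather than real statcache output, looks like this:

```python
# Trivial sketch of the rule of thumb above: AFC looks worthwhile when the
# simulated flash cache (FMP) hit rate is at least as high as the DRAM hit
# rate. The figures here are hypothetical, not real statcache output.

def afc_looks_beneficial(dram_hit_pct, fmp_hit_pct):
    if fmp_hit_pct == 0:
        return False          # DRAM is already covering the cache requirements
    return fmp_hit_pct >= dram_hit_pct

print(afc_looks_beneficial(dram_hit_pct=30, fmp_hit_pct=45))  # True: good candidate
print(afc_looks_beneficial(dram_hit_pct=60, fmp_hit_pct=0))   # False: AFC adds nothing
```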

AFC utilises RAID 1 logical disks, and the recommendation is that it is striped across all available SSDs to maximise performance. Initially, managing AFC will be via the CLI only, with management console support to follow. What is neat is that AFC can be enabled system wide or on specific volumes. If you go down the specific volumes route, you apply the settings via virtual volume sets. This essentially allows you to prioritise important volumes by including only them in virtual volume sets with access to flash cache.

To find virtual volumes that are good candidates for AFC, the recommendation is to use a mix of cache statistics and VLUN statistics. The ideal candidates are VLUNs with high read requests but low cache hits, demonstrating a random read workload.
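As a simple way of picturing that selection, the small sketch below filters for volumes with plenty of read I/O but a poor cache hit ratio. The field names and thresholds are invented for illustration and are not real VLUN stat output.

```python
# Hypothetical sketch of picking AFC candidates from VLUN/cache statistics:
# keep volumes with lots of read I/O but a poor cache hit ratio.
# Field names and thresholds are invented for illustration.

vluns = [
    {"name": "oracle_data", "read_iops": 4000, "cache_hit_pct": 12},
    {"name": "file_share",  "read_iops": 300,  "cache_hit_pct": 85},
]

candidates = [
    v["name"] for v in vluns
    if v["read_iops"] > 1000 and v["cache_hit_pct"] < 30   # random-read heavy, cache-unfriendly
]
print(candidates)   # ['oracle_data']
```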

AFC can co-exist with all current 3PAR features, including Adaptive Optimisation.

 

Sweet so how do I get it!

AFC will be available from 3PAR OS 3.2.1, will be included as part of the base OS, and requires a mixture of SSDs and spinning disk. The 7000 series will need a minimum of 4 SSD drives and will support up to 768GB per node pair. The 10,000 series will need a minimum of 8 SSD drives and will support up to 2TB per node pair.

 

Final Thoughts

So today we have seen that you can never have too many Flash Gordon pictures in a post, plus HP have added another key feature to their already strong line-up. Adaptive Optimisation has always performed well for me and does a good job of moving hot data with a predictable workload to faster disks; however, you could still be left lagging behind with random read workloads. AFC will plug this gap, and the reduced back-end load will in turn also benefit write requests.

 

Follow 3ParDude on Twitter here

10 thoughts on “Adaptive Flash Cache – Deep Dive”

  1. Do you know the maximums for allowed cache for the different models and numbers of controllers (e.g. each 7400 node allows 256GB of SSD to be used as cache, and so on)?

  2. The 7000 series will need a minimum of 4 SSD drives and will support up to 768GB per node pair. The 10,000 series will need a minimum of 8 SSD drives and will support up to 2TB per node pair.

  3. Are there any guidelines in terms of sizing AFC? I don’t want to use too much SSD as cache, but I want to maximize the benefits as well.

    1. Each model has a hard limit in terms of the maximum size AFC can be, for example 768GB for the 7200. The minimum size it can be is 64GB per node pair. AFC will obviously take up room on your SSDs, so you just need to try and find the sweet spot between the space it takes up and the number of hits you are getting. The best way to test this is by using AFC in simulation mode; you can set up simulation mode using the command createflashcache -sim

  4. Dear D8tadude,

    Can you write a separate post on which apps should ideally use which RAID groups, e.g. for Oracle DB, MS Exchange, etc.?
    How much storage CPU utilization (% load) is optimal for a 3PAR 8400?
    And is it okay to utilize almost 95-98% of all the disk space one has in a 3PAR?

    1. Hi, thanks for the feedback. I would start everything with RAID 6; this is now the default for 3PAR. As it’s a wide-striped system with an ASIC, the difference between RAID types isn’t massive. Start with RAID 6 and then use RAID 10 only if the application demands it.

      CPU utilisation on 3PAR is generally low due to the ASIC. If you have specific concerns I would speak to support on this one.

      It is OK to use a CPG up to 100% if no volumes are directly presented from that disk type. For example, you can let AO fill your SSD disks to 100% as long as there are no volumes directly in there. For standard CPGs you need to leave room for the growth of your volumes and to allow time to purchase additional capacity if needed.

  5. What if AFC is running and I want to test a bigger AFC size using the simulator? Can I use the simulator with AFC running?
