Backup gets sexy – Nimble Secondary Flash Array

Since when did backups get sexy? It was only a couple of years ago that the then VP of HPE Storage told us that backups were “like eating broccoli”: “no one likes it, although we have to do it because it’s good for us”. How things change in a couple of years. New kids on the block like Rubrik and Cohesity are offering backup products with real buzz around them, and now HPE is getting in on the action with its shiny new Nimble Secondary Flash Array, which is closely integrated with Veeam. The common theme across all these vendors, as we will see, is getting your backups to do more, not just backing up and then forgetting about the data forever.


Let’s start with the hardware and then work back from that. The first thing to note is to read the product name carefully: the Secondary Flash Array is actually a hybrid system, taking Nimble back to its roots. It uses the CASL (Cache Accelerated Sequential Layout) architecture, which aims to give close-to-flash performance from a mix of SSD and spinning disk. All SF Series models consist of up to 21 HDDs and 3 DFCs (holding up to 6 SSDs). The table below shows the hardware specs for each model.

Today there are two models available: the SF100 and SF300. Both models are available with iSCSI and FC connectivity, and the numbers appear to denote the approximate raw capacity in TB of each model. HPE is also claiming an 8:1 data reduction ratio, giving the SF100, for example, a potential effective capacity of 800TB.

Veeam integration

The integration with Veeam is quite simple: Veeam can talk to the storage and leverage storage-based snapshots. There are several advantages to this approach. First of all, you are offloading the heavy lifting to the storage; this removes the issue of VM stun and means that you can take very regular snapshots. That allows for a small RPO, something which is becoming increasingly important with threats like ransomware.

By integrating with Veeam, application-consistent snapshots can be taken at a per-VM level. This allows for more granular recovery and a better level of assurance versus crash-consistent recovery. Those already familiar with Veeam will be aware of the Veeam Explorer tool, which effectively cracks open backups to see their contents and allow granular recovery. This has been used previously with other storage vendors and for recovering application items, such as from SQL.

Use cases

This is all great, but why do I need a fancy-schmancy box for my backups? Can I not just have some dumb disks? Well, you could do that, but the vision for this Veeam and Nimble combo is to bring more power and flexibility to your backup solution, so that you can harness the backups. You could, for example, use it for standing up your dev and test environments. Rather than bearing the cost and complexity of maintaining a separate environment, this allows you to spin up VMs from backup and isolate them using Veeam labs.

The extra zip in performance would also be of use for Veeam's instant recovery technology, where you can spin up VMs directly from backups. Plus, given the additional performance, there is no excuse for not testing those backups using the Virtual Lab functionality.

Rubrik announce Cloud Data Management 4.0 (Alta)

Rubrik have today announced the 4.0 release of their Cloud Data Management platform. Before we go through what’s new, let’s have a quick chat about what Rubrik is. The product is essentially a next-gen data protection system. Someone from their marketing department will probably want to punch me on the nose when they read that and say, “but it does so much more!” And indeed it does: the system aims to fulfil the functions shown in the following graphic:

Live SQL Mount

I am going to start with the new feature that excites me the most as a storage admin, because it will help us to control our mortal enemy: the SQL DBA. All storage admins know that SQL DBAs always want stuff yesterday.

Oh and they love RAID 10

Well, Rubrik have introduced a feature called Live SQL Mount, which does what it says on the tin. To be clear, this is not the live mount of a VM containing a SQL database, but the live mount of an actual SQL database to a VM. So long as the database has been backed up with Rubrik, it can easily be mounted by clicking Live Mount within the Rubrik interface. Use cases include testing of backups, test and dev, or a quick rollback of a production database, since the mounted database can be queried as normal.


Another one for the DBAs: Oracle backup support has been added. This is enabled via Rubrik calling the native Oracle backup tool, RMAN. This approach means that ASM and all topologies, including RAC, are supported.

The RMAN backups use Rubrik as a target via an NFS mount point. One of Rubrik’s core design principles is immutability, which essentially means that backed-up data cannot be changed or tampered with. Rubrik have remained consistent with this approach: the NFS target is not continually exposed, but only made available and read/write when called as part of the backup process. It is reassuring that the backup files you may need to rely on in a ransomware attack would themselves be safe.
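The expose-only-during-backup design described above can be sketched roughly as follows (a minimal sketch with hypothetical names; Rubrik's internals are not public):

```python
from contextlib import contextmanager

# Illustrative sketch of an NFS target that is only writable during a
# backup run. NfsTarget, expose_rw() and revoke() are hypothetical names,
# not a real Rubrik API.
class NfsTarget:
    def __init__(self):
        self.mode = "offline"       # not exported at all between backups

    def expose_rw(self):
        self.mode = "rw"

    def revoke(self):
        self.mode = "offline"

@contextmanager
def backup_window(target: NfsTarget):
    target.expose_rw()              # export read/write only for the backup run
    try:
        yield target
    finally:
        target.revoke()             # withdraw the export immediately afterwards

target = NfsTarget()
with backup_window(target) as t:
    assert t.mode == "rw"           # RMAN writes its backup pieces here
assert target.mode == "offline"     # nothing can touch the backups in between
```

The point of the design is that the window in which backup files are even reachable, let alone writable, is as narrow as the backup itself.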

Tape – still not dead

I love this story. People have been trying to write off tape for years, and yet another start-up is adding support for it. And why not? Tape remains hard to beat for the economics of long-term archival. This functionality is enabled using Q-Star, which allows tape libraries to appear as NFS or SMB targets via a gateway server. The gateway server must be physical.

Take me to the cloud

Backups of VMware VMs can now not only be stored in S3 but also started up and run in AWS. This functionality will be useful for quickly accessing data on archived VMs, test and dev, DR, etc.


Hyper-V has been added as a supported hypervisor. Rubrik is able to support any version of Hyper-V via the connector service, but offers the full, feature-rich backup service to customers running Hyper-V 2016.


Support is also extended to another hypervisor, Nutanix Acropolis, Nutanix’s own home-baked implementation of KVM. The backup process works like this: a snapshot is created and compared to the base, any differences between the two are ingested, and then the snapshot is removed.
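That snapshot/diff/ingest cycle can be sketched as below (function and variable names are mine for illustration; the real Acropolis and Rubrik APIs differ):

```python
# Illustrative sketch of an incremental, snapshot-based backup cycle like
# the one described for Acropolis. All names here are hypothetical.
def backup_cycle(vm_disk: dict, base: dict, ingest) -> dict:
    snap = dict(vm_disk)                       # 1. take a point-in-time snapshot
    changed = {blk: data for blk, data in snap.items()
               if base.get(blk) != data}       # 2. diff the snapshot against the base
    ingest(changed)                            # 3. ship only the changed blocks
    return snap                                # 4. snap becomes the new base; old snap discarded

received = {}                                  # stands in for the Rubrik cluster
base = {}
disk = {0: "boot", 1: "aaaa", 2: "bbbb"}

base = backup_cycle(disk, base, received.update)  # first pass: everything is new
disk[2] = "cccc"                                  # the VM writes to block 2
base = backup_cycle(disk, base, received.update)  # second pass: only block 2 ships

print(received)  # {0: 'boot', 1: 'aaaa', 2: 'cccc'}
```

After the first full pass, each subsequent cycle only moves the blocks that changed since the previous snapshot.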

Turbonomic and Hybrid IT

What is hybrid IT?

Hybrid IT must be one of the most used terms in IT at the moment. But one of the questions I keep asking myself is what it really means, and how the average company enables it.

Accidental Heroes

I suspect a good many companies are already running a hybrid model, if we keep the definition broad: companies running their standard onsite datacentre plus a SaaS offering such as Office 365 for productivity or Salesforce for CRM. Starting with SaaS is simple, but taking the next step is complex, with questions like: how do I work out what else to move from the datacentre, how much would it cost in the cloud, and how would I migrate those systems to the cloud?

Turbonomic 5.9

I attended a blogger briefing for Turbonomic 5.9 last week, and it felt like the product may aid with some of these decisions and processes. They are not claiming to be the magic 8-ball of the datacentre, answering everything you ever wanted to know, but their latest offering has a range of tools that certainly look like they will help with the journey.

As a quick reminder, Turbonomic assures performance. This means that it monitors your environment and is able to make recommendations to get the best performance from it, such as scaling a VM's resources up or down.

Moving to a cloud model

My wife and kids love leaving the lights on around the house. I am sure they work for the electricity company; I have run background checks on all of them, but so far this has revealed nothing. With the cloud, as with electricity, you pay per unit of use. In the datacentre, of course, you pay upfront, so understanding the cost of moving a VM to the cloud, and comparing different clouds, used to require a bit of heavy lifting. Turbonomic is now able to do this for you: since it monitors your on-premises datacentre, it can accurately predict what resources will be required in the cloud and the cost of that configuration. Once you are happy with the configuration, you can let Turbonomic perform the migration to the cloud.
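The core of that sizing/costing exercise can be sketched as below. This is only an illustration of the idea, not Turbonomic's actual logic; the instance names are real AWS types, but the hourly prices are placeholders, not current list prices:

```python
# Illustrative sketch of mapping observed on-premises demand to a cloud
# instance size and projected cost. Prices below are placeholders.
INSTANCES = {                   # name: (vCPU, GiB RAM, $/hour -- illustrative)
    "t3.medium": (2, 4, 0.04),
    "m5.large": (2, 8, 0.10),
    "m5.xlarge": (4, 16, 0.19),
}

def cheapest_fit(vcpu_needed: int, ram_needed_gib: int) -> tuple:
    """Cheapest instance that satisfies the observed resource demand,
    with its projected monthly cost (30 days)."""
    fits = [(price, name) for name, (cpu, ram, price) in INSTANCES.items()
            if cpu >= vcpu_needed and ram >= ram_needed_gib]
    price, name = min(fits)                    # cheapest adequate instance
    return name, round(price * 24 * 30, 2)     # projected monthly cost

# A VM observed using 2 vCPUs and 6 GiB RAM on premises:
print(cheapest_fit(2, 6))  # ('m5.large', 72.0)
```

The value of a tool doing this for you is that the "observed demand" side comes from real monitoring data rather than guesswork, and the comparison can be run across multiple clouds at once.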

Cost Control

Once you have migrated some workloads to the cloud, you are able to view all the costs centrally within Turbonomic. The product is able to show costs from Azure and AWS in a single view. Future costs can be projected and also controlled.


Auto Scaling

Those familiar with Turbonomic will recognise the example I gave earlier of the product changing a VM's resources to meet performance requirements. Turbonomic now carries that capability across to the cloud, with the ability to autoscale VMs to meet performance requirements.
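A minimal sketch of the scale-up/scale-down decision looks something like this. The thresholds here are illustrative; Turbonomic's actual analysis is demand-driven rather than a simple threshold check:

```python
# Illustrative threshold-based scaling decision. Turbonomic's real engine
# is more sophisticated, but the recommendation shape is similar.
def scaling_action(cpu_util: float, low: float = 0.3, high: float = 0.8) -> str:
    """Recommend an action for a VM given its CPU utilisation (0.0-1.0)."""
    if cpu_util > high:
        return "scale up"       # performance at risk: add resources
    if cpu_util < low:
        return "scale down"     # over-provisioned: reclaim cost
    return "no change"

print(scaling_action(0.92))  # scale up
print(scaling_action(0.12))  # scale down
print(scaling_action(0.55))  # no change
```

In the cloud the "scale down" branch matters as much as "scale up", since an oversized instance is a recurring bill rather than sunk capital.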

Placement Rules

Other neat stuff includes the ability to set placement rules to ensure compliance is met, whether in terms of onsite cluster placement or, in a cloud setting, ensuring the correct cloud region/zone is used.
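A placement rule of that kind boils down to a constraint check like the following (a minimal sketch with invented rule names; Turbonomic's policy engine is far richer):

```python
# Illustrative placement-rule check: a workload tagged with a compliance
# policy may only land in an allowed set of zones. Names are hypothetical.
RULES = {
    "pci-workload": {"allowed_zones": {"eu-west-1a", "eu-west-1b"}},
}

def placement_ok(workload: str, zone: str) -> bool:
    """True if the target zone satisfies the workload's placement rule."""
    rule = RULES.get(workload)
    return rule is None or zone in rule["allowed_zones"]

print(placement_ok("pci-workload", "eu-west-1a"))  # True
print(placement_ok("pci-workload", "us-east-1a"))  # False
print(placement_ok("untagged-vm", "us-east-1a"))   # True: no rule applies
```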