
Veeam v10 Roundup

Veeam Backup & Replication v10 is now publicly available.  The update contains over 150 enhancements; let’s take a look at a few of them.

NAS Protection

One of the most requested features from customers has been NAS protection, which is now available natively in v10.  The system is software-based and hardware-agnostic, able to back up SMB and NFS shares, including SMB file shares hosted on Windows systems and NFS shares on Linux servers.  The v10 implementation of NAS backup uses changed file tracking, allowing incremental jobs to back up only what has changed.

Veeam v10 NAS

Credit Veeam for image

Setting up NAS backups will look familiar to those already working with Veeam: file proxies act as the data movers, while a new role, the cache repository, is responsible for the changed file tracking.  Backups can be stored in repositories like any other Veeam backup, but can also be tiered to the public cloud for longer-term retention.

Multi-VM Instant Recovery

Instant Recovery has been available for some years now; it allows the immediate recovery of a VM by running it directly from the backup.  The updated VM Instant Recovery improves performance through methods including RAM caching and read-ahead.  A knock-on benefit of this improved performance is that multiple VMs can now be instantly recovered in a single operation.

vSphere Any Backup Restore

It is now possible to restore any backup to your vSphere environment, regardless of the original backup’s format.  This allows you, for example, to restore backups of physical servers, Hyper-V VMs or cloud backups to vSphere.

Veeam Cloud Tier

Veeam 9.5 Update 4 added a “move” capability to the cloud-based capacity tier, allowing older data to be aged off to object storage. v10 adds a “copy” mode, which allows backups to be copied to object storage as soon as the original backup is created. This allows data to be copied offsite for redundancy purposes, and as the capacity tier is S3 object-based, it also acts as a secondary format in which to store backups.

Ransomware Protection

Given recent high-profile ransomware cases, v10 now offers the ability to store data online but in an immutable format to defeat such attacks.  It is based on the new Veeam cloud tier copy facility combined with the immutable data option for S3. Once the data has been copied to S3 with the immutable option set, it cannot be changed for the length of time specified, protecting backup data from any kind of change, be that accidental, ransomware or a rogue admin.

Veeam v10

Credit Veeam for image

Linux Proxies

There is now greater flexibility for environments that favour Linux, allowing the deployment of backup proxies that are Linux based.

Vendor Integration

Good news for those using HPE storage: Primera has been added as a supported primary storage array, with all the storage snapshot integration capabilities.  StoreOnce support has also been extended with Catalyst Copy.

Check out this podcast with Rick Vanover and Calvin Zito for the full story on Veeam v10 and HPE integration.

To see all the new features, check out the v10 What’s New document.


HPE 3PAR & Primera SSMC Topology Insights

Hello! Starting with SSMC 3.6, HPE introduced a new feature called Topology Insights.

Let me introduce what Topology Insights is and how to enable it.

This guest post is brought to you by Armin Kerl; if you fancy trying your hand at blogging, check out our guest posting opportunities.

What is SSMC Topology Insights?

Topology Insights provides a consolidated view of both the storage and the VMware layer.  To see it within SSMC, go to: VMware > Virtual Machines

1 SSMC Menu

Change to the Topology view using the drop-down in the right-hand window.

2 SSMC Topology Insights

You will see all VMs and the relationships between:

VM <> VMDK <> Datastore <> ESXi Server <> Virtual Volume <> Storage

Also, any performance impacts are displayed at the top with a selectable timeline; red indicates a problem.

How do you get Topology Insights?

You need at least:

  • SSMC 3.6
  • 3PAR with OS 3.3.x
  • Service Processor 5.x
  • VMware vCenter (no Hyper-V currently)

In SSMC, go to Settings.

3 SSMC Settings

In the “Application” section, enable “Topology”

4 Enable Topology Insights

and “VMware > Virtual machines”

5 SSMC Settings

Now go to “Storage Systems” > “Systems” > “Actions”

6 Set SP credentials

Enter the Service Processor credentials via “Set SP credentials” here.

If it is successful, check under “Systems” > “Settings” whether the vCenters entry is present.

7 Add vCentre

If not, go to Edit and enter it there.

What if you don’t see “Set SP credentials”?

Like this:

8 Missing Set SP credentials

Or you get this message?

9 Missing service processor

I have seen several customer sites where the entry did not appear.

Most of the time these are storage systems and Service Processors which have been upgraded from 3.2.x to 3.3.x.

The following is for information only; it is recommended to contact support before proceeding.

It is a little bit tricky, but here is a way to fix it:

Log in to the 3PAR with SSH (e.g. PuTTY).

Enter: cli% getsys -showsysobjs

Look for {ServiceProcessorCookie {}}

Wrong: … {SessionTimeout 10800} {ServiceProcessorCookie {}} …

Right: … {SessionTimeout 10800} {ServiceProcessorCookie SPxxxyyyyzzzz:IP.of.the.SP}

If the SP Cookie is empty “{}”, we need to fix it.
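The empty-cookie check can also be scripted. The minimal sh sketch below pattern-matches a sample output line (hard-coded here, taken from the “Wrong” example above); in practice you would capture the real `getsys -showsysobjs` output from the array over SSH instead:

```shell
#!/bin/sh
# Illustrative only: detect an empty ServiceProcessorCookie in
# 'getsys -showsysobjs' output. The sample string below stands in for
# the real output, which you would capture over SSH from the array.
sample='{SessionTimeout 10800} {ServiceProcessorCookie {}}'

case "$sample" in
  *'{ServiceProcessorCookie {}}'*)
    echo "SP cookie is empty - fix it with setsys" ;;
  *'ServiceProcessorCookie'*)
    echo "SP cookie is set" ;;
  *)
    echo "ServiceProcessorCookie entry not found" ;;
esac
```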

Log in to the SP 5.x website and go to the Service Processor Overview homepage.

10 Service processor values

Here we see the SPIP (the IP address) and the SPID (the SP ID).

Now set the missing values:

cli% setsys ServiceProcessorCookie SPID:SPIP

Example: setsys ServiceProcessorCookie SPCZ123456789:192.168.10.23

Check again with: cli% getsys -showsysobjs

Now OK?

Go back to the SSMC and the “Set SP credentials” action should appear.

This fix has worked for me every time.

Thank you for reading, folks.

Armin

HPE Cloud Volumes and Multi-Cloud

I recently attended a briefing from HPE on Cloud Volumes, a technology that already existed within the Nimble portfolio before the HPE acquisition. Two years on from becoming part of HPE, what exactly is Cloud Volumes? What is it useful for, and how does it fit within HPE’s storage and cloud strategy?

You bring the compute, I’ll bring the storage

barbecue

I love a barbecue party with a group of friends, especially when everyone brings a bit of their own food. You never know what you’re going to get and hopefully you end up with a better mix of food than if you provide it all yourself. I always bring the crisps because it’s easy.

HPE Cloud Volumes offers to bring block storage to the cloud party, with the compute provided by one of the major cloud providers: Google, AWS or Azure. We will deal with the full process in detail later, but in short: you create a Cloud Volumes account through a web portal, create the VM with the cloud provider, then select HPE Cloud Volumes as the disk target.

HPE provides the storage by co-locating, or locating very near to, the cloud hyperscalers’ data centres in order to minimise latency. These HPE data centres contain HPE Nimble storage arrays, which provide the underlying storage presented through Cloud Volumes. HPE uses its own InfoSight predictive analytics system to ensure availability, capacity and scale requirements are met.

There is a choice between all-flash or hybrid disks, both of which offer the same level of IOPS, but the IOPS guarantee differs between them: 95% for hybrid and 99% for flash. Cloud Volumes is available globally, but the guarantee only applies in regions where HPE data centres exist.

Why?

The first question many people have is: why not just use the storage natively available from the cloud provider? Cloud Volumes was initially born out of the desire to provide block storage to more traditional applications. Many cloud native apps were designed to take advantage of cloud storage, such as using object-based S3 or Glacier for archive; those born-in-the-cloud apps would receive little benefit from Cloud Volumes. Container-based applications, however, can take advantage of the persistent storage Cloud Volumes offers; currently Docker, Kubernetes and Mesosphere are supported.

Traditional applications such as Oracle and SQL grew up in an on-premises environment traditionally based on block storage, and Cloud Volumes was initially designed to provide the availability, performance and features these applications require.  It’s fair to say Cloud Volumes has since evolved into a more complete offering for an enterprise’s cloud strategy. Let’s look at some of the ways HPE envisages this:

Data mobility

Cloud lock-in is a concern for many companies. Cloud providers do not natively provide the tools to easily move data to another provider. Then there is the challenge of data gravity: moving large volumes of data is a physical challenge, which can make operating a multi-cloud strategy difficult.

With Cloud Volumes, since your compute and storage are provided separately, the compute provider can easily be changed.  It is just a case of spinning up a new VM and pointing it back at the Cloud Volume.  This data mobility allows the right data to be in the right place at the right time, with changes made on the fly.  The ability to provision storage to multiple destinations on demand could also facilitate cloud bursting, catering for peaks in demand.

Cloud migration

All Nimble systems natively support replication to Cloud Volumes. This provides a very simple method for cloud migration, with the ability to bring data back on-premises if necessary.

Cost

One of the barriers to data movement with the current cloud providers is cost: egress charges apply when pulling data back from a cloud provider. HPE imposes no cost for any movement of data, be that to the cloud, back from the cloud or indeed a complete exit from the service.

Data services

Cloud Volumes is able to provide instant snapshots and clones. Its snapshot technology records only changed blocks, so you can store a large number of clones and snapshots while paying only for the changed data.

Secure 256-bit AES encryption is also available.

DR and test/dev

Cloud Volumes could easily be integrated within a DR or test/dev solution.  Snapshots taken on site can be replicated to the cloud for further testing or DR availability.

Visibility

InfoSight is able to track and monitor not only your local Nimble arrays but also Cloud Volumes, providing a single place to predict and manage storage.

Cloud Volumes Walkthrough

Setting up a Cloud Volume is a two-step process: first create the Cloud Volume, then attach it to a VM.  Let’s take a look at the process:

Creating a Cloud Volume 

  • Open the Cloud Volumes website
  • Choose to create a new cloud volume, then run through a few settings:

create cloud volume 1

    • Cloud provider – choose the cloud provider, region and cloud network
    • Volume settings – name the volume, choose the required size, the performance in terms of IOPS, and hybrid or flash
    • Protection – select whether to enable a snapshot schedule and encryption

create cloud volume 2

Attach to a VM

  • In the Cloud Volumes website, choose to attach to a VM.  This will bring up a script you can copy
  • In the cloud VM, run the script; it will complete all the operations needed to attach the Cloud Volume to the VM for you

cloud volumes connection script

Once the volume is attached, you would treat it like any other new disk, bringing it online, formatting it and so on.
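For context, the generated connection script is specific to your volume, but on a Linux VM an iSCSI attach of this kind typically boils down to steps like the sketch below. Every value here is a placeholder (the portal’s script supplies the real target address and credentials), and the sketch runs in dry-run mode, echoing the commands rather than executing them:

```shell
#!/bin/sh
# Hypothetical sketch of an iSCSI attach, dry-run by default.
# The real script generated by the Cloud Volumes portal supplies the
# actual target address, IQN and credentials.
DRY_RUN=1                      # set to 0 only on a real cloud VM
TARGET_IP="203.0.113.10"       # placeholder for the portal-supplied value

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Discover the target, log in, then format and mount the new disk
run iscsiadm -m discovery -t sendtargets -p "$TARGET_IP"
run iscsiadm -m node --login
run mkfs.ext4 /dev/sdb          # placeholder device name
run mount /dev/sdb /mnt/cloudvolume
```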

Take a look at this complete walk-through video.

Final Thoughts

Today Cloud Volumes provides an interesting data services and data mobility play which could offer organisations with a hybrid-cloud or multi-cloud policy greater flexibility.  Using native cloud storage requires organisations to go “all in”, committing data to a given cloud provider due to the pull of data gravity. Cloud Volumes removes that major decision point of where data is stored, since it can easily be re-targeted, whether due to a change of cloud provider or simply an operational need such as cloud bursting or testing.

HPE continues to evolve from a product company into a solutions provider. In isolation, Cloud Volumes is an interesting product; greater integration with the rest of the portfolio, plus predictive analytics making data placement recommendations, could allow for a complete storage management strategy. Let’s see how this evolves.

Sponsored post – opinions are my own