SSMC 3.1 What’s New

SSMC 3.1 is now available for download from the software depot, so in this post I will take a quick look at what’s new. The install process remains as before and I have covered it in detail previously. Firstly, the SSMC 3.1 release supports 3PAR OS 3.3.1, and hence a number of the enhancements relate to the new functionality in 3.3.1.

Some of the key new features include:

Block Provisioning:

  • Removes the option for the volume type TDVV (Thin Deduped Virtual Volume). This is now deprecated in 3.3.1, with dedupe instead becoming a property that can be enabled on a volume. Covered previously in my dedupe deep dive.
  • Support for compression, which also becomes a volume property available to SSDs in 3.3.1
  • Support for monster 15TB drives

File Persona:

  • Ability to create FTP shares
  • Support for POSIX security modes, i.e. Linux-based security permissions

Remote Copy:

  • 3DC Peer Persistence configurations, another 3.3.1 feature, are added to SSMC, as well as support for synchronous long distance setups

HPE 3PAR Storage Federation:

  • Allows the creation of compressed volumes on the target system
  • Ability to schedule Peer Motion migrations
  • Adds support for migrating from legacy HPE 3PAR F-Class, T-Class and non-3PAR systems to 3PAR, i.e. the online import facility

System Reporter:

Lots of enhancements in this area –

  • Ability to edit threshold alerts on systems running HPE 3PAR OS 3.3.1 and later
  • Support for SSMC and SMTP server running IPv6
  • Shows compression stats on capacity reports
  • Space forecasting is available on system and CPG capacity

You can see the full list of enhancements here.


Pivot3 Announces Acuity

Last week Pivot3 announced a new generation of its hyper-converged models. I took interest as I had noted this company was experiencing significant growth and wanted to see what they were all about. When I dug into it I uncovered an interesting story of product acquisitions and then the evolution of product integrations; definitely interesting as we watch HPE’s SimpliVity and Nimble acquisitions evolve.

Doing Hyper-Converged before it was cool

Let’s start at the beginning: founded back in 2003, Pivot3 are arguably one of the originators of the hyper-converged scene, before it was even called hyper-converged. If you’re new to hyper-converged, think of it as storage and virtualisation in a box. Similar to how other hyper-converged platforms work, each node has a VM that sits on the ESXi host and acts as the gateway to the storage. The storage across the nodes is grouped together to form a single pool from which datastores can be carved. Pivot3 have until now called their OS vSTAC OS, and the data is written across the nodes using erasure coding.

NexGen Acquisition

NexGen had a colourful history, having been co-founded by one of the founders of LeftHand Networks, later being bought by Fusion-io, which itself was subsequently bought by SanDisk. It was then detached from SanDisk and acquired by Pivot3 in January 2016. Simple, right?

Prioritising the important stuff

The interest from Pivot3 in NexGen was in its PCIe flash and QoS functionality. QoS in a storage system, like its better known counterpart in networking, assures performance levels for certain workloads even when a system is busy. Initially Pivot3 sold the NexGen system in two ways: firstly as a standalone unit, and secondly as a package with a Pivot3 hyper-converged system, which they called vSTAC SLX. Physically, vSTAC SLX consisted of a Pivot3 hyper-converged cluster connected to a NexGen PCIe flash array. This essentially allowed data to be tiered between the Pivot3 nodes and the NexGen layer for higher performance, with the QoS software controlling the placement of data dependent on the performance requirements.

The aim here was clear: to present hyper-converged as a system that can run multiple workloads concurrently by assuring service levels with QoS. This would help to broaden the appeal of hyper-converged systems, which today are predominantly focused on single use cases.

Acuity Released

The latest release from Pivot3 drops the vSTAC name and becomes Acuity. It builds on the fundamental ideas laid out in the SLX release, but rather than being a NexGen + Pivot3 bundle, the QoS and PCIe flash functionality is available natively within the Pivot3 nodes. It would appear they have managed to pull the smarts from the NexGen box and package it within the Pivot3 nodes.

The line-up of available nodes is shown in the graphic below. As before there is the choice of all-flash or hybrid nodes, but additionally there are now the accelerator nodes, which are essentially the standard flash and hybrid models with an additional NVMe layer to give a performance boost. Those with the NVMe functionality can offer up to 3.2TB of NVMe flash per node, which Pivot3 advise will be 450% faster than SATA SSD and 119% faster than SAS SSD.

The QoS targets can be set to control minimum IOPS, minimum throughput and latency. These parameters are set through policies, which can also be scheduled to change, for example for month-end reporting.

To stay up to date with all the latest and greatest news from the storage industry make sure you are subscribed to our e-mail list.


Robocopy File Migration

Introduction

Migrating storage data has never been easier thanks to methods like vMotion and 3PAR’s Online Import tool. I recently had to do a NAS data transfer from an old device to a new one, and neither of the above options was available, nor was rsync, so I had to fall back to good old Robocopy. It certainly isn’t the fastest method, but when you are short on options it is a useful one. There are lots of switches you can use, so I wanted to look at the most common ones and some gotchas.

Usage

Robocopy is short for robust file copy and is included with Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows Server 2012 and Windows 8.

The basic usage is:

robocopy \\<source server>\<source path> \\<destination server>\<destination path> <switches>
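For example, a simple copy between two hypothetical file servers (FILESRV01, FILESRV02 and the Data share are just placeholder names) would copy the files in the root of the source share to the destination:

robocopy \\FILESRV01\Data \\FILESRV02\Data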

Here are some of the most common switches:

/e Copy subdirectories including empty ones

/zb Enables restartable mode. This is useful if the link you are copying across is unreliable, as it will retry the file when the connection is established again. If access is denied it falls back to backup mode, which allows you to copy files you would otherwise be unable to access

/copyall Copies all the properties of the file

/copy:DATSOU As above, this copies the properties of the file, but allows granular selection of which properties to copy:

D Data

A Attributes

T Time stamps

S NTFS access control list (ACL)

O Owner information

U Auditing information

The default value for CopyFlags is DAT (data, attributes, and time stamps).
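As an illustration, the following sketch (placeholder server and share names again) would copy data, attributes, time stamps, NTFS ACLs and owner information, but skip the auditing information:

robocopy \\FILESRV01\Data \\FILESRV02\Data /e /copy:DATSO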

/log:c:\Robocopy.log Output the results to a log file

/V /NP These can be used in conjunction with the /log switch to specify exactly what is logged. For example, the following combination writes the results to a log file in verbose mode and does not show the percentage progress of each file
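A sketch of that combination, with a hypothetical source and destination:

robocopy \\FILESRV01\Data \\FILESRV02\Data /e /log:c:\Robocopy.log /V /NP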

Gotchas

/r:1 Specifies how many times copying a file will be retried on failure. Setting this to something sensible is key, or it will default to 1 million retries

/w:1 Specifies the time in seconds to wait between retries. Again worth setting, or it will default to 30 seconds

You can see the full list of switches here.

Putting it all together

robocopy \\Server1\Y$ \\Server2\Y$ /e /zb /copy:DAT /r:1 /w:1 /log:c:\Scripts\Robocopy.log /V /NP

The above command will copy the data from \\Server1\Y$ to \\Server2\Y$, copying subfolders including empty ones, in restartable mode, and copying the data, attributes and time stamps of each file. It uses a retry count of 1 and a wait of 1 second, and outputs in verbose mode with no progress percentage to c:\Scripts\Robocopy.log.

If you are using Robocopy to migrate data you can run it several times to pick up new files or modified files. A standard strategy would be to run it several times to copy as many files as possible, then to have an outage for the final copy to make sure that you have everything.
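A minimal sketch of that strategy, reusing the command from above (the log file names are just examples): run the first pass while the source is still in use, then run the same command again during the outage window.

robocopy \\Server1\Y$ \\Server2\Y$ /e /zb /copy:DAT /r:1 /w:1 /log:c:\Scripts\Robocopy-pass1.log /V /NP

robocopy \\Server1\Y$ \\Server2\Y$ /e /zb /copy:DAT /r:1 /w:1 /log:c:\Scripts\Robocopy-final.log /V /NP

Robocopy skips files that already exist at the destination with the same size and time stamp, so the repeated runs only transfer new and changed files.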