EVA Disk Load

Today I went old skool and had to load a few disks into an old EVA. The disk placement rules are broadly similar to those for 3PAR. They were:

  • Disk drives should be installed in vertical columns within the disk enclosures
  • Add drives vertically in multiples of eight, completely filling columns if possible
  • Keep the number of disks in each enclosure balanced; the disk count should not vary between enclosures by more than one
  • When adding disks, insert one and wait for it to stop flashing (up to 90 seconds) before loading the next
  • Install FC and SATA disks in separate groups

Full details are in the EVA user guide.

Disk load and Tunesys

One of our 3PAR systems was becoming full in terms of capacity, so it was time to add some new disks. The system had some empty slots in the existing cages, so the plan was to fill all of the empty slots with new disks.

 

The 7400 system is configured with two large form factor shelves and five small form factor shelves. The large form factor shelves currently contain 3TB SATA 7K disks, and the additional NL drives would be exactly the same size and speed. HP did, however, confirm that we would be able to add higher-capacity 850GB 10K SAS drives into our existing FC CPG.

 

The procedure and best practice for adding new drives is covered in the HP 3PAR StoreServ 7000 Storage Installation Guide. The physical installation of the drives is covered on page 26 of the guide; it makes no mention of how many disks to load at a time, so I took a cautious approach of putting one in and taking my time unwrapping the next one. Not a bad way to spend a Friday afternoon!

 

The HP 3PAR StoreServ 7000 Storage Installation Guide also contains best practice on how the drives should be arranged. The key things were (there's a quick verification check after the list):

  • The drives should be balanced across cages
  • For a node enclosure or M6710 drive enclosure (SFF), drives must be added in identical pairs starting from slot 0 on the left and filling to the right, leaving no empty slots between drives
  • For the LFF cages (M6720), NL and SSDs must not be mixed in the same columns. Drives must be installed in identical pairs starting at the bottom left, leaving no empty slots between drives
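
Once the drives were physically seated, a quick sanity check along these lines should confirm where everything has landed (illustrative only; the device-type filter and the state the new disks report before they are admitted will depend on your system):

3PARSAN1 cli% showcage -d
3PARSAN1 cli% showpd -p -devtype NL

showcage -d lists each cage with the drives it contains, and showpd includes a CagePos column, so between them you can check that the new disks have gone in as identical pairs in the slots the guide describes.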

 

Once all the disks are in the system, if you have exceeded the number of disks you are already licensed for, you will need to install an additional disk licence with the setlicense command. You will then need to issue an admitpd to get the system to recognise the new disks. Once you have run admitpd you should see your disks as available for use. As my CPGs were set to use all available FC and NL disks, they became available to the CPGs with no further action required.
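
For reference, the rough sequence looks something like this (a sketch rather than a definitive procedure; setlicense prompts interactively for the licence key supplied by HP, and your licensing steps may differ):

3PARSAN1 cli% showlicense
3PARSAN1 cli% setlicense
3PARSAN1 cli% admitpd
3PARSAN1 cli% showpd -p -devtype FC

Running showpd afterwards should list the new disks alongside the existing ones.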

 

The next step is to rebalance the system, as the new disks will be almost empty whilst your pre-existing disks hold all your data. To rectify this you need to run tunesys, which is essentially a system rebalancing tool. Tunesys runs through three separate phases in order to rebalance the system; these are:

 

Phase 1 – Analyzes and rebalances virtual volume capacity imbalance between nodes (inter-node tuning).

Phase 2 – Analyzes and rebalances space usage between disks associated with each node (intra-node tuning).

Phase 3 – LD re-layout tunes. LDs where the current characteristics differ from those of the associated CPG are re-laid out.

 

So in simple terms, phase 1 is useful when adding additional nodes, phase 3 is useful when amending CPG characteristics, and the one of most interest in this case was phase 2, which balances the chunklets across the new disks added to the system. To see the current space distribution I ran the following commands, which show the space layout for NL and FC disks respectively:

3PARSAN1 cli% showpd -space -p -devtype NL

3PARSAN1 cli% showpd -space -p -devtype FC
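
If you prefer to see the imbalance at chunklet rather than space granularity, showpd also has a chunklet usage view; something along these lines should work, but check the CLI help on your InForm OS version for the exact options:

3PARSAN1 cli% showpd -c -p -devtype NL
3PARSAN1 cli% showpd -c -p -devtype FC

Either way, the giveaway is that the new disks report nearly all of their chunklets free while the original disks are close to full.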

In my case it was imbalanced, so I next ran tunesys with the dry-run switch to see exactly what tuning would be performed without actually going ahead and doing it.

3PARSAN1 cli% tunesys -dr

Are you sure you want to tune this storage system?

select y=yes n=no: y

*

*********************************************************

** Storage system tuning started

*********************************************************

*

tunesys -dr

Parameter summary:

------------------

Task ID             = none

Nodepct             = 3

Chunkpct             = 10

Diskpct             = 10

Maxtasks             = 2

Dry run flag         = 1

Waittask mode       = 0

Clean timeout       = 120

System chunklet size = 1024

cpgs                 = ALL

*

*********************************************************

* PD Analysis

*********************************************************

*

------FC-------
-----------------Description------------------ 15K 10K 10K+15K   NL SSD All
Number of Available PDs with free chunklets       0 144     144   48   0 192
Number of Available PDs with no free chunklets    0   0       0    0   0   0
----------------------------------------------------------------------------
Total Number of Available PDs                     0 144     144   48   0 192
----------------------------------------------------------------------------
Maximum number free chunklets in a PD             - 682       - 2666   - 2666
Minimum number free chunklets in a PD             -  20       -  363   -   20

*

*********************************************************

* PD Node Balance Summary – Device type usage across nodes

*********************************************************

*

*

*

*********************************************************

* Inter-node device type average summary

*********************************************************

*

Device type: FC10 Average Usage per node "72.00"% threshold "75.00"%
Device type: NL7 Average Usage per node "48.00"% threshold "51.00"%
--------------------Node Disk availability & percentage use--------------------
        ----0---- ----1---- ----2---- ----3---- ----4---- ----5---- ----6---- ----7----
Devtype Dsks Usd% Dsks Usd% Dsks Usd% Dsks Usd% Dsks Usd% Dsks Usd% Dsks Usd% Dsks Usd%
-----------------------------------------------------------------------------------------
FC10      72  74%   72  71%    0 0.0%    0 0.0%    0 0.0%    0 0.0%    0 0.0%    0 0.0%
NL7       24  49%   24  47%    0 0.0%    0 0.0%    0 0.0%    0 0.0%    0 0.0%    0 0.0%
-----------------------------------------------------------------------------------------

*

*********************************************************

* Phase 1 skipped

* No inter-node tunes need to be run at this time.

*********************************************************

*

*

*********************************************************

* Phase 2: Performing Intra-node balance checks:

*********************************************************

*

*

*********************************************************

* Node 0 has 19 underallocated PDs and will be tuned

*********************************************************

*

*

*********************************************************

* Node 1 has 17 underallocated PDs and will be tuned

*********************************************************

*

*

*********************************************************

* Dry run – The following ND tunes would be

* performed:

*********************************************************

*

tunenodech -f -nocheck -dr -maxchunk 2 -chunkpct 10 -node 0

tunenodech -f -nocheck -dr -maxchunk 2 -chunkpct 10 -node 1

*

*********************************************************

* Phase 3 skipped – No LD re-layout tunes needed.

*********************************************************

*

*

*********************************************************

* Dry Run completed

*********************************************************

*

Number of tunes suggested: 2

I was happy with the output of the dry run, so I went ahead and ran the command:

3PARSAN1 cli% tunesys

 

You can monitor progress at a high level with showtask.

You can drill into the tunesys task in more detail as below:

 

3PARSAN1 cli% showtask -d taskID

 

You will also see background tasks listed for each phase, for example:

9284 background_command tunenodech       active   1/2 501/509 2

 

You can then again drill deeper into this, for example:

3PARSAN1 cli% showtask -d 9284
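
A couple of other views can be handy while this is running (I'm hedging slightly on the exact option names, so check the CLI help on your version): showtask -active should list only the tasks currently running, and waittask will block until a given task completes, which is useful if you are scripting around tunesys.

3PARSAN1 cli% showtask -active
3PARSAN1 cli% waittask 9284

The 9284 here is just the task ID from the example above.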

 

Don't expect it to be quick; in my case it took over a week to complete. So go and make yourself a cup of tea, a big one!