3PAR 101 – Part 2 – CPGs


Previously I started my 3PAR 101 / beginners series with the post Meet Chunklet!, which has been one of the blog's most popular posts. Now it's time to move on with the series.

Part 1 of the series dealt with 3PAR's unique approach to RAID and showed that several logical layers are created to enable it. As a quick reminder: a VLUN is a virtual volume that has been presented to a host, virtual volumes draw their space from CPGs, CPGs are pools of logical disks, and logical disks are chunklets arranged as rows of a RAID set. If you're still a bit hazy on all this, check out Part 1 of this 3PAR 101 series, where we covered it all in detail.

The 3PAR building blocks we have direct control over are CPGs, VVs and VLUNs, and they are what you will mainly work with day to day. This 3PAR 101 series will cover each of these components in turn.

What are CPGs?

Without further ado, let's crack on with the first building block we need to put our 3PAR to work: Common Provisioning Groups (CPGs). A CPG is a pool of logical disks from which virtual volumes draw their space. For simplicity, if you have worked with other storage vendors you can think of a CPG as a RAID pool. That comparison is just to help grasp the concept, however, as a CPG also has many characteristics that set it apart from a traditional RAID pool.

We know that CPGs are a pool of space, and the smallest building block that makes up that space is a chunklet. CPGs effectively filter which chunklets are selected and how they are arranged, which in turn means CPGs define performance and availability levels. There are three levels of availability: port, magazine and cage. Port is the lowest level and protects against a backend port failure, magazine protects against the failure of a drive magazine, and cage protects against an entire disk shelf failure.

Let's look at a couple of examples below to help understand how a CPG's settings define performance and availability. CPG1, for example, uses chunklets from FC disks in RAID 5 with a set size of 3 and magazine availability. CPG2 is configured to use NL disks in RAID 6 with a set size of 6 and cage availability. CPG1 will therefore be higher performing but less available than CPG2.

CPG1 – FC, RAID 5, set size = 3, availability = magazine

CPG2 – NL, RAID 6, set size = 6, availability = cage
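As a quick sanity check on the two examples, here is some purely illustrative arithmetic (not a 3PAR tool) showing the usable-capacity fraction each set size implies, assuming RAID 5 uses 1 parity chunklet per set and RAID 6 uses 2:

```python
# Illustrative only: capacity efficiency implied by the two example CPGs.
# Assumes RAID 5 = 1 parity chunklet per set, RAID 6 = 2 parity chunklets.

def usable_fraction(set_size: int, parity: int) -> float:
    """Fraction of each RAID set's chunklets that hold data."""
    return (set_size - parity) / set_size

cpg1 = usable_fraction(set_size=3, parity=1)  # RAID 5, 2 data + 1 parity
cpg2 = usable_fraction(set_size=6, parity=2)  # RAID 6, 4 data + 2 parity

print(f"CPG1 usable fraction: {cpg1:.2%}")  # 66.67%
print(f"CPG2 usable fraction: {cpg2:.2%}")  # 66.67%
```

Interestingly, both examples give the same capacity efficiency; what CPG2 trades in performance it gains in surviving a double disk failure.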

Up front a CPG consumes no space; it autogrows only as virtual volumes demand space from it. This is a big bonus, as it eliminates the need to pre-allocate and pre-plan the size of the different CPGs you create. However, with multiple CPGs able to consume space from the same set of physical disks, your capacity management does need to be more vigilant. Performance increases with the number of disks you stripe across, so you will generally want to stripe all your data across all the available disks of that type in the system. Lastly, be aware that CPGs form the tiers that the Adaptive Optimization (AO) tiering software works with, but more on that another time.


  • CPGs are pools of storage that define performance and availability characteristics
  • Multiple CPGs can exist on the same set of physical disks
  • CPGs grow on demand, so no space needs to be pre-allocated
  • CPGs should be striped across as many disks of the same type as possible to maximise performance
  • The availability options for a CPG are HA cage, magazine and port. A CPG defaults to the highest level of availability the system supports

Creating a CPG – SSMC

Enough theory, let's get on and create a CPG, first in the 3PAR SSMC GUI:

1 Open SSMC by opening a web browser and entering your SSMC web address in the following format:


Next log on with your credentials:

2 From the main menu choose Common Provisioning Groups from the Block Persona submenu. If this is not visible, choose Show more on the right-hand side of the main menu

3 Click the green Create CPG button at the top left of the screen

4 What appears next is the simple CPG creation screen. The only information you must supply is the name of the CPG; the rest will be set to default values. Let's go through each setting in case you want to change any of them:

  • Name – Try to give the CPG a logical name, for example CPG_FC_R6, so you can quickly see the disk and RAID type
  • System – If you are connected to multiple systems, choose the system you wish to perform the action on
  • Device Type – Choose the physical disk type you wish to use for your CPG; this defaults to FC disks
  • Availability – Sets the availability level for the CPG. This defaults to the highest level available on your system, so I would suggest leaving it as is. In this case it is set to cage availability, so we can lose an entire disk shelf and not lose data

When you are happy with your settings, choose Create.

5 If you want to create a simple CPG you are done, but let's quickly look at the additional options that ticking Advanced options on the create CPG screen gives us:

  • Set size – The size of the RAID set. In this example the set size is 6+2, meaning 2 parity drives for every 6 data drives
  • Growth increment – The increment in which the CPG grows. Leave this at the default
  • Growth limit – Don't set this; any limit set acts as a hard stop on growth
  • Growth warning – An alert is sent out when this level of growth is reached

Creating a CPG – CLI

Creating a CPG in the CLI is arguably simpler, since it's all done in a single one-line command.

To create a CPG with all default options, that is FC drives, RAID 1, and cage availability:

createcpg cpgname

You will probably want more control over your CPG creation. For example, the following creates a CPG that is RAID 6, cage-level availability, set size 6, on NL disks, with the name NL_Raid6:

createcpg -t r6 -ha cage -ssz 6 -p -devtype NL NL_Raid6

Let's break down the CLI options a little:

  • createcpg – the core command
  • -t – specify the RAID level, e.g. r6 for RAID 6
  • -ha – specify the availability level, e.g. cage
  • -ssz – the set size, e.g. a set size of 4 for a RAID 5 set would be 3 data and 1 parity
  • -p -devtype – specify the disk type, e.g. NL for Near Line
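To show how those flags map onto the final command line, here is a small hypothetical helper (the helper function is my own illustration, not a 3PAR tool; the flags themselves are the real createcpg options from the example above):

```python
# Hypothetical helper, purely to illustrate how the createcpg flags
# from the post assemble into the one-line command.

def build_createcpg(name, raid=None, ha=None, set_size=None, devtype=None):
    parts = ["createcpg"]
    if raid:
        parts += ["-t", raid]                 # RAID level, e.g. r6
    if ha:
        parts += ["-ha", ha]                  # availability level, e.g. cage
    if set_size:
        parts += ["-ssz", str(set_size)]      # RAID set size
    if devtype:
        parts += ["-p", "-devtype", devtype]  # disk type pattern, e.g. NL
    parts.append(name)
    return " ".join(parts)

print(build_createcpg("NL_Raid6", raid="r6", ha="cage", set_size=6, devtype="NL"))
# createcpg -t r6 -ha cage -ssz 6 -p -devtype NL NL_Raid6
```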

If you're down with the kids and use the modern management tools, that's it: you can move on to Part 3 of this 3PAR 101 series, which covers Virtual Volumes and VLUNs. If you missed it, catch Part 1, where we talked about chunklets and the fundamentals of 3PAR. If you still use the 3PAR Management Console, read on:

Creating a CPG – 3PAR Management Console

1 In the management pane select Provisioning and then from the common actions pane select Create CPG


2 Next you will see a welcome screen with a lot of useful info on creating CPGs. If you do not want to see it again, tick the Skip this step box and click Next

3 The basic information you need to complete when creating a CPG is its name, the device type, RAID type and set size. Try to use a logical name; in this example I have chosen a name that quickly tells me the disk and RAID type. Unless you have reason to change it, leave the set size at the default; increasing the set size will increase usable space but decrease performance

4 If you tick the Advanced options box you will see some more options. I have highlighted in red the common values you are likely to want to change. Be cautious about changing advanced variables such as specifying fast/slow chunklets, as you do not normally need to touch these. You will also see the availability option, which defaults to the safest option available, i.e. cage


5 The next window only appears if you ticked the advanced option, and allows you to filter which disks to include. We want to stripe across as many disks as possible, so just click Next here


6 The last window just confirms the settings you have chosen. Once happy, click Finish

Next time we will be creating a virtual volume and exporting it as a VLUN. If you missed it, catch Part 1 of this 3PAR beginners guide series on the fundamentals of 3PAR, and also Part 3 on Virtual Volumes and VLUNs.

To stay in touch with more 3PAR news and tips connect with me on LinkedIn and Twitter.

Further reading

3PAR Concepts Guide

3PAR Best Practices





3PAR 101 – Meet Chunklet!

I wanted to start a series of posts looking at 3PAR 101, a back-to-basics beginners' guide to 3PAR. The perfect place to start is the 3PAR architecture, and specifically how 3PAR uses layers of abstraction to deliver a unique and flexible approach to RAID.


Introducing Chunklet!

Let's introduce Chunklet, as it's Chunklet that gives 3PAR its unique architecture and enables many of its capabilities.


Traditional RAID

Things were not always so good for Chunklet. Back in the day, Chunklet was a bad dude with a bad attitude. He existed on a traditional storage array, and since he didn't get on with anyone he demanded a whole disk to himself. So on a traditional storage array, to set up a simple RAID 5 2+1 set you would need 3 disks, 2 data and 1 parity, each dedicated entirely to that RAID set. This traditional and inflexible approach is shown in the diagram below:


3PAR Chunklet

Over time Chunklet mellowed out; he realised hey, it's good to share, and instead of demanding a whole disk to himself he was happy with 1GB of any given disk. This is what happens in a 3PAR system: whenever a disk is added to the system it is divided into 1GB blocks of space called chunklets. Prior to the 7000 and 10,000 series the chunklet size was 256MB; the increase to 1GB reflects the growing size of disks.
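The arithmetic is straightforward, and a quick sketch shows how many chunklets a disk yields at each chunklet size (a rough approximation; real arrays reserve some space for spares and system use):

```python
# Rough sketch: how many chunklets a disk yields at a given chunklet size.
# Real 3PAR systems reserve some capacity, so treat this as an approximation.

def chunklets_per_disk(disk_gb: int, chunklet_mb: int = 1024) -> int:
    """Whole chunklets carved from a disk of disk_gb gigabytes."""
    return (disk_gb * 1024) // chunklet_mb

print(chunklets_per_disk(450))        # 450 chunklets at 1GB (7000/10000 series)
print(chunklets_per_disk(450, 256))   # 1800 chunklets at the older 256MB size
```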



In a 3PAR system, as in a traditional storage system, RAID is used to combine multiple disks, but instead of using entire disks, data is striped across chunklets. Let's zoom in on a small number of disks to see how this looks.

3 logical disks -small

In this simple example above we see 3 physical disks with 4 chunklets per disk; each colour represents membership of a different RAID 2+1 set. For example, there is a yellow RAID set made up of a data chunklet from each of the first 2 disks and a parity chunklet from the 3rd disk. In the real world there would be many more chunklets per disk, i.e. a 450GB disk would consist of 450 chunklets. To summarise: in a traditional storage system each disk is dedicated to one RAID set; by using chunklets, multiple RAID sets can co-exist on the same set of physical disks.


Looking at the diagram below, we see that chunklets enable not only multiple RAID sets to exist on a single physical disk, but also that those sets can have different RAID levels. The diagram shows 4 physical disks with different coloured chunklets representing membership of different RAID sets. The orange and blue chunklets are members of RAID 1 1+1 sets co-existing alongside a RAID 5 2+1 set (green) and a RAID 5 3+1 set (yellow), all on the same physical disks.

4 logical disks - different raid types - small

Logical Disks

To allow for large volumes of data and to enable data to be striped across as many disks as possible, multiple RAID sets are combined together in rows. The number of RAID sets combined together is the row size; for example, if the orange and blue RAID 1 sets above were combined, the row size would be 2. A logical disk is a collection of chunklets arranged as rows of a RAID set.
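The chunklet count of a logical disk falls out of this directly; here is a small sketch using numbers from the examples above (the functions are my own illustration, not 3PAR maths):

```python
# Sketch: chunklets consumed by a logical disk built as rows of RAID sets.
# A RAID 2+1 set is 3 chunklets; with a row size of 2, each row spans
# 2 sets = 6 chunklets, of which 4 hold data (numbers from the diagrams above).

def ld_chunklets(set_size: int, row_size: int, rows: int) -> int:
    """Total chunklets in a logical disk."""
    return set_size * row_size * rows

def ld_data_chunklets(set_size: int, parity: int, row_size: int, rows: int) -> int:
    """Chunklets that hold data rather than parity."""
    return (set_size - parity) * row_size * rows

total = ld_chunklets(set_size=3, row_size=2, rows=1)      # 6 chunklets per row
data = ld_data_chunklets(3, 1, 2, 1)                      # 4 of them are data
print(total, data)
```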



The next logical layer up is CPGs (Common Provisioning Groups). CPGs are simply a number of logical disks joined together to form a contiguous pool of space. We will cover CPGs in detail in Part 2 of this series, but for now just know that CPGs are pools of space. Whilst CPGs have many unique features, for simplicity you can think of a CPG in traditional terminology as a RAID group.


Virtual Volumes

Finally we get to the building block we are able to assign to a host: virtual volumes. Virtual volumes (VVs) draw their space from CPGs and, in the case of thin provisioned VVs, grow on demand, only consuming space from the CPG as needed. In traditional terminology you can think of a virtual volume as a LUN.


Putting It All together

Let's pull together everything we have looked at so far to see the complete picture: when a disk is added to the system it is subdivided into 1GB blocks called chunklets; these chunklets are arranged as rows of a RAID set to form logical disks; CPGs (Common Provisioning Groups) pool together the capacity of the logical disks; virtual volumes then draw their space from a CPG and can be presented to hosts. The diagram below, taken from the HPE 3PAR best practices guide, shows again how the different logical levels fit together, in pictorial form.
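The on-demand behaviour of that last layer can be sketched as a toy model: a thin virtual volume consumes nothing up front and the CPG grows in increments as writes land. This is a minimal sketch; the class names and the 32GB growth increment are made up for illustration, not 3PAR defaults:

```python
# Toy model of on-demand growth: a thin virtual volume only consumes
# space from its CPG as data is written. Class names and the growth
# increment are illustrative assumptions, not real 3PAR values.

class CPG:
    def __init__(self, growth_increment_gb: int = 32):
        self.allocated_gb = 0              # a CPG starts empty, grows on demand
        self.increment = growth_increment_gb

    def demand(self, needed_gb: int) -> None:
        # grow in whole increments until the demand is covered
        while self.allocated_gb < needed_gb:
            self.allocated_gb += self.increment

class ThinVV:
    def __init__(self, cpg: CPG, size_gb: int):
        self.cpg, self.size_gb, self.written_gb = cpg, size_gb, 0

    def write(self, gb: int) -> None:
        self.written_gb += gb
        self.cpg.demand(self.written_gb)

cpg = CPG()
vv = ThinVV(cpg, size_gb=1024)    # a 1TB thin volume
print(cpg.allocated_gb)           # 0 - nothing consumed up front
vv.write(40)
print(cpg.allocated_gb)           # 64 - grown in two 32GB increments
```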

5 layers

What have chunklets ever done for me?

Phew, that's the hard bit done. Now let's look at the benefits chunklets give. In summary they are flexibility and performance; let's look at some examples:

  • Maximise utilisation – the same physical disks can provide many different RAID and availability options
  • Performance – data is distributed across many disks, enabling wide striping and eliminating hot spots
  • Planning – no need to size and allocate space to RAID groups up front. With 3PAR you create CPGs as required with no upfront space commitment, and space is only consumed as it is demanded
  • Drive size flexibility – since RAID is striped at the chunklet level and not across entire disks, different sized disks can be used
  • RAID flexibility – RAID levels can easily be changed
  • Disk failures – when a disk fails, the spare chunklets are spread across many physical disks, making the rebuild a many-to-many operation and allowing a quick rebuild

This is a complicated topic, but hopefully this post has helped you gain a fundamental understanding of the 3PAR architecture. Continue reading this 3PAR 101 guide:

3PAR 101 – Part 2 – CPGs

3PAR 101 – Part 3 – Virtual Volumes and VLUNs


Don’t forget to keep in touch on Twitter and LinkedIn.

Further Reading:

Hans De Leenheer

3PAR Concepts Guide