3Par 101 – Part 2 – CPGs


Last year I started my 3Par 101 / beginners series with the post Meet Chunklet!, which went on to be one of the year’s most popular posts. Now that we are in the new year, I wanted to continue with the next part of the series.

Part 1 of the series dealt with 3Par’s unique approach to RAID and showed that several logical layers are created to enable it. As a quick reminder: a VLUN is a virtual volume that has been presented to a host, virtual volumes draw their space from CPGs, CPGs are pools of logical disks, and logical disks are chunklets arranged as rows of RAID sets. If you’re still a bit hazy on all this, check out part 1 of this 3Par 101 series, where we covered it all in detail.


The 3Par building blocks that we have direct control over are CPGs, VVs and VLUNs, and these are what you will mainly work with day to day; this 3Par 101 series will cover each of these components in turn.


What are CPGs?

Without further ado, let’s crack on with the first building block we need to put our 3Par to work: Common Provisioning Groups (CPGs). A CPG is a pool of logical disks from which virtual volumes draw their space. For simplicity, if you have worked with other storage vendors you can think of a CPG as a RAID pool, or in EVA terminology a disk group. However, that is just to help you understand the concept, as a CPG also has many characteristics that set it apart from a traditional RAID pool.


We know that CPGs are a pool of space, and the smallest building block that makes up that space is a chunklet. CPGs effectively filter which chunklets are selected and how they are arranged, which in turn means CPGs define performance and availability levels. There are three levels of availability: port, magazine and cage. Port is the lowest level and protects against a back-end port failure, magazine protects against the failure of a drive magazine, and cage protects against the loss of an entire disk shelf.


Let’s look at a couple of examples below to help understand how CPG settings define performance and availability. CPG1, for example, uses chunklets from FC disks, with a set size of 3 and an availability of magazine. CPG2 is configured to use NL disks as RAID 6, with a set size of 6 and an availability of cage. So CPG1 will be higher performing but less available than CPG2.

CPG1 – FC, RAID 5, set size = 3, availability = magazine

CPG2 – NL, RAID 6, set size = 6, availability = cage
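To see how set size translates into capacity overhead, here’s a quick back-of-the-envelope sketch (plain Python, nothing 3Par-specific) comparing the two example CPGs. It assumes the set size includes the parity chunklets, with RAID 5 using one parity chunklet per set and RAID 6 using two, in line with the examples later in this post.

```python
# Rough sketch: compare the parity overhead of the two example CPGs.
# Assumption: set size counts data + parity chunklets, with 1 parity
# chunklet per set for RAID 5 and 2 for RAID 6.

def usable_fraction(set_size, parity_chunklets):
    """Fraction of each RAID set that holds data rather than parity."""
    return (set_size - parity_chunklets) / set_size

# CPG1: RAID 5, set size 3 -> 2 data + 1 parity
cpg1 = usable_fraction(3, 1)
# CPG2: RAID 6, set size 6 -> 4 data + 2 parity
cpg2 = usable_fraction(6, 2)

print(f"CPG1 usable: {cpg1:.0%}")
print(f"CPG2 usable: {cpg2:.0%}")
```

Interestingly, with these particular set sizes the two CPGs carry the same parity overhead; the differences between them come from disk type, RAID resilience and availability level rather than usable capacity.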


Upfront a CPG consumes no space; it is only as virtual volumes demand space from it that it will autogrow. This is a big bonus, as it eliminates the need to pre-plan and pre-allocate the size of each CPG you create. However, with multiple CPGs able to consume space from the same set of physical disks, your capacity management does need to be more vigilant. Performance increases the more disks you stripe across, so you will generally want to stripe all your data across all the available disks of that type in the system. Lastly, be aware that CPGs form the tiers that the tiering software (AO) works with, but more on that another time.



  • CPGs are a pool of storage that defines performance and availability characteristics
  • Multiple CPGs can exist on the same set of physical disks
  • CPGs grow on demand, so no space needs to be pre-allocated
  • CPGs should be striped across as many disks of the same type as possible to maximise performance
  • The availability options for a CPG are HA cage, magazine and port. A CPG will default to the highest level of availability it can achieve.
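As a rough illustration of the cage availability point, the sketch below models one simplified assumption of mine (not official HP logic): cage-level HA is achievable when no single cage holds more members of a RAID set than that set can afford to lose, i.e. one chunklet for RAID 1/5 and two for RAID 6. Treat it as a way to reason about layouts rather than a definitive check.

```python
import math

# Simplified model (my assumption, not official HP rules): cage-level HA
# works if losing one whole cage never removes more chunklets from a RAID
# set than its redundancy allows (1 for RAID 1/5, 2 for RAID 6).

def cage_ha_possible(cages, set_size, redundancy):
    """True if each RAID set can be spread so that no cage holds more
    members than the set can afford to lose."""
    members_per_cage = math.ceil(set_size / cages)
    return members_per_cage <= redundancy

print(cage_ha_possible(4, 4, 1))  # RAID 5 3+1 across 4 cages
print(cage_ha_possible(4, 8, 2))  # RAID 6 6+2 across 4 cages
print(cage_ha_possible(4, 6, 1))  # RAID 5 5+1 across 4 cages
```

This mirrors the advice in the comments below: with 4 cages, RAID 5 3+1 or RAID 6 6+2 can keep cage-level HA, while a wider RAID 5 set cannot.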

Creating a CPG – GUI

Enough theory, let’s get on and create a CPG, first in the GUI:

1 In the management pane select Provisioning, then from the Common Actions pane select Create CPG.


2 Next you will see a welcome screen with a lot of useful info on creating CPGs. If you do not want to see this again, tick the skip this step box and click next.


3 The basic information you need to complete when creating a CPG is its name, the device type, the RAID type and the set size. Try to give it a logical name; in this example I have chosen a name that quickly tells me the disk and RAID type. Unless you have reason to change it, leave the set size at the default; increasing the set size will increase usable space but decrease performance.


4 If you tick the advanced options box you will see some more options. I have highlighted in red the common values you are likely to want to change. Be cautious of changing advanced settings such as specifying fast/slow chunklets, as you do not normally need to touch these. You will also see the availability option; this defaults to the safest choice, i.e. cage, if it is available.


5 The next window only appears if you ticked the advanced option, and allows you to filter which disks to include. We want to stripe across as many disks as possible, so just click next here.


6 The last window simply confirms the settings you have chosen. Once happy, just click finish.



Creating a CPG – CLI

Creating a CPG in the CLI is arguably simpler, since it’s all done in a single one-line command.

To create a CPG with all default options, that is using all FC drives, in RAID 1, with an availability of cage:

createcpg cpgname


You will probably want more control over your CPG creation. For example, the following creates a CPG that is RAID 6, with cage-level availability and a set size of 6, on NL disks, named NL_Raid6:

createcpg -t r6 -ha cage -ssz 6 -p -devtype NL NL_Raid6

Let’s break down the CLI options a little:

  • createcpg – the core command
  • -t – specify the RAID level, e.g. r6 for RAID 6
  • -ha – specify the availability level, e.g. cage
  • -ssz – the set size, e.g. a set size of 4 for a RAID 5 set would be 3 data and 1 parity
  • -p -devtype – specify the disk type, e.g. NL for Near Line
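If you ever script CPG creation, it can help to assemble the command line programmatically. The sketch below simply builds the example command from the flags broken down above; the flag names are taken from this post’s examples, so check the createcpg help on your own array for the full option set.

```python
# Sketch: assemble a createcpg command line from a few parameters,
# using only the flags shown in this post (-t, -ha, -ssz, -p -devtype).

def build_createcpg(name, raid=None, ha=None, set_size=None, devtype=None):
    """Return a createcpg command string; omitted options fall back to
    the array defaults, as with the bare `createcpg cpgname` example."""
    parts = ["createcpg"]
    if raid:
        parts += ["-t", raid]
    if ha:
        parts += ["-ha", ha]
    if set_size:
        parts += ["-ssz", str(set_size)]
    if devtype:
        parts += ["-p", "-devtype", devtype]
    parts.append(name)
    return " ".join(parts)

print(build_createcpg("NL_Raid6", raid="r6", ha="cage",
                      set_size=6, devtype="NL"))
# createcpg -t r6 -ha cage -ssz 6 -p -devtype NL NL_Raid6
```

Calling it with no optional arguments reproduces the all-defaults command, `createcpg cpgname`.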


Next time we will be creating a virtual volume and exporting it as a Vlun. If you missed it, catch Part 1 of this series and also Part 3.

To stay in touch with more 3PAR news and tips connect with me on LinkedIn and Twitter.






17 thoughts on “3Par 101 – Part 2 – CPG’s”

  1. Dear Admin

    This is such a great sharing site. I really appreciate it so much.
    I have some questions which need your advice.
    1. I have 1 cage with two controller nodes and 3 additional cages. What CPG HA should I go with as the best practice? Should I create the CPG to have members from all physical disks?
    2. When powering on, should I power on controller nodes first or 3 additional cages first ?

    Best Regards

    1. 1 Yes, create the CPG to use all available disks of the same type. You effectively have 4 cages, so you could use cage-level HA if you created a RAID 5 3+1 CPG or a RAID 6 6+2 CPG; if you wish to have a larger RAID set than this you would need to use the reduced availability level of magazine.

      2 Shelves first, then the controllers.

  2. Hey Dude. What do you think of 3PAR HA Cage? We are going to use the peer persistence replication to another active array, and we are contemplating HA cage. I am potentially building a 3par metro cluster, and my shelf count jumps from 4 to 8 pretty quick. Not sure if it is needed with peer persistence. thoughts?

    1. Hi

      I would see them as complementary to each other rather than an either/or decision. HA Cage provides enhanced local availability, while Peer Persistence provides datacentre availability. I recommend customers buy as many cages upfront as they can afford; cages are relatively cheap in relation to the overall purchase price, and it allows you to optimise your disk layout from the start and avoids a costly rebalancing exercise further down the line. Plus, if you did have the extra cages you would then be able to have cage HA.

      Hope that helps


  3. Hi,

    I have a 3Par 7200 with two cages. Is it possible to set the CPG availability to cage level with just 2, or am I left with magazine level? It does default to cage level when setting up a new CPG.


    1. It depends what RAID level you are using. I assume when you say you have 2 cages, that’s 2 cages total including the controller. If this is the case you could use cage level availability with a RAID 1 CPG. To use RAID 5 or 6 with cage level availability you would need a minimum of 3 cages.

  4. Thanks for the quick reply. I had another look this morning and we have 2 controllers and 3 drive cages consisting of SSD, FC and NL disks. I am looking at RAID 5 and 6, which are currently set to magazine availability. I have basically taken this over, and after reviewing it have noticed that the setup is not right in my mind. The previous person made the set size 4 + 2 on the RAID 6 CPG and 5 + 1 on the RAID 5 CPG, all set to magazine availability. I’m looking at a redesign of the storage for VMware as it’s grown by 50 servers this year.

    Thanks for the info

  5. I am fairly new to 3Par. I just added some NL drives to my 2-cage configuration (controller plus additional cage). The NL drives are online and their state is normal, but when I try to create a CPG it shows no NL space available.

  6. Hey 3pardude Thank u for sharing this knowledge…
    Can u please share a link for the Peer Persistence topic.

  7. In our set up I see Configured Availability as cage and Current Availability as Cage. Is that because it was originally configured as Cage and then was later changed?

    If so, why not show me the one it was later tuned to as the current availability? Unless, I guess, it’s indicating that older data on this volume has a different configuration and all new data will have the “current availability”.

    1. This is basically showing that the requested level of availability is being met, i.e. you don’t have a capacity or other issue preventing you from delivering your requested availability level.
