Discover Storage News Wrap Up

Last week at HPE Discover there was a series of storage announcements for the 3PAR and Nimble platforms, plus more. In this post we will run through the announcements.

Storage Class Memory

3PAR storage class memory Intel Optane

Storage class memory has been announced for both 3PAR and Nimble. Storage class memory (SCM) is intended to bridge the gap between DRAM and the NAND flash found in current generation SSDs. DRAM is fast but expensive and does not scale; NAND flash scales but does not deliver the same performance. Intel have developed their Optane product, which sits in the middle ground of performance and cost between DRAM and NAND flash.

The Intel Optane device will be NVMe connected and act as an extension to the onboard controller cache. Other vendors such as Pure have added NVMe drives into the system, but HPE have focused on adding the NVMe storage within the controller, as they believe the key bottleneck in flash systems is at the controller level and not the disks. The additional cache delivers quicker response times and also reduces the load on back-end disks, as the amount of IO served from them is reduced. Phillip Sellers has written an in-depth piece looking at the addition of SCM to Nimble and 3PAR.
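To illustrate why extending the controller cache helps, here is a toy effective-latency calculation. The latency figures are invented orders of magnitude for illustration only, not measured or vendor-published numbers for any array, and the hit rates are hypothetical.

```python
# Illustrative effective read latency for a tiered cache hierarchy.
# All latency figures below are rough, assumed orders of magnitude.

DRAM_US = 0.1      # controller DRAM cache
SCM_US = 10.0      # NVMe-attached SCM (e.g. Optane) cache extension
NAND_US = 100.0    # back-end NAND flash SSDs

def effective_latency_us(dram_hit, scm_hit):
    """Weighted average latency given the hit rate at each tier."""
    nand_rate = 1.0 - dram_hit - scm_hit
    return dram_hit * DRAM_US + scm_hit * SCM_US + nand_rate * NAND_US

# Without the SCM tier, DRAM misses go straight to NAND.
baseline = effective_latency_us(dram_hit=0.5, scm_hit=0.0)
# With an SCM tier absorbing a large share of the former NAND reads.
with_scm = effective_latency_us(dram_hit=0.5, scm_hit=0.4)
print(f"baseline: {baseline:.2f} us, with SCM: {with_scm:.2f} us")
```

Even with made-up numbers, the point holds: serving reads from a middle tier that is an order of magnitude faster than NAND pulls the average response time down sharply.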

InfoSight

There were a number of announcements around the InfoSight platform. First up was 3PAR performance insights, which allows a performance view of not only your storage but also the VMware layer. We’ve discussed this in detail previously, including how to implement it, but in short it is free for customers and allows deep insights into your VMware environment, such as being able to find the VMs with the highest resource utilisation.

The Nimble storage platform also benefited from a couple of InfoSight enhancements. Nimble Resource Planner allows load modelling. As a storage admin I really like the sound of this, as so often we don’t know what the impact of an additional workload will be until it’s deployed. The modelling software predicts the impact in terms of both performance and capacity; below is an example of running through this.

Choosing workload to model

Nimble resource planner1

Impact on capacity is modelled 

Nimble resource planner 2

Also modelled is the impact on CPU and cache

Nimble resource planner 4
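The kind of headroom check shown in the screenshots above can be sketched as a simple projection: add a candidate workload's demands to current utilisation and flag anything that would exceed a limit. All figures, resource names and thresholds here are made up for illustration; this is not how InfoSight's model works internally.

```python
# Toy load-modelling check: project utilisation after adding a workload
# and report which resources would be pushed past their limits.

def model_workload(current, workload, limits):
    """Return projected utilisation and a list of resources over their limit."""
    projected = {k: current[k] + workload.get(k, 0) for k in current}
    breaches = [k for k, v in projected.items() if v > limits[k]]
    return projected, breaches

# Hypothetical example array and candidate workload.
current = {"capacity_tb": 40, "cpu_pct": 55, "cache_pct": 60}
limits  = {"capacity_tb": 60, "cpu_pct": 80, "cache_pct": 85}
new_vm  = {"capacity_tb": 15, "cpu_pct": 30, "cache_pct": 10}

projected, breaches = model_workload(current, new_vm, limits)
print(projected)   # e.g. {'capacity_tb': 55, 'cpu_pct': 85, 'cache_pct': 70}
print(breaches)    # e.g. ['cpu_pct'] - CPU would be the constraint
```

The real product predicts performance impact from learned behaviour across its installed base rather than simple addition, but the workflow is the same: model first, deploy second.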

Nimble cross stack analytics brings enhancements to the performance recommendations already delivered by InfoSight. The focus of this new feature is to bring specific AI-driven recommendations to the environment. An example of the type of recommendation is shown in the screenshot below.

Nimble cross stack analytics

The InfoSight product is still owned by the storage segment of HPE, but its expansion into other products is continuing apace. One of HPE’s key objectives is to deliver a cloud-like experience and make things consumable as services. InfoSight is a key part of this strategy, ensuring uptime and making proactive recommendations to eliminate issues before they occur.

You can see a summary of the InfoSight announcements in this chalk talk.

Nimble Peer Persistence

Synchronous replication has been added as a feature to the Nimble storage platform. The feature is being called Peer Persistence, since HPE are also introducing support for metro clusters at the same time. I have covered Peer Persistence in depth previously for 3PAR; it essentially allows for the creation of a stretched cluster across geographically separate data centres. The Nimble implementation of Peer Persistence will initially support metro clusters for SQL and VMware.
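The defining property of synchronous replication is that the host only gets its write acknowledgement once both sites hold the data, which is what makes a zero-data-loss stretched cluster possible. A minimal sketch of that write path, with invented site names and in-memory stand-ins for the arrays:

```python
# Minimal sketch of a synchronous replication write path: commit locally,
# replicate to the partner site, and only then acknowledge the host.

class Site:
    """Stand-in for an array at one data centre (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data
        return True

def synchronous_write(primary, secondary, lba, data):
    """Ack the host only after BOTH sites hold the write (RPO of zero)."""
    ok_local = primary.write(lba, data)
    ok_remote = secondary.write(lba, data)  # over the inter-site link
    return ok_local and ok_remote

dc1, dc2 = Site("DC1"), Site("DC2")
acked = synchronous_write(dc1, dc2, lba=100, data=b"payload")
assert dc1.blocks[100] == dc2.blocks[100]  # both sites consistent at ack time
```

The cost of this guarantee is that every write pays the round-trip latency to the second site, which is why synchronous replication is a metro-distance feature rather than a long-haul one.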

Cloud Volumes Enhancements

Nimble Cloud Volumes were announced a couple of years ago. These cloud volumes offer block storage in the cloud with the maturity and depth of services you would expect on premises. Announced this time round was support for containerised workloads using both Docker and Kubernetes. Also announced was extended regional support, so cloud volumes will be available in the UK and Ireland in 2019.

Cohesity

HPE have also enhanced their relationship with secondary data platform Cohesity. Cohesity is available bundled with ProLiant Gen10 servers through HPE partners. These validated Cohesity and HPE bundles have added support for HPE Apollo and HPE DL380 servers.

You can see a further summary of the announcements in this blog post from Calvin Zito.


HPE Discover this week!

HPE Discover Madrid

This week sees the HPE Discover event take place in Madrid, Spain. If the event is new to you, think of it as HPE’s largest annual trade show; this is generally where all the big announcements are made. I am lucky enough to be attending as a guest blogger, so I will be looking to get more info on the storage news and everything else.

Ways to follow along

  • If you are attending, be sure to catch the spotlight storage session – Intelligent storage has arrived. Unlock insights from your data. You can see further recommended storage sessions on the Around the Storage Block blog
  • On social media using the #HPEDiscover tag
  • You can see the keynotes broadcast live at the HPE Discover home page
  • You can follow the other independent bloggers using this Twitter list
  • Of course you can follow along on d8tadude.com, and subscribe to e-mail updates
  • Also make sure to subscribe to my YouTube channel, where I will be chatting to those in the know and bringing the latest news

VxRail in a nutshell

Dell EMC’s VxRail is selling like hotcakes. I was lucky enough recently to attend a one-day overview session on the product. The training was really good, and I wanted to share what I learnt in a high-level overview of what VxRail is about.

What is VxRail?

VxRail is Dell EMC’s hyper-converged offering. Hyper-converged appliances allow a datacentre-in-a-box approach: rather than buying servers, storage and hypervisor separately, they bundle the components into one package. The storage, compute and hypervisor components used by VxRail are as follows:

  • Storage – VMware vSAN 6.6
  • Compute – 14th generation Dell EMC PowerEdge servers
  • Hypervisor – vSphere 6.5

Together the above form VxRail 4.5.

What else do you get with VXRail?

You also get some other software bundled in:

  • vCenter
  • vRealize Log Insight
  • RecoverPoint for VMs
  • vSphere Replication
  • vSphere Data Protection

It is worth noting that the ESXi licences are not included.

What is VxRack?

You may also hear VxRack mentioned; this is the bigger brother to VxRail, with the software-defined storage provided by ScaleIO. Networking using VMware NSX is also an option.

How many nodes do you need?

The minimum number of nodes required is three, although four is recommended to allow overhead for failures and maintenance. Four is also the minimum number of nodes required to use erasure coding rather than mirroring for data protection.
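The node minimums follow from how vSAN places data: RAID-1 with one failure to tolerate needs two replicas plus a witness on separate hosts, RAID-5 stripes three data components plus one parity, and RAID-6 uses four data plus two parity. A quick sanity-check sketch (the function and its name are illustrative, not a VMware API):

```python
# Which vSAN protection schemes a cluster of a given size can support.
# Minimums: RAID-1 FTT=1 needs 3 hosts (2 replicas + witness),
# RAID-5 needs 4 (3 data + 1 parity), RAID-6 needs 6 (4 data + 2 parity).

MIN_NODES = {"raid1": 3, "raid5": 4, "raid6": 6}

def supported_schemes(nodes):
    """Return the protection schemes available at this cluster size."""
    return [scheme for scheme, n in MIN_NODES.items() if nodes >= n]

print(supported_schemes(3))  # ['raid1'] - mirroring only
print(supported_schemes(4))  # ['raid1', 'raid5'] - erasure coding unlocked
```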

How do you manage VxRail?

The system is managed from two places. You will spend most of your time in the vSphere web console, since all vSphere management is still performed from there; and because the storage is provided by vSAN, it too is managed within vSphere. The second tool you will need to become familiar with is VxRail Manager.

vSphere Integration

There is the option to create a new vCenter, which will be housed within the hyper-converged infrastructure itself, or to use an external vCenter. The choice of an internal or external vCenter is made during the initial deployment wizard.

What is VxRail Manager?

VxRail Manager allows you to manage the hardware, i.e. the servers in a VxRail deployment, plus perform a number of other tasks including:

  • Deployment – Initial deployment and addition of nodes
  • Update – As this is a hyper-converged system, upgrades of all components can be completed from VxRail Manager
  • Monitor – Check for events, and monitor resource usage and component status
  • Maintain – Dial home support

The following shows a number of screenshots from within VxRail Manager.

Physical node view

vxrail physical health

Logical node view showing resource usage

vxrail logical health

ESXi component status

esxi health status displayed in VxRail manager

What are the models?

You can see detailed specifications of all the models on the Dell EMC site; this section just covers the major differences between the series.

  • S Series – Hybrid only
  • G Series – Space-saving format; a 2U chassis can contain four nodes
  • E Series – 1U, so it supports less capacity than the other, 2U, models
  • P Series – 2U; supports twice the capacity of the E Series and is therefore suited to more demanding workloads
  • V Series – Same spec as the P Series plus GPUs; optimised for VDI environments

How do you add a node?

You add a node using VxRail Manager, as shown in the screenshot below; this is a non-disruptive process. Hybrid and all-flash models cannot be mixed, and the first three nodes in any cluster must be identical; after this you can mix and match models, although there is something to be said for maintaining consistency across the cluster so that it stays balanced, which will probably make it easier to manage in the future. The cluster can be scaled to a maximum of 64 nodes.

Adding node to VxRail via VxRail Manager

How do I take a node out for maintenance?

Since the data is stored inside each of the nodes, there are some additional considerations when putting a node into maintenance compared with a traditional SAN. When you put a host into maintenance mode, the default option is “Ensure accessibility”, which makes sure all data remains available during maintenance, although redundancy may be reduced.

vSAN summary

vSAN is VMware’s software-defined storage solution; no specialist storage equipment is required, and storage is provided by pooling the disks within each of the ESXi servers. Key concepts to understand:

  • Management – vSAN is managed from within vSphere and is enabled at the cluster level
  • Disk groups – Each disk group consists of a caching disk, which must be an SSD, and 1-7 capacity drives. The capacity drives can be flash drives or, in a hybrid setup, spinning disks. All writes are made via the caching disk; in a hybrid setup a portion of its capacity is also reserved to act as a read cache
  • vSAN datastore – Disk groups are combined together to create a single usable pool of storage called a vSAN datastore
  • Policy driven – Different performance and availability characteristics can be applied on a per-VM, or even per virtual disk, basis
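The capacity arithmetic that falls out of the disk group concept is worth spelling out: cache disks never contribute to usable space, only the 1-7 capacity drives in each group do. A small sketch with arbitrary example disk sizes:

```python
# Rough raw-capacity arithmetic for a vSAN datastore: only the
# capacity-tier drives count, the caching SSD in each group does not.

def datastore_raw_tb(disk_groups):
    """Sum the capacity-tier drives across all disk groups."""
    total = 0.0
    for group in disk_groups:
        drives = group["capacity_tb"]
        assert 1 <= len(drives) <= 7, "each disk group has 1-7 capacity drives"
        total += sum(drives)
    return total

# Example: four nodes, one disk group each,
# 1 x 400 GB cache SSD + 5 x 1.92 TB capacity drives per group.
groups = [{"cache_tb": 0.4, "capacity_tb": [1.92] * 5} for _ in range(4)]
print(datastore_raw_tb(groups))  # 38.4 TB raw, before protection overhead
```

Note this is raw capacity; the protection method chosen in policy (mirroring or erasure coding, covered next) determines how much of it is actually usable.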

vSAN availability and data reduction

  • Dedupe and compression – Enabled at the cluster level; not suitable for all workloads. If you have a mix of workloads that do and do not require dedupe/compression, you would need multiple clusters
  • Availability –
    • Availability and performance levels are set by creating policies; you can have multiple policies on a single vSAN datastore
    • Availability is defined by the fault tolerance method policy setting; the available choices are RAID-1 (mirroring) and RAID-5/6 (erasure coding)
    • RAID-1 is better suited to high-performance workloads and ensures there are two copies of the data across nodes
    • RAID-5 – Stripes the data, with parity, across nodes; more space efficient but with reduced performance
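The space-efficiency difference between the two fault tolerance methods is easy to quantify at one failure to tolerate: RAID-1 writes two full copies (a 2x overhead), while RAID-5 stripes three data components plus one parity (roughly 1.33x). A quick illustrative calculation, using an arbitrary example raw capacity:

```python
# Usable capacity under the two vSAN fault tolerance methods at FTT=1.
# RAID-1 mirroring = 2 full copies; RAID-5 erasure coding = 3 data + 1 parity.

def usable_tb(raw_tb, method):
    """Divide raw capacity by the protection overhead factor."""
    overhead = {"raid1": 2.0, "raid5": 4.0 / 3.0}[method]
    return raw_tb / overhead

raw = 38.4  # example raw datastore capacity in TB
print(f"RAID-1: {usable_tb(raw, 'raid1'):.1f} TB usable")  # 19.2 TB
print(f"RAID-5: {usable_tb(raw, 'raid5'):.1f} TB usable")  # 28.8 TB
```

The same trade-off appears the other way round in performance: every RAID-5 write touches more components than a mirrored write, which is why RAID-1 remains the recommendation for the most demanding workloads.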