HPE Cloud Volumes and Multi-Cloud

I recently attended a briefing from HPE on Cloud Volumes. Cloud Volumes is a technology that already existed within the Nimble portfolio before the HPE acquisition. Two years on from becoming part of HPE, what exactly is Cloud Volumes? What is it useful for, and how does it fit within HPE's storage and cloud strategy?

You bring the compute, I’ll bring the storage

I love a barbecue party with a group of friends, especially when everyone brings a bit of their own food. You never know what you’re going to get and hopefully you end up with a better mix of food than if you provide it all yourself. I always bring the crisps because it’s easy.

HPE Cloud Volumes offers to bring block storage to the cloud party, with the compute provided by one of the major cloud providers: Google, AWS or Azure. We will deal with the full process in detail later, but in short you create a Cloud Volumes account through a web portal, create the VM with the cloud provider, then select HPE Cloud Volumes as the disk target.

HPE provides the storage by co-locating in, or locating very near to, the cloud hyperscalers' data centres in order to minimise latency. The HPE data centres contain HPE Nimble storage arrays, which provide the underlying storage presented through to Cloud Volumes. HPE uses its own InfoSight predictive analytics system to ensure availability, capacity and scale requirements are met.

There is a choice between all-flash or hybrid disks, both of which offer the same level of IOPS, but the IOPS guarantee differs between them: 95% for hybrid and 99% for flash. Cloud Volumes is available globally, but the guarantee only applies in regions where HPE data centres exist.

Why?

The first question many people have is: why don't I just use the storage natively available from the cloud provider? Cloud Volumes was initially born out of the desire to provide block storage to more traditional applications. Many cloud-native apps were designed to take advantage of cloud storage, such as object-based S3 or Glacier for archive, and those born-in-the-cloud apps would receive little benefit from Cloud Volumes. Container-based applications, however, can take advantage of the persistent storage offered by Cloud Volumes; currently Docker, Kubernetes and Mesosphere are supported.

Traditional applications such as Oracle and SQL grew up in an on-premises environment traditionally based on block storage. Cloud Volumes was initially designed to provide the availability, performance and features these applications require. It's fair to say Cloud Volumes has evolved since the initial vision to become a more complete offering for an enterprise's cloud strategy. Let's look at some of the ways HPE envisages this:

Data mobility

Cloud lock-in is a concern of many companies. Cloud providers do not natively provide the tools to easily move data to another provider. Then there is data gravity: moving large volumes of data is a physical challenge, which can make operating a multi-cloud strategy difficult.

With Cloud Volumes, since your compute and storage are provided separately, the compute provider can easily be changed.  It is just a case of spinning up a new VM and pointing it back at the Cloud Volume. This data mobility allows the right data to be in the right place at the right time and allows these changes to be made on the fly.  This ability to provision storage to multiple destinations on the fly could facilitate cloud bursting, allowing for peaks in demand.

Cloud migration

All Nimble systems natively support replication to Cloud Volumes. This provides a very simple method for cloud migration, plus the ability to bring data back on-premises if necessary.

Cost

One of the barriers to data movement with the current cloud providers is cost, where there are egress charges to pay for pulling data back from a cloud provider. HPE has no cost associated with any movement of data, be that to the cloud, back from the cloud or indeed to completely exit the service.

Data services

Cloud Volumes is able to provide instant snapshots and clones. It uses snapshot technology which only records changed blocks, so you are able to store a large number of clones and snapshots whilst only paying for the changed data.
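The economics of changed-block snapshots can be sketched with some simple arithmetic. The figures below are purely illustrative (they are not HPE pricing): a 1024 GiB volume with ten snapshots, each recording about 5 GiB of changed blocks.

```shell
#!/bin/sh
# Illustrative arithmetic only, not HPE pricing.
BASE_GIB=1024        # size of the base volume
SNAPSHOTS=10         # number of snapshots retained
CHANGED_GIB_EACH=5   # changed blocks captured per snapshot (assumed)

# Changed-block snapshots bill the base volume plus only the deltas...
CHANGED_ONLY=$((BASE_GIB + SNAPSHOTS * CHANGED_GIB_EACH))
# ...versus what a full copy of the volume per snapshot would consume
FULL_COPIES=$((BASE_GIB * (SNAPSHOTS + 1)))

echo "$CHANGED_ONLY GiB billed vs $FULL_COPIES GiB for full copies"
```

In this toy example the changed-block approach stores ten snapshots for roughly 1074 GiB rather than over 11 TiB, which is why large numbers of snapshots and clones stay affordable.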

Secure 256-bit AES encryption is also available.

DR and test/dev

Cloud Volumes could easily be integrated within a DR or test/dev solution. Snapshots taken on site can be replicated to the cloud for further testing or DR availability.

Visibility

InfoSight is able to track and monitor not only your local Nimble arrays but also Cloud Volumes, providing a single place to predict and manage storage.

Cloud Volumes Walkthrough

Setting up a Cloud Volume is a two-step process: first create the Cloud Volume, then attach it to a VM. Let's take a look at the process:

Creating a Cloud Volume 

  • Open the Cloud Volumes website
  • Choose to create a new Cloud Volume, then run through a few settings:

create cloud volume 1

    • Cloud provider –  choose cloud provider, region and cloud network
    • Volume settings – name the volume, choose the required size, performance in terms of IOPS and hybrid or flash
    • Protection – Select to enable a snapshot schedule or encryption

create cloud volume 2

Attach to a VM

  • In the Cloud Volumes website, choose to attach to a VM. This will bring up a script you can copy
  • In the cloud VM, run the script; it completes all the operations needed to attach the Cloud Volume to the VM for you

cloud volumes connection script

Once the volume is attached, you treat it like any other new disk: bring it online, format it and so on.
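On a Linux VM those post-attach steps look roughly like the sketch below. This is an illustration only: Cloud Volumes presents block storage over iSCSI, but the target IQN, portal address and device name here are placeholders; the real values come from the portal's connection script. DRY_RUN=1 makes the sketch print the commands instead of running them.

```shell
#!/bin/sh
# Illustrative sketch only: placeholder IQN, portal IP and device name.
DRY_RUN=1
TARGET_IQN="iqn.example:cloud-volume-demo"   # hypothetical target name
PORTAL_IP="203.0.113.10"                     # hypothetical discovery portal
DEVICE="/dev/sdb"                            # device that appears after login

run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# Discover and log in to the iSCSI target
run iscsiadm -m discovery -t sendtargets -p "$PORTAL_IP"
run iscsiadm -m node -T "$TARGET_IQN" -p "$PORTAL_IP" --login
# Format the new disk and mount it
run mkfs.ext4 "$DEVICE"
run mkdir -p /mnt/cloudvolume
run mount "$DEVICE" /mnt/cloudvolume
```

In practice the script copied from the portal handles the discovery and login for you, so these steps are mainly useful for understanding what it is doing under the covers.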

Take a look at this complete walkthrough video.

Final Thoughts

Today Cloud Volumes provides an interesting data services and data mobility play which could offer organisations with a hybrid cloud or multi-cloud policy greater flexibility. Using native cloud storage requires organisations to go “all in”, committing data to a given cloud provider due to the pull of data gravity. Cloud Volumes removes that major decision point of where data is stored, since it can easily be re-targeted, be that due to a change in cloud provider or simply an operational need such as cloud bursting or testing.

HPE continues to evolve from a product company into a solutions provider. In isolation Cloud Volumes is an interesting product; greater integration with the rest of the portfolio, plus predictive analytics that makes data placement recommendations, could allow for a complete storage management strategy. Let's see how this evolves.

Sponsored post – opinions are my own

HPE iLO Service port – How to use it?

HPE ProLiant Gen10 servers have iLO 5, which now includes a new iLO USB port on the front. This new iLO 5 service port has some useful features and use cases, which guest blogger Armin Kerl is going to show you how to use. You can learn more about Armin in the guest blogger hall of fame.

iLO service port

What is the HPE iLO 5 Service Port?

The iLO service port can be used for:

  • Downloading the Active Health System Log to a supported USB flash drive.
  • Connecting a client (such as a laptop) with a supported USB-to-Ethernet adapter in order to access the iLO web interface, remote console, CLI, iLO RESTful API, or scripts.

How to use the iLO 5 Service Port

Getting connected is a simple two-step process:

  1. Use a supported USB-to-Ethernet adapter to connect a client laptop to the Service Port (the USB port labeled iLO, on the front of the server).

The iLO Service Port supports USB Ethernet adapters that contain one of the following chips by ASIX Electronics Corporation: AX88772, AX88772A, AX88772B or AX88772C. Hewlett Packard Enterprise recommends the HPE USB-to-Ethernet adapter, part number Q7Y55A.

In this example I am using this no-name adapter:

USB 2 LAN Dongle

  2. Connect to iLO through a browser using its fixed IPv4 address: 169.254.1.2.
    (The client will get a DHCP IP address from iLO.)
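Once the adapter has its DHCP lease from iLO, reachability can be checked from the laptop's command line. A hedged sketch: the fixed 169.254.1.2 address comes from the step above, /redfish/v1/ is the standard root of the iLO RESTful (Redfish) API, and DRY_RUN=1 prints the commands rather than running them, since this only works when actually cabled to the service port.

```shell
#!/bin/sh
ILO_IP="169.254.1.2"   # fixed service-port address of iLO
DRY_RUN=1              # set to 0 on a laptop actually cabled to the port

run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# Confirm the USB adapter picked up an address from iLO's DHCP
run ip addr show
# Check basic reachability, then query the iLO RESTful (Redfish) API root;
# -k skips verification of iLO's self-signed certificate
run ping -c 1 "$ILO_IP"
run curl -k "https://$ILO_IP/redfish/v1/"
```

If the curl call returns the Redfish service root JSON, the browser and remote console will work too.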

iLO Login page

After logging in, we see the standard iLO menu:

iLO Menu

Accessing the iLO Remote Console is also possible:

iLO Remote Console

Here is my laptop attached to the ProLiant server:

Laptop connected to ILO 5 service port

When to use the service port

I see two main use cases.

  1. Setting up a new server in the lab
    In the past we connected the PC via the iLO LAN port.
    However, there are problems with this approach:
    – If we use our company LAN, there is DHCP, but then I cannot configure the customer IP.
    – If we use the customer IP range, there is no DHCP and we need to attach a monitor and keyboard first.
    Now I can patch the server to my PC, simply connect to the fixed IP address, and configure the server's iLO with the customer IP address.
  2. At a customer site
    Most customers no longer use KVM switches and consoles; they use iLO for remote access. But if an iLO connection is not possible (unknown IP, not cabled), they have to attach a local monitor, keyboard and mouse. Now we can simply plug in the USB-to-LAN adapter and connect a laptop.

My Enhancement

I tried to connect the USB-to-LAN adapter via a nano Wi-Fi access point.

This was the particular model:

Nano Router

Here is the Nano Router config:

WiFi information

WiFi IP address

iLO 5 with WiFi

Now I am able to connect to the server without any cables, using Wi-Fi.
This works not only in my lab but also in the workplace. 🙂


Veeam & d8taDude Webinar logo

Join me for a live webinar – Optimizing HPE & Veeam for the AlwaysOn Enterprise

I am working with Veeam to present a live webinar next week titled Optimizing HPE & Veeam for the AlwaysOn Enterprise, in which I am going to cover my top tips for getting the best performance from your 3PAR systems in conjunction with Veeam, and how HPE and Veeam work together.

On the 3PAR side, you will be able to learn about performance optimisation in the following areas:

  • Choosing the 3PAR model for your needs
  • Hardware layout best practices
  • Getting the most from the available performance-enhancing tools – Adaptive Optimization, Dynamic Optimization, Adaptive Flash Cache
  • Hardware upgrade considerations
  • Plus other performance tips

Russell Noalan will be joining me from Veeam to cover:

  • How Veeam and HPE are better together
  • How Veeam can improve backup performance in HPE storage infrastructures
  • What’s coming in v10 of Veeam Backup & Replication

It really does look like an interesting session. I definitely want to see what's coming in Veeam v10, so please do join us by registering for the webinar. See you there!