HPE Primera

HPE Discover – Multiple new storage systems unveiled

Today at HPE Discover a large number of storage announcements were made, including some new product lines.

HPE Primera

This is a brand-new storage system from HPE aimed at the high end. I asked the product manager about the heritage of the system, whether it was 3PAR or Nimble based, and they advised that it is a new, ground-up system combining the best of both. Primera starts in a two-node configuration, available as all-flash and hybrid, and can scale to four nodes capable of delivering 1.5 million IOPS. The influence of the 3PAR and Nimble families is obvious in the feature set and key capabilities being marketed.

Like 3PAR today, the Primera system is based on an active-active design, with all data striped across all components in order to maximise performance and resiliency.

InfoSight

InfoSight, originally developed by Nimble, is used extensively in conjunction with Primera. HPE describes InfoSight as providing global intelligence “Powered with the most advanced AI for infrastructure”. In essence, InfoSight gathers data from the entire install base and uses machine learning to identify potential issues or improvements and inform users. HPE estimates that 90% of issues are non-storage related; InfoSight is able to discover potential issues beyond the storage layer, providing a complete wrap-around for infrastructure services.

HPE Primera features

Primera is powered by a modular OS, which means that the software components of the system are decoupled. Today, planning an upgrade to a storage system is a major undertaking, with an all-or-nothing decision to either upgrade or not. The new modular design decouples data services, allowing individual services to be updated. This should allow safer updates with fewer reboots and a more agile route to updates and deployment, which should in turn enhance innovation and willingness to apply updates.

Possibly the most surprising aspect of the new system is that HPE are guaranteeing 100% availability. This guarantee is available on all Primera systems without qualification. It seems this is enabled not only by HPE’s confidence in the new system but also by the intelligence that InfoSight gives them: it will ensure that customers’ systems are configured correctly, that any issues are remediated before they occur and, where appropriate, that fixes are rolled out to other customers. Primera has also been developed to be simple, even though it is an advanced high-end system, with time to deployment estimated at 20 minutes.

SimpliVity Updates

The SimpliVity hyper-converged platform also received some new models and enhancements:

  • InfoSight – Now works with SimpliVity. As with other implementations, this makes proactive health recommendations and provides other insights beyond the storage layer, for example into noisy VMs
  • Archive node – This is a new 2U hardware offering containing a mixture of HDD and SSD disks. As the name suggests, the node is intended for longer-term storage
  • SimpliVity 325 – This is a new model in the range that takes on a 1U form factor. This dense format is intended for environments where space is at a premium

Nimble Storage dHCI

This is another brand-new system from HPE: a hyper-converged product that enables you to scale storage and compute separately. The system is designed for simplicity; in 5 clicks the entire stack is set up and ready for vCenter.

Simplicity of HCI, flexibility of converged

I think this latest batch of updates shows a significant engineering effort and an understanding of the requirements of today’s datacentre customers, who want flexibility, simplicity and performance.

Stay tuned for more information on the new announcements. Any questions let me know and I will try to find out the answer.

Further Reading

Primera – HPE Storage Blog

HPE Storage dHCI – HPE Storage Blog

Meet HPE Primera

Video – Primera Chalk Talk

Veeam sizing session from VeeamON

Veeam Infrastructure Sizing

One of the key decisions you have to make in planning a new Veeam deployment is sizing the infrastructure. Correctly sizing the infrastructure, so that each of the Veeam infrastructure component servers has sufficient resources to perform its role, will give an optimally configured and trouble-free environment. During VeeamON I attended a session on Veeam sizing by Tim Smith; I have pulled together my notes from the session and reviewed the Veeam documentation to give the following template for considering Veeam sizing decisions.

Veeam Components

The key components of the Veeam infrastructure you will have to consider in your sizing exercise are as follows:

  • Veeam backup server
  • Database server
  • Proxy Server
  • Repository server
  • Enterprise Manager server

Information Required

In order to plan your Veeam backup sizing you will need to gather some information:

  • Size of source data – Required to calculate size of full backup
  • Daily rate of change – Required to understand the size of an incremental backup. This can either be estimated by assuming a change rate of 10% of your total data size, or taken from existing backup software
  • Number of VMs and disks to be protected, and the number of retention points required, will also be needed for disk capacity planning, which I will cover in a future post.
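As a rough worked example, the sizes above can be estimated with a few lines of Python. This is just a sketch of the arithmetic, before any compression or deduplication; the function name and the 10% default are illustrative, not part of any Veeam tooling.

```python
def estimate_backup_sizes(source_data_gb, daily_change_rate=0.10):
    """Rough pre-compression estimate: a full backup is roughly the
    size of the source data, and a daily incremental is the source
    size multiplied by the daily change rate (10% rule of thumb)."""
    full_backup_gb = source_data_gb
    incremental_backup_gb = source_data_gb * daily_change_rate
    return full_backup_gb, incremental_backup_gb

# 10 TB of source data with the assumed 10% daily change rate
print(estimate_backup_sizes(10_000))  # → (10000, 1000.0)
```

If you have figures from existing backup software, substitute the measured change rate for the 10% assumption.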

Sizing Veeam Considerations

Some considerations when sizing your Veeam environment:

  • All sizing information is contained in the user guide and best practice guide
  • Some sizing requirements refer to “per task”. Each disk that is backed up is a task; this is not a per-VM measure
  • Sizing requirements are specific to each Veeam infrastructure component
  • Sizing for each role is cumulative. For example, if a server is both a proxy and a tape server you will need to add the resource requirements for both together
  • The sizing figures given in the user guide are minimums
  • The best practice guide estimates that doubling resources will halve the backup window, although this will of course depend on any other bottlenecks
  • Staggering jobs can help to reduce resource requirements since some components are sized on the basis of number of concurrent jobs
  • Exact requirements will be specific to your environment: start with the recommended values, assess whether they are meeting your requirements and amend as necessary. Then retest until you are satisfied with the result

Veeam Component Resource Requirements

Backup Server

  • 1 CPU core for every 10 actively running jobs
  • 4 GB RAM for every 10 actively running jobs
  • Minimum is 2 CPU cores and 8 GB RAM.
  • Disk space
    • Installation – 40GB
    • Logs – 3 GB of log files generated per 100 protected instances, with a 24-hour RPO
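The per-10-jobs figures above can be turned into a quick sizing helper. This is a sketch of the arithmetic only, not an official calculator; the function name is illustrative.

```python
import math

def backup_server_resources(active_jobs):
    """Backup server sizing per the guidelines above: 1 CPU core and
    4 GB RAM per 10 actively running jobs, with a floor of
    2 cores / 8 GB RAM."""
    units = math.ceil(active_jobs / 10)  # round up to whole 10-job blocks
    cores = max(2, units)                # 1 core per 10 jobs, minimum 2
    ram_gb = max(8, units * 4)           # 4 GB per 10 jobs, minimum 8
    return cores, ram_gb

print(backup_server_resources(35))       # → (4, 16)
print(backup_server_resources(5))        # → (2, 8), the minimum floor
```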

Database Server

SQL Server Express is bundled with Veeam. Consider using the full version of SQL Server if:

  • Protecting more than 500 VMs
  • Using files to tape extensively

When using the full version of SQL:

  • Up to 25 concurrent jobs – 2 CPUs, 4GB RAM
  • Up to 50 concurrent jobs – 4 CPUs, 8GB RAM
  • Up to 100 concurrent jobs – 8 CPUs, 16GB RAM
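A simple tier lookup captures the table above. This is an illustrative helper, assuming job counts beyond the listed tiers take the largest listed configuration.

```python
def sql_server_resources(concurrent_jobs):
    """Pick CPU/RAM for a full SQL Server instance from the
    concurrent-job tiers listed above (illustrative helper)."""
    if concurrent_jobs <= 25:
        return 2, 4    # CPUs, GB RAM
    if concurrent_jobs <= 50:
        return 4, 8
    return 8, 16       # largest listed configuration

print(sql_server_resources(40))  # → (4, 8)
```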

Backup and Replication Console

  • 1 CPU
  • 2 GB RAM

Backup Proxy Server

The proxy server is sized per task, where a task is defined as a VM hard drive. In effect this setting determines the number of disks, and therefore VMs, that can be backed up at once

  • 1 CPU core per task
  • 2GB RAM per task
  • 500MB of disk space per task
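Since the proxy is sized purely per task, the arithmetic is a straight multiplication. The sketch below uses the per-task figures above; the function and key names are illustrative.

```python
def proxy_resources(concurrent_tasks):
    """Per-task proxy sizing from the figures above; a task is one
    VM disk being backed up concurrently (illustrative helper)."""
    return {
        "cpu_cores": concurrent_tasks,       # 1 CPU core per task
        "ram_gb": concurrent_tasks * 2,      # 2 GB RAM per task
        "disk_gb": concurrent_tasks * 0.5,   # 500 MB of disk per task
    }

# 8 disks backed up at once
print(proxy_resources(8))  # → {'cpu_cores': 8, 'ram_gb': 16, 'disk_gb': 4.0}
```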

Backup Repository Server

This is again sized per task

  • 1 CPU core per task
  • 4GB RAM per task
  • Detailed sizing will be covered in a future post
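The “sizing for each role is cumulative” note above can be illustrated by combining the proxy and repository per-task figures for a server that holds both roles. The dictionary and function names are illustrative, not Veeam terms.

```python
# Per-task figures from the proxy and repository sections above
PER_TASK = {
    "proxy":      {"cpu_cores": 1, "ram_gb": 2},
    "repository": {"cpu_cores": 1, "ram_gb": 4},
}

def combined_resources(roles, concurrent_tasks):
    """Add together the per-task requirements of every role a
    server holds, since sizing for each role is cumulative."""
    cpu = sum(PER_TASK[r]["cpu_cores"] for r in roles) * concurrent_tasks
    ram = sum(PER_TASK[r]["ram_gb"] for r in roles) * concurrent_tasks
    return cpu, ram

# A server acting as both proxy and repository, 4 concurrent tasks
print(combined_resources(["proxy", "repository"], 4))  # → (8, 24)
```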

Veeam Backup Enterprise Manager

  • Memory 4GB minimum
  • 2GB of hard disk space

Final Thoughts

Remember these figures are general guidelines; ensure you account for time in your project to test and optimise resource allocation.

I will cover sizing for backup repositories in a future post.

VeeamON – Keynote Summary

Data Importance

  • Data is the lifeblood of an organisation
  • It is a necessity for data to be always available
  • We are experiencing an unprecedented level of data growth
  • Companies need a trusted data partner

Veeam achievements

  • Veeam is now a $1 billion-plus company in terms of annual revenue
  • They have now acquired more than 350,000 customers and continue to grow

Trends

  • Change is constant
  • Cloud, mobile, edge and IoT are current industry trends and have a direct bearing on data management
  • Microsoft took to the stage and, amongst other things, discussed new storage technologies they are investigating to deal with the massive data volumes being produced. Projects being investigated include storing data on glass and within DNA!

Vision

  • Veeam were a key player in the virtualised data centre and now intend to repeat that with hybrid cloud data management
  • Veeam estimates that 73% of its own customers operate a hybrid cloud strategy
  • They intend to keep to the core Veeam qualities of simplicity, reliability and flexibility
  • Vision statement – “To be the most trusted provider of backup solutions that deliver cloud data management”
  • Veeam will offer a data management solution for virtual, physical, SaaS and cloud workloads.

Technical Announcements

  • With Veeam – This new programme bundles Veeam together with HCI and storage vendors’ hardware to allow tighter integration, more features and faster deployment. The products can be ordered with a single SKU and there will be a single point of contact for support
  • Nutanix Mine – Initially announced at Nutanix .NEXT, this is a turnkey solution offering Nutanix and Veeam Availability. Nutanix Mine is the initial offering of With Veeam, along with ExaGrid. Mine is currently in beta testing with select customers
  • Veeam Orchestrator v2 – The automation and orchestration software expands its capabilities, including:
    • Audit and remediate SLA compliance
    • Automatically leverage both backup and replica protection data for use cases such as DevOps, patch and upgrade testing, analytics and more.
    • Role based access control to allow business units to perform their own testing
  • Scale out repository – coming in the future is the ability to use cloud object store as a direct target for backups