VxRail in a nutshell

Dell EMC’s VxRail is selling like hotcakes, and I was lucky enough recently to attend a one-day overview session on the product. The training was really good, so I wanted to share what I had learnt and present a high-level overview of what VxRail is about.

What is VxRail?

VxRail is Dell EMC’s hyper-converged offering. Hyper-converged appliances take a datacenter-in-a-box approach: rather than buying servers, storage and hypervisor separately, they bundle the components into one package. The storage, compute and hypervisor components used by VxRail are as follows:

  • Storage – VMware vSAN 6.6
  • Compute – 14th-generation Dell EMC PowerEdge servers
  • Hypervisor – vSphere 6.5

Together, the above form VxRail 4.5.

What else do you get with VxRail?

You also get some other software bundled in:

  • vCenter
  • vRealize Log Insight
  • RecoverPoint for VMs
  • vSphere Replication
  • vSphere Data Protection

It is worth noting that ESXi licenses are not included.

What is VxRack?

You may also hear VxRack mentioned; this is the bigger brother to VxRail, with the software-defined storage provided by ScaleIO. Networking using VMware NSX is also an option.

How many nodes do you need?

The minimum number of nodes required is three, although four are recommended to allow overhead for failures and maintenance. Four is also the minimum number of nodes required to use erasure coding (RAID-5) rather than mirroring for data protection.
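
The node counts above follow directly from how vSAN places data. A minimal sketch (the multipliers follow vSAN's general sizing rules: 2×FTT+1 hosts for mirroring, four hosts for RAID-5, six for RAID-6; check the official sizing guidance for your exact release):

```python
def min_hosts(method: str, ftt: int = 1) -> int:
    """Minimum number of hosts for a given vSAN fault tolerance method."""
    if method == "RAID-1":   # mirroring: FTT+1 replicas plus a witness per failure tolerated
        return 2 * ftt + 1
    if method == "RAID-5":   # erasure coding, tolerates 1 failure: 3 data + 1 parity
        return 4
    if method == "RAID-6":   # erasure coding, tolerates 2 failures: 4 data + 2 parity
        return 6
    raise ValueError(f"unknown method: {method}")

print(min_hosts("RAID-1"))  # 3 – the minimum cluster size mentioned above
print(min_hosts("RAID-5"))  # 4 – why erasure coding needs a fourth node
```

Running it with `ftt=2` for RAID-1 also shows why larger mirrored clusters grow quickly: five hosts are needed to tolerate two failures.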

How do you manage VxRail?

The system is managed from two places. You will spend most of your time in the vSphere Web Client, since all vSphere management is still performed from there; and because the storage is provided by vSAN, it too is managed within vSphere. The second tool you will need to become familiar with is VxRail Manager.

vSphere Integration

There is the option to create a new vCenter, which will be housed within the hyper-converged infrastructure itself, or to use an external vCenter. The choice between an internal and an external vCenter is made during the initial deployment wizard.

What is VxRail Manager?

VxRail Manager allows you to manage the hardware (i.e. the servers) in a VxRail deployment, plus perform a number of other tasks, including:

  • Deployment – Initial deployment and addition of nodes
  • Update – As this is a hyper-converged system, upgrades of all components can be completed from VxRail Manager.
  • Monitor – Check for events, and monitor resource usage and component status
  • Maintain – Dial home support

The following screenshots show a number of views from within VxRail Manager.

Physical node view

vxrail physical health

Logical node view showing resource usage

vxrail logical health

ESXi component status

esxi health status displayed in VxRail manager

What are the models?

You can see detailed specifications of all the models on the Dell EMC site; this section just covers the major differences between the series:

  • S Series – Hybrid only
  • G Series – Space-saving format; a 2U chassis can contain four nodes
  • E Series – 1U, hence supports less capacity than the other, 2U, nodes
  • P Series – 2U; supports twice the capacity of the E Series and is therefore suited to more demanding workloads
  • V Series – Same spec as the P Series plus GPUs. Optimised for VDI environments

How do you add a node?

You add a node using VxRail Manager, as shown in the screenshot below; this is a non-disruptive process. Hybrid and all-flash models cannot be mixed, and the first three nodes in any cluster must be identical; after this you can mix and match models. That said, there is something to be said for maintaining consistency across the cluster, so that it stays balanced and is easier to manage in the future. The cluster can be scaled to a maximum of 64 nodes.

Adding node to VxRail via VxRail Manager

How do I take a node out for maintenance?

Since the data is stored inside each of the nodes, there are some additional considerations when putting a node into maintenance compared with a traditional SAN. When you put a host into maintenance mode, the default option is ‘Ensure accessibility’, which makes sure all data remains available during maintenance, although redundancy may be reduced.

vSAN summary

vSAN is VMware’s software-defined storage solution: no specialist storage equipment is required, and storage is provided by pooling the disks within each of the ESXi servers. Key concepts to understand:

  • Management – vSAN is managed from within vSphere and is enabled at the cluster level
  • Disk groups – Each disk group consists of a caching disk, which must be an SSD, and 1–7 capacity drives. The capacity drives can be flash drives or, in a hybrid setup, spinning disks. All writes go via the caching disk; in a hybrid configuration, 30% of its capacity is reserved to act as a read cache.
  • vSAN datastore – Disk groups are combined to create a single usable pool of storage called a vSAN datastore
  • Policy driven – Different performance and availability characteristics can be applied to VMs on a per-virtual-disk basis
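
A rough sketch of how those disk-group rules combine into a datastore. The cache device contributes nothing to usable capacity, and in a hybrid disk group roughly 70% of the cache SSD buffers writes with 30% serving reads (the disk sizes below are hypothetical examples, not a recommended configuration):

```python
def datastore_raw_capacity_gb(disk_groups):
    """Sum capacity drives across disk groups; cache drives are excluded."""
    total = 0
    for group in disk_groups:
        capacity_drives = group["capacity_gb"]
        assert 1 <= len(capacity_drives) <= 7, "1-7 capacity drives per group"
        total += sum(capacity_drives)
    return total

def hybrid_cache_split_gb(cache_gb):
    """Approximate write-buffer / read-cache split for a hybrid disk group."""
    return {"write_buffer": cache_gb * 70 // 100, "read_cache": cache_gb * 30 // 100}

groups = [
    {"cache_gb": 400, "capacity_gb": [1200, 1200, 1200]},  # hypothetical node 1
    {"cache_gb": 400, "capacity_gb": [1200, 1200, 1200]},  # hypothetical node 2
]
print(datastore_raw_capacity_gb(groups))  # 7200 GB raw, before protection overhead
print(hybrid_cache_split_gb(400))         # {'write_buffer': 280, 'read_cache': 120}
```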

vSAN availability and data reduction

  • Dedupe and compression – Enabled at the cluster level; not suitable for all workloads. If you have workloads that do not require dedupe/compression, you would need multiple clusters
  • Availability –
    • Availability and performance levels are set by creating policies; you can have multiple policies on a single vSAN datastore
    • Availability is defined in the policy setting ‘fault tolerance method’; the available choices are RAID-1 (Mirroring) and RAID-5/6 (Erasure Coding)
    • RAID-1 is more suited to high-performance workloads and ensures there are two copies of the data across nodes
    • RAID-5/6 stripes the data across nodes; it is more space-efficient but offers reduced performance
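
The space trade-off between those policy choices is easy to quantify. A sketch using the multipliers implied by vSAN's documented layouts (RAID-1 FTT=1 keeps two full copies, RAID-5 is 3 data + 1 parity, RAID-6 is 4 data + 2 parity):

```python
# Raw capacity consumed per unit of usable VM data, by protection method.
OVERHEAD = {
    "RAID-1": 2.0,    # mirror: 2x raw for 1x usable
    "RAID-5": 4 / 3,  # ~1.33x raw for 1x usable
    "RAID-6": 6 / 4,  # 1.5x raw for 1x usable
}

def raw_needed_gb(usable_gb, method):
    """Raw vSAN capacity consumed to store `usable_gb` of VM data."""
    return usable_gb * OVERHEAD[method]

for method in OVERHEAD:
    print(f"{method}: 100 GB usable -> {raw_needed_gb(100, method):.0f} GB raw")
```

This is why RAID-5 is attractive for capacity-driven workloads: the same 100 GB of data that costs 200 GB raw under mirroring costs roughly 133 GB under erasure coding, at the price of extra I/O for parity.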


Death of the SAN?

Recently, market analyst Wikibon predicted the death of the traditional SAN market within the next 10 years. The results of the report were summarised in The Register, the headline news being that existing SAN, NAS and DAS revenues will drop by almost 90% of their current levels over 10 years. So what is causing this change in the fortunes of traditional SANs? Hyper-converged storage, says Wikibon. Hyper-converged storage is a solution that pools storage from inside servers and presents it out as shared storage.

It’s well documented that traditional SAN revenues are being squeezed by the cloud and by newer approaches to storage such as hyper-converged and software-defined. However, the dominance of hyper-converged that is being predicted just doesn’t make sense; let’s look at why.

All or Nothing

We technical people love to see things in absolutes: is it on or off, fast or slow, a one or a zero? In reality, life is not like that; there are in-betweens, or ‘grey areas’. The same applies to product choice: rarely will a product be the right choice for almost everyone; there will be certain circumstances and use cases it is better adapted to than others.
The marketing message from the hyper-converged players has been ‘no SAN’ – back, again, to the IT style of all-or-nothing thinking. But a product type claiming to be the fix-all solution to every problem sets alarm bells ringing for me. I get visions of the latest fitness gadgets you see on shopping channels: “The Arm-Blaster 3000 – the only fitness device you will ever need; works the whole body, relaxes the mind, and makes you a cup of tea at the same time.” As much as the marketing men would like us to believe it, there is no one-size-fits-all solution for any product in any industry; it’s about choice and use cases.

bull worker

A compelling case

A product needs to be truly compelling to get people to leave their existing solution and move across to another. Virtualisation was one of the most compelling cases for changing technology in recent years, and possibly ever. Who wouldn’t want to reduce their datacentre footprint, in turn reducing cooling and power costs, while gaining greater flexibility and agility with greater uptime? Being a salesman for VMware during those days of mass conversion from physical to virtual must have been a dream, as the customer really had to answer the question: why wouldn’t we do this?

I do not believe that the case for swapping to hyper-converged is anywhere near as strong. Will it improve performance and availability and reduce costs for every customer, as virtualisation did? Unlikely. There will certainly be use cases where hyper-converged is the better choice, but not so broadly across the market that it ends up accounting for 90% of it, as Wikibon predicts.

no san

Innovation

What any report that aims to predict the next decade fails to take account of is the innovations that occur during that period, which can be game changers. These are often difficult to spot even when you are a key player: Bill Gates almost missed the importance of the internet, and Nokia completely missed smartphones. As yet we do not know how the development of other storage technologies, such as object storage and cloud computing, will shape the future. Add to this true innovations in the pipeline, like HP’s memristor and other memory-based storage technologies under development, and things could be more different in 10 years than any of us could guess.

Closeup of HP's Memristor devices on a 300mm wafer.

World Peace

Well, world peace may be a little ambitious, but I think it’s fair to say traditional SANs should be able to co-exist with, and indeed benefit from, the development of hyper-converged and other new storage technologies. Rather than all sides claiming their product is the only way, I would like to see developments that allow the two to co-exist. SANs and hyper-converged have different use cases and are not mutually exclusive, so value can be added for the customer by developing co-existence between the two. This could, for example, mean traditional SAN and hyper-converged sharing the same management tools, or data replication between the two; that would make more sense for the customer than having to choose between the technologies. Software-defined storage was supposed to move us away from islands of storage, not create them by drawing even deeper lines in the sand.

dove

Conclusion

The free market at its core is about competition, choice and innovation. Never has the storage industry been more competitive, with the era of SSDs and software-defined storage bringing many new entrants, each with a different approach. Hyper-convergence is one of the big success stories of this new era; it absolutely appears to be here to stay and will continue to grow its market share. But I do not foresee it achieving near market saturation, as that would bring very little benefit to the consumer. Long live innovation and choice.


To stay in touch with more content connect with me on LinkedIn and Twitter.